Android Gesture Operations: Handling Fling, Scroll, and Other Touch Screen Events


Most of the time, touch screen fling, scroll, and other gesture operations greatly improve the user experience of an application: for example, scrolling the screen in a browser with a scroll gesture, or flipping pages in a reader with a fling. In the Android system, gesture recognition is performed through the GestureDetector.OnGestureListener interface. However, I (William) did not find an example of it after reading the official Android documentation; the TouchPaint sample in the API Demos only covers onTouch event handling and does not involve gestures. Many people in the android-developers discussion group have had questions similar to mine. Combining the methods they mentioned with my own experiments, I will give you a brief introduction to implementing gesture recognition in Android.
First, let's clarify some concepts. First, Android's event handling mechanism is based on listeners; for the touch screen events discussed today, we use OnTouchListener. Second, every View subclass can attach a listener for a given type of event through methods such as setOnTouchListener() and setOnKeyListener(). Third, a listener is generally provided as an interface containing one or more abstract methods, such as onTouch() and onKey(), which we must implement. Once an event listener is set on a View and its abstract methods are implemented, the program can dispatch specific events to that View, and the callback gives the appropriate response.
Let's look at a simple example, using the simplest TextView to illustrate (in fact, it is little different from the skeleton generated by the ADT).

Java code
public class GestureTest extends Activity implements OnTouchListener {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // init TextView
        TextView tv = (TextView) findViewById(R.id.page);

        // set OnTouchListener on TextView
        tv.setOnTouchListener(this);

        // show some text
        tv.setText(R.string.text);
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        Toast.makeText(this, "onTouch", Toast.LENGTH_SHORT).show();
        return false;
    }
}

We set an OnTouchListener on the TextView instance tv; because the GestureTest class implements the OnTouchListener interface, we can simply pass this as the argument. The onTouch() method implements the abstract method declared in OnTouchListener; we only need to add our logic there to respond when the user touches the screen, as we do here by showing a toast.

Here, we can use MotionEvent's getAction() method to get the type of the touch event, which includes ACTION_DOWN, ACTION_MOVE, ACTION_UP, and ACTION_CANCEL. ACTION_DOWN means pressing the touch screen, ACTION_MOVE means moving on the screen after pressing it, and ACTION_UP means releasing the touch screen; ACTION_CANCEL is not triggered directly by the user (so it is outside the scope of today's discussion; see ViewGroup.onInterceptTouchEvent(MotionEvent)). By obtaining coordinates with the getRawX(), getRawY(), getX(), and getY() methods for the different user actions, we can implement operations such as dragging a button or dragging a scroll bar. For details, refer to the MotionEvent class documentation or the TouchPaint example.
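Before moving on to gestures, here is a minimal sketch (not from the original article) of how getAction() and the coordinate methods combine in practice. The mStartX/mStartY fields and the drag placeholder are hypothetical, just to show the pattern:

Java code
private float mStartX, mStartY;

@Override
public boolean onTouch(View v, MotionEvent event) {
    switch (event.getAction()) {
    case MotionEvent.ACTION_DOWN:
        // finger pressed: remember where the drag started
        mStartX = event.getX();
        mStartY = event.getY();
        break;
    case MotionEvent.ACTION_MOVE: {
        // getX()/getY() are relative to the view;
        // getRawX()/getRawY() would give absolute screen coordinates
        float dx = event.getX() - mStartX;
        float dy = event.getY() - mStartY;
        // ... move a button or a scroll bar by (dx, dy) here ...
        break;
    }
    case MotionEvent.ACTION_UP:
        // finger released
        break;
    }
    return true; // consume the event
}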
Back to today's topic: how do we recognize the user's gestures once we have captured the touch events? Here we need the help of the GestureDetector.OnGestureListener interface, so our GestureTest class becomes the following.

Java code
public class GestureTest extends Activity implements OnTouchListener,
        OnGestureListener {
    ...
}

Then, in the onTouch() method, we call the GestureDetector's onTouchEvent() method, handing the captured MotionEvent over to the GestureDetector to analyze whether a suitable callback should handle the user's gesture.

Java code
@Override
public boolean onTouch(View v, MotionEvent event) {

    // the GestureDetector will analyze the given motion event
    return mGestureDetector.onTouchEvent(event);
}
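Note that mGestureDetector must exist as a field of the activity; the article never shows its declaration. A minimal sketch, assuming the GestureDetector(Context, OnGestureListener) constructor:

Java code
private GestureDetector mGestureDetector;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    TextView tv = (TextView) findViewById(R.id.page);
    tv.setOnTouchListener(this);

    // the activity implements OnGestureListener, so pass "this" as the listener
    mGestureDetector = new GestureDetector(this, this);
}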

Next, we implement OnGestureListener's six abstract methods: onDown(), onShowPress(), onSingleTapUp(), onScroll(), onLongPress(), and onFling(). I have described the gesture each method represents in the comments; you can see it at a glance.
Java code

// Triggered when the user touches the screen;
// consists of one MotionEvent ACTION_DOWN
@Override
public boolean onDown(MotionEvent e) {
    Toast.makeText(this, "onDown", Toast.LENGTH_SHORT).show();
    return false;
}

// Triggered when the user has touched the screen but has not yet released
// or dragged; note the difference from onDown(): this one emphasizes the
// pressed state without release or drag. Consists of one MotionEvent ACTION_DOWN.
@Override
public void onShowPress(MotionEvent e) {
}

Triggered when the user releases after touching the screen; consists of one MotionEvent ACTION_UP.

@Override
public boolean onSingleTapUp(MotionEvent e) {
    return false;
}

Triggered when the user presses the touch screen, moves quickly, and releases; consists of one MotionEvent ACTION_DOWN, multiple ACTION_MOVEs, and one ACTION_UP.

@Override
public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX,
        float velocityY) {
    return false;
}

// Triggered when the user presses the touch screen and holds it;
// consists of one MotionEvent ACTION_DOWN held for a long time
@Override
public void onLongPress(MotionEvent e) {
}

// Triggered when the user presses the touch screen and drags; consists of
// one MotionEvent ACTION_DOWN and multiple ACTION_MOVEs
@Override
public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX,
        float distanceY) {
    return false;
}

Let's try handling an onFling() event. The meaning of each onFling() parameter is written in the comments. Note that in the fling handling code, besides the coordinates contained in the first ACTION_DOWN that starts the fling and in the last ACTION_MOVE, we can also use the velocity along the X or Y axis as a condition. For example, in the code below we only handle flings where the user moves more than FLING_MIN_DISTANCE (100) pixels on the X axis at a speed of more than FLING_MIN_VELOCITY (200) pixels per second.

// constants matching the prose above (not defined in the original listing):
private static final int FLING_MIN_DISTANCE = 100;  // pixels
private static final int FLING_MIN_VELOCITY = 200;  // pixels per second

@Override
public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX,
        float velocityY) {

    // Parameters:
    // e1: the first ACTION_DOWN MotionEvent
    // e2: the last ACTION_MOVE MotionEvent
    // velocityX: velocity along the X axis, in pixels per second
    // velocityY: velocity along the Y axis, in pixels per second

    // Trigger condition: the X-axis displacement is greater than
    // FLING_MIN_DISTANCE, and the speed exceeds FLING_MIN_VELOCITY
    if (e1.getX() - e2.getX() > FLING_MIN_DISTANCE
            && Math.abs(velocityX) > FLING_MIN_VELOCITY) {
        // Fling left
        Toast.makeText(this, "Fling left", Toast.LENGTH_SHORT).show();
    } else if (e2.getX() - e1.getX() > FLING_MIN_DISTANCE
            && Math.abs(velocityX) > FLING_MIN_VELOCITY) {
        // Fling right
        Toast.makeText(this, "Fling right", Toast.LENGTH_SHORT).show();
    }

    return false;
}
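By analogy (this extension is not in the original article), vertical flings can be detected the same way, comparing the Y-axis displacement and velocityY:

Java code
if (e1.getY() - e2.getY() > FLING_MIN_DISTANCE
        && Math.abs(velocityY) > FLING_MIN_VELOCITY) {
    // Fling up
    Toast.makeText(this, "Fling up", Toast.LENGTH_SHORT).show();
} else if (e2.getY() - e1.getY() > FLING_MIN_DISTANCE
        && Math.abs(velocityY) > FLING_MIN_VELOCITY) {
    // Fling down
    Toast.makeText(this, "Fling down", Toast.LENGTH_SHORT).show();
}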

The problem is that if we run the program at this point, we do not get the desired result at all. If we trace the execution, we find that the onFling() event is never captured. This is the problem that plagued me at the beginning. Why?
I found the answer in the discussion group's gesture detection thread: we need to add the following line in onCreate(), after tv.setOnTouchListener(this):
tv.setLongClickable(true);
Only then can the view handle gestures different from a tap, such as a hold (ACTION_MOVE, or multiple ACTION_DOWNs). The same can also be achieved with the android:longClickable attribute in the layout file.
The problem encountered this time is similar to the earlier setOnKeyListener problem with MapView. In truth, my understanding of the SDK was simply not thorough enough, and it is worth remembering once and for all. Still, Google does need to improve its documentation: at the very least, OnGestureListener could state what conditions must be met for gestures to be recognized correctly.
