The iPhone/iPad's keyboard-less design frees more space for the display, and the large screen provides a better experience for viewing pictures, text, and video. The touch screen is the primary way iOS devices accept user input, covering taps, double taps, drags, and multi-touch gestures. All of these operations generate touch events.
In Cocoa, the class that represents a touch is UITouch. When the user touches the screen, corresponding events are generated behind the scenes; all related UITouch objects are packaged into the event and dispatched by the program to specific objects for processing. The UITouch object carries the details of the touch.
The UITouch class contains five attributes (a short sketch that logs them appears after this list):
Window: the window in which the touch occurred. Because the window may change, the current window is not necessarily the original one.
View: the view in which the touch occurred. Because the view may change, the current view is not necessarily the original one.
TapCount: a tap is analogous to a mouse click. tapCount indicates how many times the screen was tapped within a short interval, so you can detect single taps, double taps, and more based on it.
Timestamp: records the time, in seconds, at which the touch occurred or was last changed.
Phase: a touch has a life cycle on the screen: it begins, may move, ends, or is cancelled midway. phase tells you where the current touch is in that cycle. It is of type UITouchPhase, an enumeration with the following values:
· UITouchPhaseBegan (the touch began)
· UITouchPhaseMoved (the touch point moved)
· UITouchPhaseStationary (the touch point is not moving)
· UITouchPhaseEnded (the touch ended)
· UITouchPhaseCancelled (the touch was cancelled)
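To make the attributes concrete, here is a minimal sketch, assuming a UIViewController (or UIView) subclass, that logs each of the five properties as a touch moves; all names used are standard UIKit API:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    NSLog(@"window: %@", touch.window);               // window in which the touch occurred
    NSLog(@"view: %@", touch.view);                   // view in which the touch occurred
    NSLog(@"tapCount: %lu", (unsigned long)touch.tapCount);
    NSLog(@"timestamp: %f s", touch.timestamp);       // NSTimeInterval, in seconds
    switch (touch.phase) {
        case UITouchPhaseBegan:      NSLog(@"phase: began");      break;
        case UITouchPhaseMoved:      NSLog(@"phase: moved");      break;
        case UITouchPhaseStationary: NSLog(@"phase: stationary"); break;
        case UITouchPhaseEnded:      NSLog(@"phase: ended");      break;
        case UITouchPhaseCancelled:  NSLog(@"phase: cancelled");  break;
        default: break;
    }
}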
The UITouch class contains the following member functions:
- (CGPoint)locationInView:(UIView *)view: returns a CGPoint indicating the position of the touch in the given view, expressed in that view's coordinate system. If the view parameter is nil, the returned point is in the coordinates of the entire window.
- (CGPoint)previousLocationInView:(UIView *)view: returns the previous position of the touch as a CGPoint, also expressed in the given view's coordinate system. If the view parameter is nil, the returned point is in the coordinates of the entire window.
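As a small illustration of the difference between the two methods, the following sketch (assuming it lives in a view controller) computes how far a touch has moved since the previous event:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint current  = [touch locationInView:self.view];          // position now
    CGPoint previous = [touch previousLocationInView:self.view];  // position at the last event
    NSLog(@"moved by (%f, %f)", current.x - previous.x, current.y - previous.y);
}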
When fingers touch the screen, whether it is a single-point or multi-point touch, an event begins and lasts until all fingers leave the screen. During this period, all UITouch objects are collected in a UIEvent object, which the program distributes to handlers. The event records the state changes of all touch objects over this cycle.
Whenever the screen is touched, the system packages the touch information into a UIEvent object and sends it to the program. The UIApplication object that manages the program distributes the event. In general, the event is sent to the main window and then to the first responder object (FirstResponder) for processing.
The concepts related to responders are described as follows:
Responder object
A responder object can respond to and handle events. iOS provides the UIResponder class, which defines all the methods of a responder object. UIApplication, UIView, and other classes inherit from UIResponder. UIWindow and the controls in UIKit inherit from UIView, and therefore indirectly from UIResponder. Instances of all these classes can act as responders.
First responder
The object currently receiving events is called the first responder, meaning it is the object currently interacting with the user. It is the start of the responder chain.
Responder chain
The responder chain is a linked series of responder objects. An event is first handed to the first responder; if the first responder does not handle it, the event is passed up the chain to the next responder. Generally, the first responder is a view object or a subclass instance; when it is touched, the event is delivered to it first. If it does not handle the event, the event is passed to its view controller (if any), then to its parent view (superview, if any), and so on up to the top-level view. From there the event goes to the window (the UIWindow object) and then to the application (the UIApplication object). If nothing in the chain responds, the event is discarded. In general, the event stops propagating as soon as some object in the chain handles it; however, a view's response method can decide, based on its own conditions, whether to keep passing the event along.
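As a sketch of how an event can be passed along the chain, a view subclass might handle some touches itself and forward the rest by calling the superclass implementation, whose default behavior is to pass the message to the next responder (the tapCount condition here is purely illustrative):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 2) {
        // Handle the event here; it stops propagating.
        NSLog(@"double tap handled by this view");
    } else {
        // Not handled here: UIResponder's default implementation
        // passes the message on to the next responder in the chain.
        [super touchesBegan:touches withEvent:event];
    }
}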
Managing event distribution
A view's userInteractionEnabled property determines whether it responds to touch events. The default is YES; setting it to NO prevents the view from receiving and distributing touch events. Events are also not delivered when the view is hidden (setHidden:YES) or fully transparent (an alpha value of 0). This property affects only the view itself. If you want the entire program to stop responding to events for a while, you can call UIApplication's beginIgnoringInteractionEvents method to stop event receiving and distribution entirely, and its endIgnoringInteractionEvents method to restore them.
If you want a view to receive multi-touch, set its multipleTouchEnabled property to YES. The default is NO; that is, a view does not receive multi-touch by default.
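A minimal sketch of these switches, assuming someView is an existing UIView:

someView.userInteractionEnabled = NO;  // this view stops receiving and distributing touches
someView.multipleTouchEnabled = YES;   // this view now accepts multi-touch

// Suspend and later restore event delivery for the entire program:
[[UIApplication sharedApplication] beginIgnoringInteractionEvents];
// ... work during which all touch events should be ignored ...
[[UIApplication sharedApplication] endIgnoringInteractionEvents];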
In the previous part, iOS Programming: Touch Event Processing (1), we covered how touch events, event objects, and touches relate. The first object to receive a touch is the view, and the view's UIView class inherits from UIResponder. To actually process an event, however, you also need to override the event handling methods defined in the UIResponder class. The program calls the corresponding method based on the touch state. These methods include the following:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
When a finger touches the screen, touchesBegan:withEvent: is called;
when a finger moves on the screen, touchesMoved:withEvent: is called;
when a finger leaves the screen, touchesEnded:withEvent: is called;
when the touch is cancelled (for example, interrupted by an incoming call), touchesCancelled:withEvent: is called. These methods correspond exactly to four of the enumerated values of the phase attribute of the UITouch class.
You do not need to implement all four of these methods; override only the ones you need. The four methods share two parameters: touches, an NSSet, and event, a UIEvent. touches contains all the UITouch objects generated by the touch, and event represents the specific event. Because UIEvent contains all the touch objects of the entire touch sequence, you can call its allTouches method to get all touches in the event, or call touchesForView: or touchesForWindow: to extract the touches in a specific view or window. Within these methods, you can obtain a touch object and apply your logic based on its position, state, and time attributes.
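For instance, a handler might inspect the event parameter like this (a sketch using the UIEvent methods just mentioned):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSSet *all = [event allTouches];                   // every touch in the event
    NSSet *inView = [event touchesForView:self.view];  // only the touches in this view
    NSLog(@"%lu touches in total, %lu in this view",
          (unsigned long)all.count, (unsigned long)inView.count);
}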
For example:
The code is as follows:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 2) {
        self.view.backgroundColor = [UIColor redColor];
    }
}
The preceding example sets the background color of the current view based on tapCount after the finger leaves the screen. Whether one finger or several are used, each tap increments the tapCount of every touch object by 1. Because this example does not need the position or time of any specific touch, it simply calls [touches anyObject] to get an arbitrary touch object and checks its tapCount.
The tapCount check can be placed in touchesBegan or touchesEnded, but the latter is usually more accurate, because touchesEnded guarantees that all fingers have left the screen; this way a tap is not confused with a drag.
Tap handling can easily be ambiguous. For example, after one tap, you do not yet know whether the user intends a single tap or the first half of a double tap; after two taps, you do not know whether the user wants a double tap or will keep tapping. The usual way to resolve this is a delayed call.
For example:
The code is as follows:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 1) {
        [self performSelector:@selector(setBackground:)
                   withObject:[UIColor blueColor]
                   afterDelay:2];
    }
}
The code above shows that after the first tap, the view's background is not changed immediately. Instead, the change is scheduled with performSelector:withObject:afterDelay: and takes effect two seconds later.
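The article does not show the setBackground: method that the selector refers to; a plausible helper, assumed here, would simply forward the color to the view:

// Assumed helper; the original article does not show its definition.
- (void)setBackground:(UIColor *)color {
    self.view.backgroundColor = color;
}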
The code is as follows:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 2) {
        // The object argument must match the one scheduled earlier (blueColor),
        // otherwise the pending call is not cancelled.
        [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                 selector:@selector(setBackground:)
                                                   object:[UIColor blueColor]];
        self.view.backgroundColor = [UIColor redColor];
    }
}
A double tap is the combination of two single taps, so the method that sets the background color has already been scheduled by the first tap. When a double tap is detected, the pending call must be cancelled: call NSObject's cancelPreviousPerformRequestsWithTarget:selector:object: to cancel the scheduled call on the specified object (note that the object argument must match the one originally scheduled), and then set the background color to red in the double-tap branch.
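Since a class can have only one touchesEnded:withEvent: override, the two snippets above would in practice be merged into a single method, roughly as follows:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 1) {
        // Possible single tap: schedule the color change and wait.
        [self performSelector:@selector(setBackground:)
                   withObject:[UIColor blueColor]
                   afterDelay:2];
    } else if (touch.tapCount == 2) {
        // Double tap: cancel the pending single-tap action and act now.
        [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                 selector:@selector(setBackground:)
                                                   object:[UIColor blueColor]];
        self.view.backgroundColor = [UIColor redColor];
    }
}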
The following example implements a draggable view using the coordinates of the touch object, obtained by calling its locationInView: method.
The code is as follows:
CGPoint originalLocation;

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    originalLocation = [touch locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentLocation = [touch locationInView:self.view];
    CGRect frame = self.view.frame;
    frame.origin.x += currentLocation.x - originalLocation.x;
    frame.origin.y += currentLocation.y - originalLocation.y;
    self.view.frame = frame;
}
In touchesBegan, [touch locationInView:self.view] obtains the finger's position in the current view and records it in a CGPoint variable. Then, in the touchesMoved method called as the finger moves, the touch object's current position is obtained, the offset is computed from the difference with the original position, and the view's frame is updated accordingly.
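One design note: because the dragged view is also the view supplying the coordinate system, a common variant reads the touch position in the superview's coordinate system instead, which does not move during the drag. A sketch, assuming self.view has a superview:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint current  = [touch locationInView:self.view.superview];
    CGPoint previous = [touch previousLocationInView:self.view.superview];
    CGPoint center = self.view.center;   // move the view by the touch delta
    center.x += current.x - previous.x;
    center.y += current.y - previous.y;
    self.view.center = center;
}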