The iPhone and iPad have no physical keyboard; that design frees the screen for more display space, and the large screen gives users a better experience when viewing pictures, text, video, and so on. The touch screen is the primary way an iOS device accepts user input, including taps, double-taps, touch-and-hold, and multi-touch, all of which produce touch events.
In Cocoa Touch, the class that represents a touch is UITouch. When the user touches the screen, a corresponding event is generated; all related UITouch objects are wrapped in that event and delivered by the system to a particular object for handling. A UITouch object carries the details of the touch.
The UITouch class contains five properties:
window: The window in which the touch occurred. Because the window may change during a touch, the window the touch is currently in is not necessarily the one it started in.
view: The view in which the touch occurred. Because the view may change, the current view is not necessarily the original one.
tapCount: A tap is similar to a mouse click; tapCount indicates how many times the screen was tapped within a short interval, so you can distinguish a single tap, a double tap, or more from its value.
timestamp: Records the time, in seconds, at which the touch occurred or last changed.
phase: A touch has a life cycle on the screen: touch began, touch point moved, touch ended, and touch cancelled. Through phase you can see where the current touch is in that cycle. phase is of the enumerated type UITouchPhase, whose values are listed below (a small sketch follows the list):
· UITouchPhaseBegan (touch began)
· UITouchPhaseMoved (touch point moved)
· UITouchPhaseStationary (touch point not moving)
· UITouchPhaseEnded (touch ended)
· UITouchPhaseCancelled (touch cancelled)
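For example, a helper can branch on a touch's phase. The following is only a minimal sketch (the logTouchPhase: helper name is made up for this illustration):
The code is as follows
- (void)logTouchPhase:(UITouch *)touch
{
    switch (touch.phase) {
        case UITouchPhaseBegan:
            NSLog(@"touch began"); break;
        case UITouchPhaseMoved:
            NSLog(@"touch point moved"); break;
        case UITouchPhaseStationary:
            NSLog(@"touch point not moving"); break;
        case UITouchPhaseEnded:
            NSLog(@"touch ended"); break;
        case UITouchPhaseCancelled:
            NSLog(@"touch cancelled"); break;
    }
}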
The UITouch class contains the following member functions:
- (CGPoint)locationInView:(UIView *)view: Returns a CGPoint value indicating the touch's position in view, expressed in that view's coordinate system. If the view argument is nil, the position returned is relative to the whole window.
- (CGPoint)previousLocationInView:(UIView *)view: Returns the touch's previous position in view, also as a CGPoint in that view's coordinate system. If the view argument is nil, the position returned is relative to the whole window.
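For instance, the difference between the current and previous locations gives how far a finger has moved since the last event; a minimal sketch:
The code is as follows
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self.view];
    CGPoint previous = [touch previousLocationInView:self.view];
    // Offset of the finger since the previous touch event.
    NSLog(@"moved by (%f, %f)", current.x - previous.x, current.y - previous.y);
}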
When a finger touches the screen, whether it is a single touch or multi-touch, an event begins and lasts until all of the user's fingers leave the screen. All UITouch objects are contained in a UIEvent object, which the system distributes to handlers; the event records how the state of every touch object changes over that cycle.
Whenever the screen is touched, the system packages the touch information into a UIEvent object and sends it to the application; the UIApplication object then distributes the event. Typically an event is sent to the main window and from there to the first responder for processing.
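One way to observe this distribution is to subclass UIApplication and override its sendEvent: method; this is only an illustrative sketch (the MyApplication class name is hypothetical):
The code is as follows
@interface MyApplication : UIApplication
@end

@implementation MyApplication
- (void)sendEvent:(UIEvent *)event
{
    // Every event passes through here before reaching the responders.
    NSLog(@"dispatching event: %@", event);
    [super sendEvent:event]; // continue normal distribution
}
@end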
The following responder concepts are worth spelling out:
Responder object
A responder object can respond to events and handle them. In iOS, the UIResponder class defines all the methods of a responder. Classes such as UIApplication and UIView inherit from UIResponder, and UIWindow and the controls in UIKit inherit from UIView directly or indirectly, so instances of all these classes can act as responders.
First responder
The responder object currently receiving a touch is called the first responder; it is the object the user is interacting with, and it is the start of the responder chain.
Responder chain
The responder chain is a series of responder objects. An event is first offered to the first responder; if the first responder does not handle it, the event is passed up along the chain to the next responder. In general the first responder is a view object (or a subclass instance): when it is touched, the event goes to it first, and if it is not handled there, the event passes to its view controller (if one exists), then to its superview (if present), and so on up to the top-level view. From the top-level view it goes to the window (the UIWindow object) and then to the application (the UIApplication object). If nothing in the whole chain responds, the event is discarded. Normally the event stops being passed along the chain as soon as some object handles it; sometimes, however, a view's response method can decide, based on some condition, whether to keep passing the event on.
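To see the chain for a particular view you can follow the nextResponder property upward; a minimal sketch (the logResponderChainFromView: helper name is invented for this illustration):
The code is as follows
- (void)logResponderChainFromView:(UIView *)view
{
    UIResponder *responder = view;
    while (responder != nil) {
        // Typically prints the view, its view controller (if any),
        // the superviews, the window, and finally the application.
        NSLog(@"%@", [responder class]);
        responder = [responder nextResponder];
    }
}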
Manage Event Distribution
Whether a view responds to touch events can be controlled through its userInteractionEnabled property. The default is YES; setting it to NO blocks the view from receiving and distributing touch events. Events are also not delivered when the view is hidden (setHidden:YES) or fully transparent (alpha is 0). This property only affects a single view; if you want the entire application to stop responding to events, call UIApplication's beginIgnoringInteractionEvents method to completely stop event reception and distribution, and its endIgnoringInteractionEvents method to let the application receive and distribute events again.
If you want a view to receive multi-touch, set its multipleTouchEnabled property to YES; the default is NO, that is, a view does not receive multi-touch by default.
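A short sketch of these switches in use (otherView is a hypothetical second view):
The code is as follows
// Let this view receive multi-touch; it is off by default.
self.view.multipleTouchEnabled = YES;
// Block another view from receiving and distributing touch events.
otherView.userInteractionEnabled = NO;
// Temporarily stop the whole application from receiving events,
// then restore reception and distribution afterwards.
[[UIApplication sharedApplication] beginIgnoringInteractionEvents];
// ... work that should not be interrupted by touches ...
[[UIApplication sharedApplication] endIgnoringInteractionEvents];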
After covering touches, events, and responders in the previous part, "iOS programming – Touch Event handling (1)", the next step is learning how to handle the user's touch events. The object touched first is a view, and the view's class UIView inherits from UIResponder; but to handle the event you must also override the event-handling methods defined in UIResponder. Depending on the touch's state, the system calls the appropriate handler, from the following set:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
When a finger touches the screen, the touchesBegan:withEvent: method is invoked;
when a finger moves on the screen, the touchesMoved:withEvent: method is invoked;
when a finger leaves the screen, the touchesEnded:withEvent: method is invoked;
and the touchesCancelled:withEvent: method is invoked when the touch is cancelled (for example, when an incoming call interrupts a touch). These methods correspond exactly to the UITouchPhaseBegan, UITouchPhaseMoved, UITouchPhaseEnded, and UITouchPhaseCancelled values of the phase property in the UITouch class.
You do not need to implement all four methods during development; override only the ones you need. The four methods share two parameters: touches, of type NSSet, and event, of type UIEvent. touches holds the UITouch objects produced by the touch, and event represents the specific event. Because the UIEvent contains all the touch objects for the whole touch sequence, you can call its allTouches method to get every touch in the event, or call touchesForView: or touchesForWindow: to take out the touch objects belonging to a particular view or window. Inside these handlers you can fetch a touch object and then apply your logic according to its position, state, and time attributes.
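A handler can look at every touch the event carries, not just the ones delivered for its own view; a minimal sketch:
The code is as follows
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // All touch objects currently associated with the event.
    for (UITouch *touch in [event allTouches]) {
        CGPoint point = [touch locationInView:self.view];
        NSLog(@"touch at (%f, %f), tapCount=%lu", point.x, point.y, (unsigned long)touch.tapCount);
    }
    // Touches can also be taken out for one specific view.
    NSSet *viewTouches = [event touchesForView:self.view];
    NSLog(@"%lu touch(es) in this view", (unsigned long)[viewTouches count]);
}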
For example:
The code is as follows
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 2) // comparison, not assignment
    {
        self.view.backgroundColor = [UIColor redColor];
    }
}
The example above sets the background color of the current view according to tapCount once the finger leaves the screen. Whether there is one finger or several, each touch object's tapCount increases by 1 per tap. Because this example does not need a specific touch's position or time, it can simply call the touches set's anyObject method to get any touch object and test its tapCount value.
The tapCount check could go in touchesBegan as well as touchesEnded, but the latter is generally more accurate: touchesEnded guarantees that all fingers have left the screen, so a tap will not be confused with actions such as press-and-drag.
Taps are prone to ambiguity: when the user taps once, you cannot yet tell whether it is a single tap or the first half of a double tap, and after two taps you still cannot tell whether the user means a double tap or will keep tapping. A common way to resolve this is a delayed method call.
For example:
The code is as follows
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 1)
    {
        // Schedule the change instead of applying it immediately.
        [self performSelector:@selector(setBackground:)
                   withObject:[UIColor blueColor]
                   afterDelay:2];
    }
}
The code above shows that after the first tap the view's background property is not changed directly; instead the performSelector:withObject:afterDelay: method schedules the change to take effect 2 seconds later.
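The setBackground: selector refers to a helper method the example assumes but never shows; a minimal sketch of it might be:
The code is as follows
// Hypothetical helper invoked by performSelector:withObject:afterDelay:.
- (void)setBackground:(UIColor *)color
{
    self.view.backgroundColor = color;
}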
The code is as follows
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if (touch.tapCount == 2)
    {
        // The object must match the one scheduled above for the cancel to take effect.
        [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                 selector:@selector(setBackground:)
                                                   object:[UIColor blueColor]];
        self.view.backgroundColor = [UIColor redColor];
    }
}
A double tap is two taps in combination, so by the time the double tap is detected, the method scheduled by the first tap is already pending and must be cancelled first. You can cancel a pending call on a given object with NSObject's cancelPreviousPerformRequestsWithTarget:selector:object: method (the object argument must match the one used when scheduling), and then run the double-tap logic that sets the background color to red.
The following example creates a view that can be dragged; the key is the touch object's position, which is obtained by calling the touch object's locationInView: method.
For example:
The code is as follows
CGPoint originalLocation; // where the finger first touched, in the view's coordinate system
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    originalLocation = [touch locationInView:self.view];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint currentLocation = [touch locationInView:self.view];
    // Move the view's frame by the offset from the original touch position.
    CGRect frame = self.view.frame;
    frame.origin.x += currentLocation.x - originalLocation.x;
    frame.origin.y += currentLocation.y - originalLocation.y;
    self.view.frame = frame;
}
Here touchesBegan first records, in a CGPoint variable, where the finger touched the current view via [touch locationInView:self.view]. Then the finger-move handler touchesMoved gets the touch object's current position, computes the offset as the difference from the original position, and moves the current view's frame by that amount.