From: http://blog.csdn.net/mengtnt/article/details/6716289
UIViewController was covered earlier, but UIView is an equally important part of the view layer in MVC. Precisely because UIView is the foundation of every interface on the iPhone, Apple wrote a dedicated document, the "View Programming Guide for iOS"; reading it gives a good understanding of what UIView does.
Let's look at the official API's description: "The UIView class defines a rectangular area on the screen and the interfaces for managing the content in that area. At runtime, a view object handles the rendering of any content in its area and also handles any interactions with that content." UIView therefore has three basic functions: drawing and animation, managing content and layout, and handling events. It is precisely because UIView has these functions that it can play the role of the view layer in MVC.
UIView looks complicated, but the various interfaces in the official API can be analyzed one by one, in the spirit of the butcher Ding carving up the ox: complicated things are made of simple things. Returning to the three basic functions of UIView mentioned above, we can easily separate how they are composed. First, consider drawing and animation, the most basic function of a view. In fact, everything UIView's drawing and animation interfaces do can be implemented with CALayer and CAAnimation. Does that mean Apple simply encapsulated Core Animation inside UIView? The documentation does not say, so we cannot assert it; but every UIView does contain a CALayer, and all kinds of animations can be added to that layer. Likewise, UIView's approach to managing layout is very similar to CALayer's. Finally, the event handling function comes from UIView inheriting from UIResponder. After this analysis, the nature of UIView breaks down easily: a UIView is like a white wall that is only responsible for displaying whatever is hung on it.
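As a quick sanity check, here is a minimal sketch (the variable names are ours, not from any documentation) showing that a freshly created UIView already carries a backing CALayer and is itself a UIResponder:

UIView *wall = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
NSLog(@"backing layer: %@", wall.layer);                                // drawing and animation
NSLog(@"is a responder: %d", [wall isKindOfClass:[UIResponder class]]); // event handling
[wall addSubview:[[UIView alloc] initWithFrame:CGRectZero]];            // content management
NSLog(@"subview count: %d", (int)[wall.subviews count]);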
1. CALayer in UIView
Several of UIView's geometric properties, such as frame, bounds, and center, have direct counterparts in CALayer. So once you understand CALayer's characteristics, how the UIView layer displays things becomes clear at a glance.
A CALayer is a layer, and a layer naturally supports rendering images and playing animations. Every time you create a UIView, the system automatically creates a CALayer for it. You cannot swap this layer object out, but you can modify its properties. So by modifying the CALayer you can not only change the UIView's appearance but also attach all kinds of animations to it. CALayer is a class in the Core Animation framework, and the "Core Animation Programming Guide" covers its many features; once you have mastered them, you naturally understand how a UIView is displayed and rendered.
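For instance, here is a minimal sketch (the method and animation key names are our own illustrative choices) of reshaping a view through its layer and attaching a Core Animation animation to it:

#import <QuartzCore/QuartzCore.h>

- (void)decorateView:(UIView *)view {
    // The layer mirrors the view's geometry: view.frame and view.layer.frame
    // describe the same rectangle, and view.center matches view.layer.position.
    NSLog(@"view frame: %@ / layer frame: %@",
          NSStringFromCGRect(view.frame), NSStringFromCGRect(view.layer.frame));

    // Changing layer properties changes the view's appearance.
    view.layer.cornerRadius = 8.0f;
    view.layer.borderWidth = 1.0f;

    // Animations are added to the layer, not to the view itself.
    CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
    fade.fromValue = [NSNumber numberWithFloat:1.0f];
    fade.toValue = [NSNumber numberWithFloat:0.3f];
    fade.duration = 0.5;
    [view.layer addAnimation:fade forKey:@"fade"];
}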
Let's look at how the Core Animation documentation explains layers: "While there are obvious similarities between Core Animation layers and Cocoa views, the biggest conceptual divergence is that layers do not render directly to the screen. Where NSView and UIView are clearly view objects in the model-view-controller design pattern, Core Animation layers are actually model objects. They encapsulate geometry, timing and visual properties, and they provide the content that is displayed, but the actual display is not the layer's responsibility. Each visible layer tree is backed by two corresponding trees: a presentation tree and a render tree." So it is clear that a layer encapsulates model data. Whenever you change one of the layer's model properties, the presentation tree animates the transition from the old value, and the render tree then takes care of drawing the result.
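To see the model tree and presentation tree side by side, here is a small sketch (a hypothetical helper of ours, not from the guide) that reads the in-flight, on-screen value from the presentation layer while an animation is running:

- (void)inspectLayerTrees:(UIView *)view {
    CALayer *modelLayer = view.layer;                        // model tree: holds the target values
    CALayer *presentation = [modelLayer presentationLayer];  // presentation tree: values currently shown
    NSLog(@"model position: %@", NSStringFromCGPoint(modelLayer.position));
    if (presentation != nil) {
        NSLog(@"on-screen position: %@", NSStringFromCGPoint(presentation.position));
    }
}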
Since a layer encapsulates the geometric properties of the object model, how do we get at those properties? One way is through the properties defined on CALayer itself, such as bounds, anchorPoint, and frame. In addition, Core Animation extends key-value coding, so developers can conveniently get and set the layer's geometric attributes through key paths. The transform key paths, for example, let you modify a layer's geometry like this:
[myLayer setValue:[NSNumber numberWithInt:0] forKeyPath:@"transform.rotation.x"];
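The same key-value coding interface works in the other direction as well; a quick sketch (reusing the myLayer from the line above):

NSNumber *rotation = [myLayer valueForKeyPath:@"transform.rotation.x"];
NSValue *bounds = [myLayer valueForKey:@"bounds"]; // CGRect comes back boxed in an NSValue
NSLog(@"x rotation: %@, bounds: %@", rotation, bounds);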
Although CALayer is very similar to UIView, and analyzing CALayer's characteristics tells you a lot about UIView, Apple did not use CALayer to replace UIView; otherwise it would never have designed the UIView class at all. As the official documentation explains, the CALayer layer tree is the equivalent of the Cocoa view hierarchy and has much in common with it, but Core Animation provides no way to display layers in a window. Layers must be hosted in a UIView, and the UIView supplies their ability to respond to events. That responder behavior, UIResponder, is the other major feature of UIView.
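The hosting relationship runs through UIView's layerClass method; here is a sketch (MyTiledView is a hypothetical name of ours) of a view choosing which CALayer subclass backs it:

#import <QuartzCore/QuartzCore.h>

@interface MyTiledView : UIView
@end

@implementation MyTiledView
// The view creates and hosts an instance of this class as its backing layer.
+ (Class)layerClass {
    return [CATiledLayer class];
}
@end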
2. UIResponder, inherited by UIView
UIResponder is the cornerstone of all event handling, and Apple provides an important reference document for it, the "Event Handling Guide for iOS".
Events are delivered to an application to inform it of the user's actions. iOS defines three types of events: multi-touch events, motion events, and remote-control events, declared as follows:
typedef enum {
    UIEventTypeTouches,
    UIEventTypeMotion,
    UIEventTypeRemoteControl,
} UIEventType;
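For example, a motion event can be caught in any UIResponder subclass; a minimal sketch (assuming the responder has become first responder) of handling a shake:

- (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (event.type == UIEventTypeMotion && motion == UIEventSubtypeMotionShake) {
        NSLog(@"shake gesture finished");
    }
}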
Let's look at how an event travels through the UIResponder chain, as shown in Figure 3.
The view that was hit gets the first chance to handle the event; if it has no handler, the event is passed up the chain, step by step, until some responder handles it or the message is discarded. As for how Apple makes event messages flow this way, the analysis below will fill in some of the details and probe the underlying mechanics.
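To watch that flow yourself, here is a small sketch (a helper of our own) that walks nextResponder and prints each link in the chain:

- (void)logResponderChainFromView:(UIView *)view {
    UIResponder *responder = view;
    while (responder != nil) {
        NSLog(@"%@", NSStringFromClass([responder class]));
        responder = [responder nextResponder];
    }
}

Starting from a view inside a typical window, this prints the view, its superviews, the owning view controller, the UIWindow, and finally UIApplication.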
Here we focus on the first of the three event types, the multi-touch event: the UITouch objects that a UIEvent encapsulates.
Among UIView's touch-handling methods, one is frequently confusing: hitTest:withEvent:. The official explanation reads: "This method traverses the view hierarchy by sending the pointInside:withEvent: message to each subview to determine which subview should receive a touch event. If pointInside:withEvent: returns YES, then the subview's hierarchy is traversed; otherwise, its branch of the view hierarchy is ignored. You rarely need to call this method yourself, but you might override it to hide touch events from subviews." From this we can see that hitTest:withEvent: first calls pointInside:withEvent: to decide whether to traverse the subviews. So if we don't want a view to respond to an event, we only need to override pointInside:withEvent: to return NO, as in the sketch below.
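For example, this minimal override makes a view, and everything below it, invisible to touches:

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Returning NO prunes this whole branch of the view hierarchy during hit-testing.
    return NO;
}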
Even so, the purpose of hitTest:withEvent: itself is not yet clear, so let's look for the answer in the "Event Handling Guide for iOS": "Your custom responder can use hit-testing to find the subview or sublayer of itself that is 'under' a touch, and then handle the event appropriately." So the main purpose of hitTest:withEvent: is to find which view was touched. The call sequence, however, is opaque, so let's build a project and debug it. Create a MyView class that overrides hitTest:withEvent: and pointInside:withEvent::
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    // Calling super triggers pointInside: and subview traversal; its result is discarded here.
    [super hitTest:point withEvent:event];
    return self;
}

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    NSLog(@"view pointInside");
    return YES;
}
Then add a subview of MyView, MySecondView, which overrides the same two methods:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    [super hitTest:point withEvent:event];
    return self;
}

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    NSLog(@"second view pointInside");
    return YES;
}
Note that the call to [super hitTest:point withEvent:event] must be kept; otherwise the superclass implementation never runs, pointInside:withEvent: is never consulted, and the subviews are never traversed. If you remove that line, touch events never reach the subview, unless you return the subview object directly from the method. With it in place, you will find that every time you tap a view, the hitTest:withEvent: of its parent view is entered first; after the call to super's hitTest:, it checks whether pointInside: returns YES, and if so passes the message down to the subview, which recursively repeats the same procedure on its own subviews. So the debugging confirms that hitTest:withEvent: is a recursive call.
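Based on that behavior, here is a rough reconstruction of the logic hitTest:withEvent: appears to implement; this is our own sketch inferred from the debugging, not Apple's actual source:

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    // Hidden, non-interactive, or nearly transparent views never receive touches.
    if (self.hidden || !self.userInteractionEnabled || self.alpha < 0.01) {
        return nil;
    }
    if (![self pointInside:point withEvent:event]) {
        return nil; // this branch of the view hierarchy is ignored
    }
    // Front-most subviews are checked first.
    for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
        CGPoint converted = [subview convertPoint:point fromView:self];
        UIView *hit = [subview hitTest:converted withEvent:event];
        if (hit != nil) {
            return hit;
        }
    }
    return self; // no subview claimed the touch, so this view is the hit view
}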
So far the debugging matches the official documentation, but one puzzle remains: hitTest:withEvent: in each view is always called three times. Neither the API docs nor a lot of searching turned up an answer, until Google led to the following explanation on Stack Overflow: "There are indeed 3 calls to hitTest. It is not clear why, but we can surmise by the timestamps on the event that the first two calls are to do with completing the previous gesture - those timestamps are always very close to whenever the previous touch happened, and will be some distance from the current time." Relatedly, the "Event Handling Guide for iOS" shows how to distinguish a single tap from a double tap. The code is as follows:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    if (aTouch.tapCount == 2) {
        [NSObject cancelPreviousPerformRequestsWithTarget:self];
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *theTouch = [touches anyObject];
    if (theTouch.tapCount == 1) {
        NSDictionary *touchLoc = [NSDictionary dictionaryWithObject:
            [NSValue valueWithCGPoint:[theTouch locationInView:self]] forKey:@"location"];
        [self performSelector:@selector(handleSingleTap:) withObject:touchLoc afterDelay:0.3];
    } else if (theTouch.tapCount == 2) {
        // Double-tap: increase image size by 10%
        CGRect myFrame = self.frame;
        myFrame.size.width += self.frame.size.width * 0.1;
        myFrame.size.height += self.frame.size.height * 0.1;
        myFrame.origin.x -= (self.frame.origin.x * 0.1) / 2.0;
        myFrame.origin.y -= (self.frame.origin.y * 0.1) / 2.0;
        [UIView beginAnimations:nil context:NULL];
        [self setFrame:myFrame];
        [UIView commitAnimations];
    }
}

- (void)handleSingleTap:(NSDictionary *)touches {
    // Single-tap: decrease image size by 10%
    CGRect myFrame = self.frame;
    myFrame.size.width -= self.frame.size.width * 0.1;
    myFrame.size.height -= self.frame.size.height * 0.1;
    myFrame.origin.x += (self.frame.origin.x * 0.1) / 2.0;
    myFrame.origin.y += (self.frame.origin.y * 0.1) / 2.0;
    [UIView beginAnimations:nil context:NULL];
    [self setFrame:myFrame];
    [UIView commitAnimations];
}
The idea for distinguishing the two gestures, then, is: if tapCount turns out to be 2 in touchesEnded:, cancel the pending single-tap action. But have you ever wondered how Apple computes tapCount? For example, if I press on the screen and release only after a minute, is the touch delivered to touchesEnded: the same as for a quick tap? The answer is no. Write a test program and try it; you don't even need a minute, a few seconds is enough: in touchesEnded: the tapCount is 0, whereas for a quick tap-and-release it is 1. The same goes for double taps: if the interval between the two taps is too long, say four or five seconds, touchesEnded: reports a tapCount of 1, while a normal double tap reports 2. Now recall that hitTest:withEvent: runs three times, with the first two calls carrying the timestamp of the previous touch gesture and the last one carrying the timestamp of the current gesture. There is no official explanation, so we can only speculate: perhaps this is used to distinguish exactly the cases above, that is, to set the tapCount of the UITouch based on the event timestamps; hopefully an expert can clarify. Seen this way, we can begin to guess why Apple makes UIEvent flow the way described above.
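You can reproduce the experiment with a couple of log lines; a sketch of the instrumentation (the log wording is ours):

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // A quick tap logs tapCount == 1; holding for a few seconds first logs tapCount == 0.
    NSLog(@"tapCount = %lu, event timestamp = %f",
          (unsigned long)touch.tapCount, event.timestamp);
}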
Finally, two book recommendations for iPhone development, both well suited to beginners: one is iPhone Development Secrets and the other is Core Animation Simplified. My download space has both books, in the original English, for anyone interested (http://download.csdn.net/source/3347531, http://download.csdn.net/user/mengtnt).