iOS Development Diary 54: Event Processing Mechanism and Image Rendering Process



Today I had a requirement involving the event-processing mechanism and the image-rendering process, and I ran into some difficulties. I'd like to share the results in the hope that we can all make progress together.

What does an iOS RunLoop do?

A RunLoop is a loop that receives and processes asynchronous message events. Within the loop it waits for events to occur and dispatches each one to a place where it can be handled.

Figure 1-1 describes the simple path a touch event takes from the operating-system layer to the main RunLoop of the application.

Figure 1-1

To put it simply, a RunLoop is one big event-driven loop, as shown in the following code:

int main(int argc, char *argv[]) {
    // the running state of the program
    while (AppIsRunning) {
        // sleep until a wake-up event arrives
        id whoWakesMe = SleepForWakingUp();
        // fetch the event that woke us
        id event = GetEvent(whoWakesMe);
        // handle the event
        HandleEvent(event);
    }
    return 0;
}

RunLoop mainly handles the following six types of events:

static void __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__();
static void __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__();
static void __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__();

Pseudo Code of RunLoop execution sequence

SetupThisRunLoopRunTimeoutTimer(); // the runloop timeout is implemented with a GCD timer

// notify observers: about to enter the runloop
__CFRunLoopDoObservers(kCFRunLoopEntry);
do {
    __CFRunLoopDoObservers(kCFRunLoopBeforeTimers);
    __CFRunLoopDoObservers(kCFRunLoopBeforeSources);

    // called twice per pass, so that non-delayed performSelector: calls and
    // non-delayed dispatch_after blocks execute within the current runloop,
    // along with observer callback blocks
    __CFRunLoopDoBlocks();

    // e.g. UIEvent handling performed by UIKit
    __CFRunLoopDoSource0();

    // GCD dispatch main queue
    CheckIfExistMessagesInMainDispatchQueue();

    // notify observers: about to sleep; the UI is re-painted here
    __CFRunLoopDoObservers(kCFRunLoopBeforeWaiting);

    // mach_msg_trap: block in the kernel waiting for a matching mach_msg
    var wakeUpPort = SleepAndWaitForWakingUpPorts();
    // Zzz...

    // received a mach_msg; woken up
    __CFRunLoopDoObservers(kCFRunLoopAfterWaiting);

    // handle the message that woke us
    if (wakeUpPort == timerPort) {
        __CFRunLoopDoTimers();
    } else if (wakeUpPort == mainDispatchQueuePort) {
        // when dispatch_async(dispatch_get_main_queue(), block) is called,
        // libDispatch sends a mach_msg to the main thread's runloop to wake
        // it up and execute the block here. This applies only to blocks
        // dispatched to the main thread; dispatch to other threads is still
        // handled by libDispatch itself.
        __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__();
    } else {
        // source1: e.g. the mach_msg that drives CADisplayLink
        __CFRunLoopDoSource1();
    }
    __CFRunLoopDoBlocks();
} while (!stop && !timeout);

// notify observers: about to exit the runloop
__CFRunLoopDoObservers(kCFRunLoopExit);

Based on the execution order of the RunLoop events above, consider why the following code can detect whether a tableView reload has completed.

dispatch_async(dispatch_get_main_queue(), ^{
    _isReloadDone = NO;
    [tableView reloadData];
    // reloadData marks the tableView as needing layout, which means it
    // will be laid out and re-painted at the end of this runloop pass
    dispatch_async(dispatch_get_main_queue(), ^{
        _isReloadDone = YES;
    });
});

Tip: two tasks are inserted into the GCD main queue. Each RunLoop pass has two opportunities to drain the main queue, one before and one after sleeping.

Why must UI operations in iOS happen on the main thread?

Because UIKit is NOT thread-safe.

After iOS 4, Apple made most drawing methods and classes such as UIColor and UIFont thread-safe and usable off the main thread. Even so, it is strongly recommended that UI operations be performed on the main thread.

Event Response

Apple registers a Source1 (based on a mach port) to receive system events; its callback function is __IOHIDEventSystemClientQueueCallback().

When a hardware event occurs (such as a touch, a lock-screen press, or a shake), IOKit.framework generates an IOHIDEvent, which SpringBoard receives.

SpringBoard receives only button (lock/mute), touch, acceleration, and proximity-sensor events, and forwards them via mach port to the App process that needs them. The Source1 registered by Apple then triggers its callback, which calls _UIApplicationHandleEventQueue() for in-app distribution.

_UIApplicationHandleEventQueue() wraps each IOHIDEvent in a UIEvent for processing or distribution, including gesture recognition, screen-rotation handling, and delivery to UIWindow. Events such as UIButton taps and touchesBegan/Moved/Ended/Cancelled are all handled in this callback.

CALayer

In iOS, all views derive from a base class called UIView. UIView handles touch events and supports drawing based on Core Graphics; it can apply affine transforms (such as rotation or scaling) and run simple animations such as sliding or fading.

The CALayer class is conceptually similar to UIView. It too is a rectangular block managed in a hierarchy tree, can contain content (such as images, text, or a background color), and manages the positions of its sublayers. It has methods and properties for animation and transforms. The biggest difference from UIView is that CALayer does not handle user interaction; CALayer knows nothing about the responder chain.

UIView and CALayer form parallel hierarchies. Each UIView has a layer property holding a CALayer instance, the so-called backing layer. The view is responsible for creating and managing this layer and for ensuring that when a subview is added to or removed from the view hierarchy, the associated layer performs the same operation in the layer tree. It is actually these layers that are displayed and animated on screen; UIView is just a wrapper around them, providing iOS-specific functionality such as touch handling plus a high-level interface to Core Animation's lower-level methods.

In the system, a UIView's layers are maintained in three parallel tree structures: the model tree, the presentation tree, and the render tree.

CADisplayLink and NSTimer

NSTimer is actually a CFRunLoopTimerRef (they are toll-free bridged). After an NSTimer is registered with a RunLoop, the RunLoop registers events for its repeating fire times.

To save resources, the RunLoop does not call the Timer back at perfectly precise times. A Timer has a tolerance property indicating the maximum allowed delay after a fire time. If a fire time is missed entirely, for example because a long task was executing, the callback for that fire time is skipped rather than delivered late.

The RunLoop's own timeout mechanism is implemented with a GCD dispatch_source_t timer. When NSObject's performSelector:afterDelay: is called, a Timer is actually created internally and added to the current thread's RunLoop, so if the current thread has no RunLoop the method has no effect. Similarly, performSelector:onThread: creates a Timer and adds it to the target thread's RunLoop, so it also fails if that thread has no RunLoop.

CADisplayLink is a timer whose rate matches the screen refresh (60 times per second), although its actual implementation is more complicated: unlike NSTimer, CADisplayLink internally operates through a Source. If a long task runs between two screen refreshes, a frame is skipped and the interface appears choppy.

iOS Rendering Process

Figure 2-1

In general, the CPU, GPU, and display in a computer system cooperate in the way shown above. The CPU computes the display content and submits it to the GPU; when the GPU finishes rendering, it places the result in the frame buffer. The video controller then reads the frame buffer line by line in step with the VSync signal and, after any digital-to-analog conversion, passes the data to the display.

Figure 2-2

After a VSync signal arrives, the system graphics service notifies the App through the CADisplayLink mechanism, and the App's main thread begins computing the display content on the CPU: view creation, layout calculation, image decoding, text drawing, and so on. The CPU then submits the computed content to the GPU for transformation, compositing, and rendering, and the GPU places the rendering result in the frame buffer, waiting for the next VSync signal to display it on screen. Because of the vertical-synchronization mechanism, if the CPU or GPU has not finished submitting content within one VSync interval, that frame is discarded and can only be displayed at a later opportunity; in the meantime the display keeps showing the previous content. This is what makes the interface stutter. As you can see, whether the CPU or the GPU is the bottleneck, the result is dropped frames, so CPU and GPU load must be evaluated and optimized separately during development.

The iOS display system is driven by the VSync signal, which is generated by a hardware clock and emitted 60 times per second (the exact value depends on the device hardware; on iPhone displays it is usually 59.97). After the iOS graphics service receives the VSync signal, it notifies the App through IPC. Once the App's RunLoop is started, it registers a CFRunLoopSource that receives the clock-signal notification through a mach_port, and the Source callback then drives the animation and display of the entire App.

Core Animation registers an Observer in the RunLoop that listens for BeforeWaiting and Exit events. When a touch event arrives, the RunLoop is awakened and the App's code does its work: creating and adjusting the view hierarchy, setting a UIView's frame, changing a CALayer's opacity, adding an animation to a view, and so on. All of these changes are marked by CALayer and submitted through CATransaction to an intermediate state. When that work is done and the RunLoop is about to sleep (or exit), the Observer is notified; the Observer registered by Core Animation then merges all the intermediate states in its callback and submits them to the GPU for display. If an animation is in progress, CADisplayLink's steady refresh keeps waking the RunLoop, repeatedly triggering the observer callback so that the animated property values are updated and drawn over time.

So as not to block the main thread, the core of Core Animation is built as an abstraction over OpenGL ES, and most of its rendering is submitted directly to the GPU. Most Core Graphics/Quartz 2D drawing operations, by contrast, are completed synchronously on the main thread by the CPU, for example drawing with a CGContext inside a custom UIView's drawRect:.

Rendering time

As mentioned above, Core Animation registers an Observer in the RunLoop that listens for the BeforeWaiting (about to sleep) and Exit (about to exit the loop) events. When a UI operation changes a frame, updates the UIView/CALayer hierarchy, or manually calls setNeedsLayout/setNeedsDisplay on a UIView/CALayer, that UIView/CALayer is marked as pending and submitted to a global container. When the event the Observer is listening for arrives, the callback traverses all pending UIViews/CALayers, performs the actual rendering and adjustment, and updates the UI.

The call stack inside this function looks roughly like this:

_ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv()
    QuartzCore:CA::Transaction::observer_callback:
        CA::Transaction::commit();
            CA::Context::commit_transaction();
                CA::Layer::layout_and_display_if_needed();
                    CA::Layer::layout_if_needed();
                        [CALayer layoutSublayers];
                        [UIView layoutSubviews];
                    CA::Layer::display_if_needed();
                        [CALayer display];
                        [UIView drawRect];

CPU and GPU Rendering

In OpenGL, there are two methods for GPU screen rendering:

1. On-Screen Rendering

On-screen rendering means the GPU performs its rendering operations in the screen buffer currently in use.

2. Off-Screen Rendering

For off-screen rendering, the GPU opens a new buffer outside the current screen buffer and renders there.

If we call any GPU rendering that does not target the current screen buffer "off-screen rendering", there is one more special kind of off-screen rendering: CPU rendering. If we override drawRect and draw with any Core Graphics API, CPU rendering is involved: the whole rendering process is completed synchronously by the CPU inside the App, and the resulting bitmap is finally handed to the GPU for display.

Compared with on-screen rendering, off-screen rendering is expensive, mainly in two respects:

1. Creating a new buffer

To perform off-screen rendering, a new buffer must first be created.

2. Context switching

Throughout off-screen rendering, the context must be switched multiple times: first from the current screen to off-screen, and then, once off-screen rendering is complete, back to the current screen in order to display the off-screen buffer's result. Context switches are costly.

Setting any of the following layer properties triggers off-screen rendering: shouldRasterize, mask, shadow, rounded corners (cornerRadius combined with masksToBounds), and group opacity.

Note that if shouldRasterize is set to YES, the rasterized content is cached when off-screen rendering is triggered. If the layer and its sublayers have not changed, the next frame can reuse the cache directly, which greatly improves rendering performance.

If any of the other properties is set, there is no cache, and off-screen rendering happens on every frame.

During development, choose the best implementation for the situation and prefer on-screen rendering. For simple cases of off-screen rendering, consider using Core Graphics to render on the CPU instead.

Core Animation

1. Implicit Animation

Implicit animation is performed automatically by the framework. Core Animation automatically starts a new transaction in each runloop cycle; even if you do not explicitly begin one with [CATransaction begin], any layer property changes made during that runloop pass are collected and then animated over 0.25 seconds.

In iOS 4, Apple added a block-based animation method to UIView: +animateWithDuration:animations:.

Written this way, a group of property animations is syntactically easier, but it is doing essentially the same thing.

The +begin and +commit methods of CATransaction are called automatically inside +animateWithDuration:animations:, so all property changes made in the block are included in the transaction.

Core Animation is normally used to animate any of CALayer's animatable properties. But how does UIView disable these implicit animations on its backing layer?

Each UIView acts as the delegate of its backing layer and implements -actionForLayer:forKey:. Outside an animation block, UIView returns nil for all layer actions, disabling implicit animation; inside an animation block, it returns a non-nil action.

@interface ViewController ()
@property (nonatomic, weak) IBOutlet UIView *layerView;
@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // test layer action when outside of animation block
    NSLog(@"Outside: %@", [self.layerView actionForLayer:self.layerView.layer forKey:@"backgroundColor"]);

    // begin animation block
    [UIView beginAnimations:nil context:nil];

    // test layer action when inside of animation block
    NSLog(@"Inside: %@", [self.layerView actionForLayer:self.layerView.layer forKey:@"backgroundColor"]);

    // end animation block
    [UIView commitAnimations];
}
@end

$ LayerTest[21215:c07] Outside:
$ LayerTest[21215:c07] Inside:

2. Explicit Animation

Core Animation also provides explicit animation types, which can either animate a property directly or override the default layer behavior.

CABasicAnimation, CAKeyframeAnimation, CATransition, and CAAnimationGroup are all explicit animation types that can be submitted directly to a CALayer.

Whether an animation is implicit or explicit, once it is submitted to the layer it goes through a series of processing steps and is finally drawn by the rendering process described above.

Facebook Pop Introduction

In the computer world there is no truly continuous animation. The animation you see on screen is essentially discrete; it merely appears continuous once the number of discrete frames per second is high enough.

In iOS, the maximum frame rate is 60 frames per second. iOS provides the Core Animation framework: developers only supply key-frame information, for example the end value of an animatable property, and the intermediate values are interpolated by an algorithm to produce the animation. CAMediaTimingFunction provides the timing curves over which Core Animation interpolates.

Using Pop is similar to using Core Animation: both involve an animation object and a carrier for the animation. The difference is that Core Animation's carrier can only be a CALayer, while a Pop animation's carrier can be any NSObject-based object. Of course, in most cases an animation is a visual effect displayed on the interface, so the carrier is usually, directly or indirectly, a UIView or CALayer.

However, if you only want to study a Pop animation's value curve, you can apply it to an ordinary data object. When a Pop animation is applied to a CALayer, the layer's property values and its presentationLayer's values stay consistent at every moment of the animation, which is not true of Core Animation.

The following example shows how to use a custom Pop readBlock and writeBlock to animate a custom property:

POPAnimatableProperty *prop = [POPAnimatableProperty propertyWithName:@"com.foo.radio.volume" initializer:^(POPMutableAnimatableProperty *prop) {
    // read value
    prop.readBlock = ^(id obj, CGFloat values[]) {
        values[0] = [obj volume];
    };
    // write value
    prop.writeBlock = ^(id obj, const CGFloat values[]) {
        [obj setVolume:values[0]];
    };
    // dynamics threshold
    prop.threshold = 0.01;
}];

POPSpringAnimation *anim = [POPSpringAnimation animation];
anim.property = prop;

The core dependency of Pop's implementation is CADisplayLink.

An earlier article, "Introducing Facebook Pop", describes how to use Facebook Pop.

Introduction to AsyncDisplay

Painting tasks that block the main thread fall into three main categories: layout (view layout and text width/height calculation), rendering (text rendering, image decoding, image drawing), and UIKit object creation and release. Apart from operations that must happen on the main thread because they touch UIKit or Core Animation, the rest can be moved to background threads for asynchronous execution.

AsyncDisplay abstracts the UIView relationship into the ASDisplayNode class. ASDisplayNode is thread-safe and can be created and modified on background threads. When a node is first created, it creates no UIView or CALayer internally; only when the main thread first accesses its view or layer property is the corresponding object generated. When its properties (such as frame or transform) change, they are not synchronized immediately to the view or layer it holds; instead the changed values are saved in internal intermediate variables and later applied to the internal view or layer by a separate mechanism. This is what allows asynchronous, concurrent operation.

Like Core Animation, AsyncDisplay's drawing is triggered by registering an observer event in the runloop.
