Off-screen rendering learning notes


I. Conceptual understanding

In OpenGL, the GPU can render to the screen in the following two ways:

    • On-Screen Rendering

On-screen rendering refers to rendering performed by the GPU in the screen buffer that is currently used for display.

    • Off-screen Rendering

Off-screen rendering means that the GPU creates a new buffer outside the current screen buffer and performs the rendering operation there.

Figure 1-1

In general, the CPU, GPU, and display in a computer system work together as follows: the CPU computes the display content and submits it to the GPU; the GPU renders the result into the frame buffer; the video controller then, driven by the VSync signal (see Figure 1-1), reads the frame buffer line by line and, after a possible digital-to-analog conversion, passes the data to the display.

II. Off-screen rendering triggers

Off-screen drawing is triggered when any of the following properties is set (see the sketch after the list):

    • shouldRasterize (rasterization)
    • masks
    • shadows
    • edge antialiasing
    • group opacity
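
For illustration, here is a minimal Swift sketch of layer configurations that trigger such off-screen passes; the view name and the concrete values are made up for the example.

```swift
import UIKit

// Minimal sketch: layer settings that commonly trigger off-screen rendering.
// The view and the concrete values are made up for illustration.
let badgeView = UIImageView(image: UIImage(named: "avatar"))

// Masking (e.g. rounding corners with masksToBounds) forces an off-screen pass.
badgeView.layer.cornerRadius = 8
badgeView.layer.masksToBounds = true

// A shadow is rendered off-screen as well.
badgeView.layer.shadowColor = UIColor.black.cgColor
badgeView.layer.shadowOpacity = 0.4
badgeView.layer.shadowOffset = CGSize(width: 0, height: 2)

// Group opacity composites the layer and its sublayers off-screen first.
badgeView.layer.allowsGroupOpacity = true
badgeView.alpha = 0.8
```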

Note that when shouldRasterize is set to YES, the rasterized content is cached when off-screen drawing is triggered; if the corresponding layer and its sublayers do not change, the cache can be reused directly for the next frame. This can improve rendering performance considerably.

If the other properties are enabled, there is no cache, and off-screen drawing occurs on every frame.
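
A minimal sketch of the rasterization case, assuming a layer whose content stays static between frames; note that rasterizationScale should be set explicitly, otherwise the cached bitmap looks blurry on Retina screens.

```swift
import UIKit

// Sketch: enabling rasterization so the off-screen result is cached and reused
// for the next frame while the layer and its sublayers stay unchanged.
let cardView = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 120))
cardView.layer.shouldRasterize = true
// Match the screen scale; otherwise the cached bitmap looks blurry on Retina displays.
cardView.layer.rasterizationScale = UIScreen.main.scale
```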

Compared with on-screen rendering, off-screen rendering is very expensive, mainly in two respects:

1. Creating a new buffer

To perform off-screen rendering, a new buffer must first be created.

2. Context Switches

The whole off-screen rendering process requires multiple context switches: first from the current screen (on-screen) context to the off-screen context, and then, once off-screen rendering is finished, back from off-screen to on-screen so that the contents of the off-screen buffer can be displayed. Context switches carry a high cost.

After a VSync signal arrives, the system graphics service notifies the app via the CADisplayLink mechanism, and the app's main thread starts computing the display content on the CPU: view creation, layout calculation, image decoding, text drawing, and so on. The CPU then submits the computed content to the GPU, which transforms, composites, and renders it. The GPU submits the rendered result to the frame buffer and waits for the next VSync signal before it appears on the screen. Because of vertical synchronization, if the CPU or GPU has not finished submitting its content within one VSync interval, that frame is discarded and must wait for the next opportunity to be displayed, while the screen keeps showing the previous content. This is the cause of interface stutter. Whether it is the CPU or the GPU that blocks the display pipeline, the result is dropped frames, so CPU and GPU pressure need to be evaluated and optimized separately during development.
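
Since CADisplayLink is mentioned above, here is a hedged sketch of one common way to observe dropped frames in practice: a CADisplayLink fires once per refresh, so counting ticks over a one-second window gives a rough FPS figure. The class name and log format are illustrative, not from the original note.

```swift
import UIKit

// Rough FPS meter: a CADisplayLink fires once per screen refresh, so counting
// ticks over a one-second window approximates the achieved frame rate.
final class FPSMonitor {
    private var link: CADisplayLink?
    private var frameCount = 0
    private var windowStart: CFTimeInterval = 0

    func start() {
        link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link?.add(to: .main, forMode: .common)
    }

    func stop() {
        link?.invalidate()
        link = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        if windowStart == 0 { windowStart = link.timestamp; return }
        frameCount += 1
        let elapsed = link.timestamp - windowStart
        guard elapsed >= 1 else { return }
        let fps = Double(frameCount) / elapsed
        print(String(format: "FPS: %.1f", fps))   // well below 60 suggests dropped frames
        frameCount = 0
        windowStart = link.timestamp
    }
}
```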

The iOS display system is driven by the VSync signal, which is generated by a hardware clock and emitted 60 times per second (this value depends on the device hardware; on a real iPhone it is usually about 59.97 Hz). After the iOS graphics service receives the VSync signal, it notifies the app via IPC. The app's run loop registers a CFRunLoopSource based on a mach port to receive the clock signal notification, and the source callback then drives the app's animation and display.

Core Animation registers an observer in the run loop that listens for the BeforeWaiting and Exit activities. When a touch event arrives, the run loop is woken up and the app's code performs operations such as creating and resizing the view hierarchy, setting a UIView's frame, modifying a CALayer's opacity, or adding an animation to a view. These changes are marked on the CALayer and submitted via CATransaction to an intermediate state. When all of these operations are done and the run loop is about to go to sleep (or exit), the observers interested in those activities are notified. At that point, Core Animation's observer, in its callback, submits all the intermediate state to the GPU for display. If there is an animation, the display link's stable refresh mechanism keeps waking up the run loop, so the observer callback keeps being triggered, updating the animation's property values over time and drawing accordingly.
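
As a hedged sketch (this is not Core Animation's actual code), the same run-loop hook described above can be reproduced with a CFRunLoopObserver registered for the BeforeWaiting and Exit activities:

```swift
import Foundation

// A hedged sketch (not Core Animation's actual code): register a run-loop
// observer for the BeforeWaiting and Exit activities, the same hook the text
// describes Core Animation using to commit pending layer changes.
let activities: CFRunLoopActivity = [.beforeWaiting, .exit]
let observer = CFRunLoopObserverCreateWithHandler(
    kCFAllocatorDefault,
    activities.rawValue,
    true,       // repeats on every pass through the run loop
    0xFFFFFF    // order value chosen arbitrarily for this sketch
) { _, activity in
    if activity.contains(.beforeWaiting) {
        // Core Animation would flush pending CATransaction state here.
        print("Run loop is about to sleep")
    } else {
        print("Run loop is exiting")
    }
}
CFRunLoopAddObserver(CFRunLoopGetMain(), observer, .commonModes)
```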

Core Animation is an abstraction over OpenGL ES, so most of its rendering work is submitted directly to the GPU and does not block the main thread. By contrast, most Core Graphics/Quartz 2D drawing is done synchronously by the CPU on the main thread, for example drawing with a CGContext in a custom UIView's drawRect:.
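
For example, a minimal sketch of that CPU-bound path: drawing with a CGContext inside a custom view's draw(_:) (drawRect: in Objective-C). The view and shape are made up for illustration.

```swift
import UIKit

// Sketch of the CPU-bound path: Core Graphics drawing inside a custom view's
// draw(_:) (drawRect: in Objective-C) runs synchronously on the main thread.
final class CircleView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setFillColor(UIColor.systemBlue.cgColor)
        context.fillEllipse(in: rect.insetBy(dx: 4, dy: 4))
    }
}
```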

III. AsyncDisplayKit introduction

The main-thread work that causes blocking falls into three broad categories: layout (view layout and text width/height calculation), rendering (text rendering, image decoding, and image drawing), and UIKit object creation, update, and release. Apart from the UIKit and Core Animation operations that must be performed on the main thread, the rest can be moved to background threads and executed asynchronously.
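
As an illustration of one of those categories, here is a hedged sketch of moving image decoding to a background queue; the function name and queue choice are assumptions, not part of the original note.

```swift
import UIKit

// Hedged sketch: decode an image on a background queue and only touch UIKit
// on the main thread. The function name and queue choice are assumptions.
func setDecodedImage(named name: String, on imageView: UIImageView) {
    DispatchQueue.global(qos: .userInitiated).async {
        guard let image = UIImage(named: name) else { return }
        // Force decompression by redrawing the image into a bitmap context.
        let renderer = UIGraphicsImageRenderer(size: image.size)
        let decoded = renderer.image { _ in image.draw(at: .zero) }
        DispatchQueue.main.async {
            imageView.image = decoded   // UIKit access stays on the main thread
        }
    }
}
```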

AsyncDisplayKit abstracts the UIView/CALayer relationship into the ASDisplayNode class. ASDisplayNode is thread-safe and can be created and modified on a background thread. When a node is first created, it does not create a UIView or CALayer internally; only when the view or layer property is first accessed on the main thread does it generate the corresponding object. When its properties (such as frame/transform) are changed, they are not immediately synchronized to the view or layer it holds; instead, the changed values are saved in internal intermediate variables and, when needed, applied to the internal view or layer in one pass through a separate mechanism. This is what makes asynchronous, concurrent operation possible.
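
A much-simplified sketch of that deferred-property pattern follows; this is not ASDisplayNode's real implementation, and the class and members are invented for illustration.

```swift
import UIKit

// A much-simplified sketch (not ASDisplayNode's real implementation) of the
// pattern described above: property changes made on any thread are stored in
// intermediate variables and applied to the backing view later, on the main thread.
final class DisplayNodeSketch {
    private let lock = NSLock()
    private var pendingFrame: CGRect?
    private var backingView: UIView?    // created lazily, main thread only

    // Thread-safe: records the desired value without touching UIKit.
    func setFrame(_ frame: CGRect) {
        lock.lock(); pendingFrame = frame; lock.unlock()
    }

    // Main-thread only: lazily creates the view and flushes pending state to it.
    var view: UIView {
        assert(Thread.isMainThread)
        if backingView == nil { backingView = UIView() }
        lock.lock(); let frame = pendingFrame; pendingFrame = nil; lock.unlock()
        if let frame = frame { backingView!.frame = frame }
        return backingView!
    }
}
```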

AsyncDisplayKit's implementation relies on the same trigger: like Core Animation, it registers an observer in the run loop to commit its pending changes.
A good article on AsyncDisplayKit is also worth reading: "iOS Keep the Interface Fluent and AsyncDisplay Introduction".

