Drawing and rendering of the iOS interface

Source: Internet
Author: User
Tags: uikit

Drawing and rendering of the interface

How a UIView is displayed on the screen.

This starts with the run loop. The screen refreshes at 60fps, which means a frame must be drawn roughly every 16.7ms. Within that time the CPU has to create the views' buffers and draw the views' content, then hand the buffers to the GPU for rendering, which includes compositing multiple views, rendering textures, and so on, before the result is finally displayed on screen. If too much work is packed into those 16.7ms and the CPU or GPU cannot finish within the allotted time, the result is stuttering, that is, dropped frames.

60fps is the optimal frame rate given by Apple, but in practice, as long as we can keep the frame rate stable at around 30fps there will be no visible lag; 60fps matters more for games. So if your app can guarantee that a frame is drawn every 33.4ms, it will basically not stutter.
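A rough way to observe this frame budget in practice is a CADisplayLink, which fires once per screen refresh. The following is only a minimal sketch (the FrameWatcher class and its properties are purely illustrative, not something from this article) that logs whenever a callback arrives later than the ~16.7ms budget:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>

    @interface FrameWatcher : NSObject
    @property (nonatomic, strong) CADisplayLink *displayLink;
    @property (nonatomic, assign) CFTimeInterval lastTimestamp;
    @end

    @implementation FrameWatcher

    - (void)start {
        // CADisplayLink calls back once per screen refresh (~16.7ms at 60fps).
        self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
        [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }

    - (void)tick:(CADisplayLink *)link {
        if (self.lastTimestamp > 0) {
            CFTimeInterval delta = link.timestamp - self.lastTimestamp;
            if (delta > 1.0 / 60.0 + 0.005) {
                // The previous frame overran the 16.7ms budget: a dropped frame.
                NSLog(@"Frame took %.1f ms", delta * 1000.0);
            }
        }
        self.lastTimestamp = link.timestamp;
    }

    @end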

In general, a UIView goes from drawing to rendering through the following steps:

    • Each UIView has a CALayer, and each layer has a contents property; this contents points to a cache called the backing store (see the sketch after this list).

    • Drawing and rendering of a UIView are two separate processes. When a UIView is drawn, the CPU executes drawRect: and writes the data into the backing store through the graphics context.

    • Once the backing store is filled, it is handed over to the GPU through the render server, and the bitmap data in the backing store is rendered to the screen.
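As a concrete illustration of the first point, a layer's contents can be filled with a bitmap directly; this is only a minimal sketch, not code from the article (the "avatar" asset name is made up):

    // The layer's contents property holds the bitmap that the render server composites.
    // Assigning a CGImage directly fills it without going through drawRect:.
    UIImage *image = [UIImage imageNamed:@"avatar"];   // illustrative asset name
    UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    view.layer.contents = (__bridge id)image.CGImage;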

The figure below shows the process from the CPU to the GPU:

[Figure: Pic_5.jpeg]

In practice, the CPU does the rendering work and puts the content into the cache, while the GPU is responsible for reading the data from the cache and rendering it to the screen.

As shown in the figure below:

[Figure: Pic_4.jpeg]

The whole process comes down to one thing: the CPU puts the prepared bitmap into RAM, and the GPU moves this block of memory into VRAM.
The GPU can spend at most about 16.7ms to finish processing one frame, so the 60fps mentioned at the beginning is actually the highest frequency the GPU can handle.

As a result, the GPU has two challenges:

    • Moving data from RAM to VRAM

    • Rendering the texture to the screen

Of the two, the bottleneck is basically the second one. Rendering textures has to deal with the following problems:

Compositing:

Compositing refers to the process of combining multiple textures together. In UIKit terms, it corresponds to multiple views being composited together (drawRect: only fires once the view has been added with addSubview:), for example:

    [self.view addSubview:subview];

If the views do not overlap, the GPU only needs to do normal rendering. If multiple views overlap, the GPU also needs to do blending.
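One common way to reduce blending is to make views opaque with a solid background color, so the GPU can skip compositing the pixels underneath; a minimal sketch (the label is just an example):

    // An opaque view with a solid background color lets the GPU skip blending
    // with whatever sits underneath it.
    UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 40)];
    label.opaque = YES;
    label.backgroundColor = [UIColor whiteColor];  // a clear background would force blending
    label.text = @"Hello";
    [self.view addSubview:label];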

Size (scaling):

This problem mainly concerns images. If a 400x400 image in memory is dropped into a 100x100 UIImageView without any processing, the GPU has to scale the large image down for the small display area, which means resampling pixels and taking pixel alignment into account; the cost of this sampling is very high, and the amount of computation soars.
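One way to avoid that cost is to resize the image to the target size off the main thread before handing it to the image view. Below is a minimal sketch using Core Graphics (the 100x100 target size and the helper method are illustrative, not from the original article):

    // Downscale the image to the image view's size on a background queue,
    // so the GPU does not have to resample a large texture when displaying it.
    - (void)setDownscaledImage:(UIImage *)image onImageView:(UIImageView *)imageView {
        CGSize targetSize = CGSizeMake(100, 100);
        dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
            UIGraphicsBeginImageContextWithOptions(targetSize, YES, 0);  // 0 = use the screen scale
            [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
            UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            dispatch_async(dispatch_get_main_queue(), ^{
                imageView.image = scaled;
            });
        });
    }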

Off-screen rendering (offscreen rendering and masks):

Let's first take a look at the approximate structure of the graphics frameworks in iOS:

[Figure: Pic_3.jpeg]

UIKit is the framework that manages the user interface and interaction in iOS, but UIKit itself is built on top of Core Animation, and Core Animation in turn rests on two parts: OpenGL ES and Core Graphics. OpenGL ES calls the underlying GPU directly to do the rendering; Core Graphics is a CPU-based drawing engine.

What we call hardware acceleration actually refers to OpenGL: Core Animation and UIKit composite and draw graphics on the GPU. Because the CPU's rendering ability is much lower than the GPU's, animations drawn by the CPU show noticeable lag.

However, some kinds of drawing trigger off-screen rendering, which adds extra GPU and CPU work.

In OpenGL, the GPU has the following two ways of rendering to the screen:

    • On-screen rendering (current-screen rendering): the GPU performs its rendering operations in the screen buffer currently used for display.

    • Off-screen rendering: the GPU opens a new buffer outside the current screen buffer and performs its rendering operations there.

The cost of off-screen rendering includes two main things:

    • Creating a new buffer

    • Context switching. The whole process of off-screen rendering requires switching contexts several times: first from the current screen (on-screen) to off-screen, and then, when the off-screen rendering is finished, switching from off-screen back to the current screen so that the result in the off-screen buffer can be displayed on the screen. Each context switch carries a high cost.

Why is off-screen rendering needed?

The reason is that when rounded corners, shadows, and masks are used, the specified blend of layer properties cannot be drawn directly to the screen without being pre-composited. That is, before the main screen content has been drawn, these layers need to be rendered off screen; once the main screen content has been drawn, the content of the off-screen buffer is then transferred onto the screen.

How to trigger off-screen rendering:

    • shouldRasterize (rasterization)

    • masks

    • shadows

    • edge antialiasing

    • group opacity

Setting any of these properties can trigger off-screen rendering and greatly reduce GPU rendering performance.
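Shadows are a typical case: setting shadow properties alone forces the GPU to work out the layer's outline off screen, whereas supplying an explicit shadowPath lets it skip that step. A minimal sketch (the card view is just an example, not from the original article):

    // A shadow without a shadowPath makes the GPU derive the layer's outline off screen;
    // providing the path explicitly avoids that extra pass.
    UIView *card = [[UIView alloc] initWithFrame:CGRectMake(20, 20, 200, 120)];
    card.layer.shadowColor = [UIColor blackColor].CGColor;
    card.layer.shadowOpacity = 0.3;
    card.layer.shadowOffset = CGSizeMake(0, 2);
    card.layer.shadowPath = [UIBezierPath bezierPathWithRect:card.bounds].CGPath;
    [self.view addSubview:card];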

CPU Rendering:

Everything above concerns off-screen rendering that happens in OpenGL ES, that is, on the GPU, but there is also a special kind of rendering done by the CPU: drawing with Core Graphics. The key point is that CPU rendering only comes into play when we override drawRect: and use Core Graphics to perform drawing operations. The whole rendering process is completed synchronously by the CPU inside the app, and the rendered bitmap is then handed to the GPU for display.
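A minimal sketch of this kind of CPU drawing, in a UIView subclass (the shape drawn here is just an example):

    // Overriding drawRect: means the CPU draws this view's content with Core Graphics;
    // the resulting bitmap goes into the backing store and is later composited by the GPU.
    - (void)drawRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
        CGContextFillEllipseInRect(context, CGRectInset(self.bounds, 10, 10));
    }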

In theory, CPU rendering should not be counted as standard off-screen rendering, but since the CPU's rendering performance is poor, this approach should also be avoided as much as possible.

Analysis

So among on-screen rendering, off-screen rendering, and CPU rendering, on-screen rendering is always the best choice. The GPU has much higher floating-point capability than the CPU, but off-screen rendering requires opening a new buffer and switching the screen context, so the choice between off-screen rendering and CPU rendering has to be made according to the actual situation.
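One illustration of that trade-off is shouldRasterize, which deliberately uses off-screen rendering to cache a composited layer; it pays off only when the content does not change from frame to frame. A minimal sketch (the badge view is just an example):

    // shouldRasterize renders the layer off screen once and caches the bitmap,
    // which helps for static content but hurts if the content changes every frame.
    UIView *badgeView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 80, 80)];
    badgeView.layer.shouldRasterize = YES;
    badgeView.layer.rasterizationScale = [UIScreen mainScreen].scale;  // match the screen scale to stay sharp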

Summary

In fact, when this was first implemented, a number of performance problems were not taken into account. Concrete graphics performance optimizations, including situations we actually encountered in our app and the corresponding solutions, will be covered in the next article.
