Android Application UI Hardware-Accelerated Rendering: A Brief Introduction and Learning Plan

Tags: skia

The fluency of the Android system has often been compared unfavorably with iOS. On one hand, this is related to the uneven quality of Android devices; on the other hand, it is related to the implementation of the Android system itself. For example, before Android 3.0, Android application UI drawing did not support hardware acceleration. Starting from Android 4.0, however, the system has been optimized around the goal of running "fast, smooth, and responsively". This article gives a brief introduction to these optimizations and lays out a learning plan.

Lao Luo's Sina Weibo: http://weibo.com/shengyangluo, welcome to follow!

Note that when we say Android did not support hardware-accelerated UI drawing, we are referring to 2D UI drawing in Android applications. 3D UIs, such as games, have always been rendered with hardware acceleration. In addition, from the three earlier series of articles — the overview of the relationship between Android applications and the SurfaceFlinger service, the brief introduction to the SurfaceFlinger service of the Android surface mechanism, and the brief introduction to the Android application window (Activity) implementation framework — we know that the Android UI goes from being drawn to being displayed on the screen in two steps: the first step happens on the Android application process side, and the second step happens on the SurfaceFlinger process side. The first step draws the UI into a graphic buffer and hands that graphic buffer to the second step for compositing and display on the screen. The second step, UI composition, has always been performed with hardware acceleration.

Before hardware-accelerated rendering of the Android application UI was supported, the UI was drawn in software. To better understand hardware-accelerated rendering, let us first review the software rendering technique described in the Android application window (Activity) implementation framework series of articles, as shown in Figure 1:


Figure 1 Android Application UI software rendering process

On the Android application process side, each window is associated with a Surface. Whenever a window needs to draw its UI, it calls the lock member function of its associated Surface to obtain a Canvas, which essentially dequeues a graphic buffer from the SurfaceFlinger service. The Canvas encapsulates the 2D UI drawing interface provided by Skia, and drawing happens on top of the graphic buffer obtained earlier. After drawing is complete, the Android application process calls the unlockAndPost member function of the Surface obtained earlier to request that the result be displayed on the screen, which essentially queues the graphic buffer back to the SurfaceFlinger service so that SurfaceFlinger can composite the contents of the graphic buffer and display them on the screen.
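To make the software path concrete, here is a minimal sketch using the public Surface API, which follows the same lock/draw/unlockAndPost pattern the framework uses internally. The surface parameter is assumed to come from elsewhere (for example, a SurfaceHolder), and error handling is omitted.

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.Surface;

public class SoftwareDrawExample {
    // Draw one frame into the window's graphic buffer in software.
    static void drawFrame(Surface surface) {
        // lockCanvas() dequeues a graphic buffer from SurfaceFlinger and
        // wraps it in a Skia-backed Canvas.
        Canvas canvas = surface.lockCanvas(null);
        try {
            canvas.drawColor(Color.WHITE);
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.BLUE);
            canvas.drawCircle(100f, 100f, 50f, paint);
        } finally {
            // unlockCanvasAndPost() queues the buffer back to SurfaceFlinger
            // so it can be composited and displayed.
            surface.unlockCanvasAndPost(canvas);
        }
    }
}
```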

Now let us look at the hardware-accelerated rendering technique for the Android application UI, as shown in Figure 2:


Figure 2 Android Application UI hardware accelerated rendering process

First, let us clarify what hardware-accelerated rendering actually is: rendering performed by the GPU. The GPU is a piece of hardware; user space cannot use it directly, but only indirectly through the driver that the GPU vendor implements according to the OpenGL specification. In other words, on a device that supports GPU hardware-accelerated rendering, when an Android application calls the OpenGL interface to draw its UI, that UI is rendered with hardware acceleration. So in the following description, when we refer to the GPU, hardware acceleration, and OpenGL, they mean essentially the same thing.

As you can see from Figure 2, hardware-accelerated rendering, like software rendering, first dequeues a graphic buffer from the SurfaceFlinger service before rendering begins. For hardware-accelerated rendering, however, this graphic buffer is wrapped in an ANativeWindow and passed to OpenGL to initialize the hardware-accelerated rendering environment. In Android, ANativeWindow and Surface can be considered equivalent, except that ANativeWindow is typically used in the native layer while Surface is typically used in the Java layer. We can also think of ANativeWindow and Surface as the bridge between graphics rendering libraries such as Skia and OpenGL on one side, and the graphics system at the bottom of the operating system on the other.

After OpenGL obtains an ANativeWindow and finishes initializing the hardware-accelerated rendering environment, the Android application can call the APIs provided by OpenGL to draw its UI, and the drawn content is saved in the graphic buffer obtained earlier. When drawing is complete, the Android application calls the eglSwapBuffers interface provided by the libEGL library to request that the drawn UI be displayed on the screen. This is essentially the same as in the software rendering process: a graphic buffer is queued to the SurfaceFlinger service so that SurfaceFlinger can composite its contents and display them on the screen.
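As a rough illustration of this flow, here is a minimal sketch using the Java EGL14 and GLES20 bindings: it wraps a window Surface (an ANativeWindow underneath) as an EGL window surface, creates a context, issues GPU drawing commands, and finally calls eglSwapBuffers. The surface parameter is an assumption, and error checking is omitted.

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES20;
import android.view.Surface;

public class EglRenderExample {
    // Initialize an EGL rendering environment on a window Surface and draw one frame.
    static void renderOnce(Surface surface) {
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        // Pick an RGBA8888 config that supports OpenGL ES 2.0.
        int[] configAttribs = {
                EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8,
                EGL14.EGL_BLUE_SIZE, 8, EGL14.EGL_ALPHA_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

        // Wrap the Surface (an ANativeWindow underneath) as an EGL window surface.
        EGLSurface eglSurface = EGL14.eglCreateWindowSurface(display, configs[0], surface,
                new int[] { EGL14.EGL_NONE }, 0);
        EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT,
                new int[] { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);

        // GPU drawing commands go here; a simple clear stands in for real UI drawing.
        GLES20.glClearColor(1f, 1f, 1f, 1f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        // eglSwapBuffers() queues the rendered graphic buffer back to SurfaceFlinger.
        EGL14.eglSwapBuffers(display, eglSurface);
    }
}
```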

For a simplified version of the OpenGL environment initialization and drawing involved in hardware-accelerated rendering of the Android application UI, refer to the boot animation implementation described in the earlier article analyzing the Android boot screen display process. In that article, the boot animation is implemented by the /system/bin/bootanimation program, which can be viewed as a native application that was not developed with the Android SDK.

In this series of articles, we will analyze the hardware-accelerated rendering technology of the Android application UI based on the Android 5.0 source code. To better understand the Android 5.0 implementation, it helps to know how hardware-accelerated rendering of the Android application UI has evolved since Android 3.0:

1. Android 3.0, the Honeycomb release, introduced the OpenGLRenderer graphics rendering library, allowing the Android application UI to optionally use hardware-accelerated rendering (a minimal sketch of the application-level and window-level switches appears after this list).

2. Android 4.0, the Ice Cream Sandwich release, enables hardware-accelerated rendering of the Android application UI by default on devices that support it, and adds the TextureView control, which directly supports drawing the UI in the form of an OpenGL texture.

3. Android 4.1, 4.2 and 4.3, the Jelly Bean releases, added the Project Butter features, including: A. using the VSync signal to synchronize UI drawing and animation so that they can achieve a steady frame rate of 60fps; B. triple buffering, to alleviate the problem of the GPU and CPU drawing rhythms getting out of step; C. synchronizing user input, such as touch events, to the arrival of the next VSync signal before processing it; D. predicting the user's touch behavior for better interactive response; E. applying a CPU Input Boost every time the user touches the screen, to reduce processing latency.

4. Android 4.4, the KitKat release, improves application efficiency by optimizing memory usage and optionally supporting the ART runtime in place of the Dalvik virtual machine, making the UI smoother.

5. Android 5.0, the Lollipop release, introduces a compacting GC in the ART runtime, which further optimizes the memory usage of Android applications; the ART runtime formally replaces the Dalvik virtual machine; and Android applications gain a Render Thread that is dedicated to UI rendering and animation display.
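As promised above, here is a minimal sketch of the documented application-level and window-level hardware-acceleration switches (the manifest attribute android:hardwareAccelerated and the FLAG_HARDWARE_ACCELERATED window flag). The activity class itself is just a placeholder.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.WindowManager;

public class HardwareAccelDemoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Application/activity level (in AndroidManifest.xml):
        //   <application android:hardwareAccelerated="true" ...>
        //   <activity android:hardwareAccelerated="false" ...>
        // Window level: hardware acceleration can be forced on (but not off) for a
        // single window by setting this flag before the content view is added.
        getWindow().setFlags(
                WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED,
                WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED);
        // Once a view is attached, View.isHardwareAccelerated() and
        // Canvas.isHardwareAccelerated() report the effective rendering mode.
    }
}
```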

From this evolution of Android application UI hardware-accelerated rendering, we can see that the Android system really is putting the grand plan of running "fast, smooth, and responsively" into practice.

With these basics in place, let us come back to how Android 5.0 renders application windows and animations with hardware acceleration, as shown in Figure 3:


Figure 3 Hardware-accelerated rendering framework for Android application windows and animations

In an Android application window, each View is abstracted as a Render Node, and if a View has a background set, that background is also abstracted as a Render Node. This is because the OpenGLRenderer library has no concept of a View: every element that can be drawn is abstracted as a Render Node.

Each Render Node is associated with a Display List Renderer. This introduces another concept, the Display List. Note that this Display List is not the display list in OpenGL, although the two are conceptually similar. A Display List is a buffer of drawing commands. That is, when a View's member function onDraw is called and we call the drawXXX member functions of the Canvas passed in as a parameter, nothing is actually drawn yet; the corresponding drawing commands and their parameters are merely recorded into a Display List. Later, the Display List Renderer executes the commands in the Display List, an operation known as Display List replay.

What are the benefits of introducing the Display List concept? There are two main ones. First, when drawing the next frame, if a View's content does not need to be updated, its Display List does not need to be rebuilt, which means its onDraw member function does not need to be called. Second, when drawing the next frame, if only a simple property of a View has changed, such as its position or alpha value, its Display List does not need to be rebuilt either; it is enough to modify the corresponding property in the Display List built previously, which again means onDraw does not need to be called. When drawing a frame of the application window, these two optimizations avoid executing a great deal of application code and therefore save considerable CPU time.
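The second benefit can be illustrated with the standard View APIs. The following is only a sketch of the two cases; the property values are arbitrary.

```java
import android.view.View;

public class DisplayListExample {
    // Property-only change: the Display List recorded for 'view' is reused;
    // only its properties are updated, and onDraw() is not called again.
    static void moveAndFade(View view) {
        view.setTranslationX(50f);
        view.setAlpha(0.5f);
    }

    // Content change: invalidate() marks the view dirty, so its Display List
    // is rebuilt on the next frame by calling onDraw() again.
    static void redrawContent(View view) {
        view.invalidate();
    }
}
```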

Note that only Views rendered with hardware acceleration are associated with Render Nodes and therefore use Display Lists. Not all 2D UI drawing commands are currently supported by the GPU; the unsupported operations are listed in the official documentation: http://developer.android.com/guide/topics/graphics/hardware-accel.html. A View that uses 2D UI drawing commands not supported by the GPU can only be rendered in software. The specific approach is to create a new Canvas whose backing store is a bitmap, so that the View's drawing happens on this bitmap. After drawing is complete, the bitmap is recorded in the Display List of its parent View. When the commands in the parent View's Display List are executed, the bitmap recorded in it is drawn with OpenGL commands.
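When a custom View depends on drawing operations that the GPU pipeline does not support, the usual remedy is to force that View onto a software layer, which is exactly the bitmap-backed path described above. A minimal sketch, with the actual drawing left as a placeholder:

```java
import android.content.Context;
import android.graphics.Canvas;
import android.view.View;

public class SoftwareLayerView extends View {
    public SoftwareLayerView(Context context) {
        super(context);
        // Force this view to be rendered in software: its onDraw() output is
        // rasterized into a bitmap-backed layer, and that bitmap is then
        // recorded into the parent's Display List and composited by the GPU.
        setLayerType(View.LAYER_TYPE_SOFTWARE, null);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Drawing operations that are not hardware-accelerated can be issued
        // here, since this canvas is software-backed.
    }
}
```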

On the other hand, the TextureView introduced in Android 4.0, mentioned earlier, is not drawn through a Display List. Because its underlying implementation is directly an OpenGL texture, it can skip the Display List intermediate layer to improve efficiency. The drawing of this OpenGL texture is encapsulated by a Layer Renderer. The Layer Renderer and the Display List Renderer can be thought of as concepts at the same level: both draw UI elements through OpenGL commands, but the former operates on an OpenGL texture while the latter operates on a Display List.
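For reference, here is a minimal sketch of how application code typically drives a TextureView through its public SurfaceTextureListener callback; the content drawn here is just a placeholder.

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.SurfaceTexture;
import android.view.Surface;
import android.view.TextureView;

public class TextureViewExample implements TextureView.SurfaceTextureListener {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int width, int height) {
        // The SurfaceTexture backs an OpenGL texture; content rendered into it is
        // composited into the view hierarchy without going through a Display List.
        Surface surface = new Surface(surfaceTexture);
        Canvas canvas = surface.lockCanvas(null);
        canvas.drawColor(Color.BLACK);   // placeholder content
        surface.unlockCanvasAndPost(canvas);
        surface.release();
    }

    @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) { }
    @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
    @Override public void onSurfaceTextureUpdated(SurfaceTexture st) { }
}

// Usage: textureView.setSurfaceTextureListener(new TextureViewExample());
```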

We know that the Views of an Android application window are organized in a tree structure. Whether a View is rendered with hardware acceleration, rendered in software, or is a special TextureView, while its onDraw member function is being called it ends up drawing its own UI into the Display List of its parent View. The topmost parent View is the root View, and the Render Node associated with it is called the Root Render Node. In other words, the Display List of the Root Render Node ultimately contains all the drawing commands for a window. When the next frame of the window is drawn, the Display List of the Root Render Node is first drawn into a graphic buffer by an OpenGL Renderer using OpenGL commands, and this graphic buffer is finally handed over to the SurfaceFlinger service for compositing and display.

The UI drawing mechanism analyzed above does not yet involve animation. When a View needs to run an animation, we can call that View's animate member function to obtain a ViewPropertyAnimator. Like a View, a ViewPropertyAnimator is also abstracted as a Render Node. However, this Render Node is handled differently from a View's Render Node: it is registered with the Render Thread of the Android application, and the Render Thread executes the animation it contains until the animation ends. This means the main thread of the Android application does not need to process the animation, so the main thread can concentrate on handling user input, which makes the Android application UI more responsive.
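A minimal sketch of starting such an animation from application code; the target values and duration are arbitrary.

```java
import android.view.View;

public class AnimateExample {
    // Slide the view 200 pixels to the right and fade it to 50% alpha over 300ms.
    // On Android 5.0+ the resulting ViewPropertyAnimator is backed by a Render Node
    // whose animation is driven on the Render Thread rather than the main thread.
    static void slideAndFade(View view) {
        view.animate()
                .translationX(200f)
                .alpha(0.5f)
                .setDuration(300)
                .start();
    }
}
```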

Further, if we call the ViewPropertyAnimator's withLayer member function, the animation of the target View can be optimized one step further. Recall the characteristic of TextureView: it draws directly through an OpenGL texture, skipping the Display List intermediate step. Similarly, when we call the ViewPropertyAnimator's withLayer member function, the layer type of the target View is temporarily changed to LAYER_TYPE_HARDWARE. A View with layer type LAYER_TYPE_HARDWARE is implemented directly through an OpenGL Frame Buffer Object (FBO), which also improves rendering efficiency. When the animation finishes, the layer type of the target View is restored to whatever was originally set.
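For example, the same kind of animation with a temporary hardware layer (a sketch; withLayer() takes effect only for the duration of this animation):

```java
import android.view.View;

public class WithLayerExample {
    // Fade the view out using a temporary hardware layer: the view is rendered once
    // into an FBO-backed layer, and only that layer's alpha is animated.
    static void fadeOutWithLayer(View view) {
        view.animate()
                .alpha(0f)
                .setDuration(300)
                .withLayer()   // temporarily switches to LAYER_TYPE_HARDWARE
                .start();
    }
}
```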

This is the hardware-accelerated rendering framework for Android application windows and animations. The Render Thread mentioned here needs further explanation. The Render Thread was introduced in Android 5.0 to share the workload of the Android application's main thread. Before Android 5.0, the main thread was responsible not only for rendering the UI but also for handling user input. By introducing the Render Thread, the UI rendering work is moved off the main thread and handled by the Render Thread, which lets the main thread handle user input more promptly and efficiently. This makes the UI more responsive while also improving the efficiency of UI drawing.

The interaction model between the main thread and the Render Thread is shown in Figure 4:

Figure 4 The interaction model between the Android application main thread and the Render Thread

The main thread is primarily responsible for calling the Views' onDraw member functions to build their Display Lists and then, when the next VSync signal arrives, issuing a drawFrame command to the Render Thread through a Render Proxy object. The Render Thread maintains an internal task queue, and the drawFrame command sent from the main thread is saved in this task queue, waiting for the Render Thread to process it.
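The Render Proxy and the drawFrame command are framework internals, but application code can observe the same VSync pulses that drive this scheduling through the public Choreographer API. A small sketch, purely for illustration:

```java
import android.view.Choreographer;

public class VsyncObserverExample implements Choreographer.FrameCallback {
    // ViewRootImpl schedules its traversals (measure/layout/draw) with the same
    // Choreographer mechanism; this callback merely observes the VSync pulses.
    @Override
    public void doFrame(long frameTimeNanos) {
        // frameTimeNanos is the timestamp of the VSync signal that started this frame.
        Choreographer.getInstance().postFrameCallback(this); // re-register for the next frame
    }

    public void start() {
        Choreographer.getInstance().postFrameCallback(this);
    }
}
```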

For animation display, the interaction model between the main thread and the Render Thread is shown in Figure 5:

Figure 5 The animation interaction model between the Android application main thread and the Render Thread

In the Java layer, an animation implemented through a Render Node is abstracted as a Render Node Animator. This Render Node Animator registers the Render Node representing the animation with the Render Thread; this is done by attaching the Render Node to the Root Render Node of the Android application window. Internally, the Render Thread then encapsulates this Render Node in an Animator Handle object and is responsible for executing the animation it describes until the animation ends.

At this point, the key concepts involved in hardware-accelerated rendering of the Android application UI have all been introduced. Next, we will analyze the implementation in more depth along the following four scenarios:

1. Analysis of the initialization process of the Android application UI hardware-accelerated rendering environment;

2. Analysis of the Display List build process in Android application UI hardware-accelerated rendering;

3. Analysis of Display List replay in Android application UI hardware-accelerated rendering;

4. Analysis of the animation execution process in Android application UI hardware-accelerated rendering.

Through these four scenarios, we can gain a deep understanding of the hardware-accelerated rendering technology of the Android application UI. For more information, you can also follow Lao Luo's Sina Weibo: http://weibo.com/shengyangluo.
