Overview and learning plan of hardware-accelerated rendering technology for the Android application UI
The smoothness of the Android system has always been compared with that of iOS and judged inferior. This has to do both with the hardware quality of Android devices and with the implementation of the Android system itself. For example, before 3.0, Android application UI rendering did not support hardware acceleration. Since 4.0, however, the Android system has been optimizing the UI with the goal of making it "run fast, smooth, and responsively". This article briefly introduces these optimizations and draws up a plan for learning them.
Note: the statement that the Android system did not support hardware-accelerated UI rendering refers to the 2D UI rendering of Android applications; 3D UIs, such as those of games, have always been rendered with hardware acceleration. In addition, as briefly introduced in the earlier articles on the SurfaceFlinger service of the Android Surface mechanism, on the relationship between Android applications and the SurfaceFlinger service, and on the implementation framework of the Android application window (Activity), Android application UI rendering happens in two steps: the first on the Android application process side, and the second on the SurfaceFlinger process side. The first step draws the UI into a graphic buffer and submits that graphic buffer to the second step, which composites it onto the screen. The UI composition in the second step has always been performed with hardware acceleration.
Before the Android application UI supported hardware-accelerated rendering, it was drawn entirely in software. To better understand the hardware-accelerated rendering technology of the Android application UI, let us first review the software rendering technology covered in the series of articles on the Android application window (Activity) implementation framework, as shown in Figure 1:
Figure 1 Software rendering process of the Android application UI
On the Android application process side, each window is associated with a Surface. Whenever a window needs to draw its UI, it calls the member function lockCanvas of its associated Surface to obtain a Canvas; in essence, this dequeues a Graphic Buffer from the SurfaceFlinger service. The Canvas encapsulates the 2D UI drawing interface provided by Skia, and the drawing lands on the Graphic Buffer obtained earlier. When drawing is finished, the Android application process passes the Canvas obtained above to the Surface member function unlockCanvasAndPost to show the result on the screen; in essence, this queues the Graphic Buffer back to the SurfaceFlinger service, so that the SurfaceFlinger service can composite its contents and display them on the screen.
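This lock/draw/unlock cycle can be seen directly in application code that draws onto a SurfaceView. Below is a minimal sketch using the public SurfaceHolder API, which wraps the same Surface mechanism; the drawing content itself is purely illustrative:

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.SurfaceHolder;

public class SoftwareRenderer {
    // Dequeues a graphic buffer from SurfaceFlinger (lockCanvas), draws on it
    // with the Skia-backed Canvas, then queues it back (unlockCanvasAndPost).
    public static void drawFrame(SurfaceHolder holder) {
        Canvas canvas = holder.lockCanvas();      // dequeue a Graphic Buffer
        if (canvas == null) {
            return;                               // surface not ready yet
        }
        try {
            canvas.drawColor(Color.WHITE);        // Skia draws into the buffer
            Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
            paint.setColor(Color.BLUE);
            canvas.drawCircle(200f, 200f, 100f, paint);
        } finally {
            // Queue the buffer back to SurfaceFlinger for composition.
            holder.unlockCanvasAndPost(canvas);
        }
    }
}
```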
Next, let's take a look at the hardware accelerated rendering technology of the Android app UI, as shown in Figure 2:
Figure 2 Hardware-accelerated rendering process of the Android application UI
Here we first need to clarify what hardware-accelerated rendering is: it is rendering performed by the GPU. As a piece of hardware, the GPU cannot be used directly from user space; it is used indirectly through the libraries that GPU vendors provide according to the OpenGL specification. That is to say, if a device supports GPU hardware-accelerated rendering, then when an Android application calls the OpenGL interface to draw its UI, the UI is rendered with hardware acceleration. Therefore, in the following description, GPU, hardware acceleration, and OpenGL are used interchangeably.
As shown in Figure 2, hardware-accelerated rendering begins the same way as software rendering: before drawing, a Graphic Buffer is dequeued from the SurfaceFlinger service. For hardware-accelerated rendering, however, this Graphic Buffer is wrapped in an ANativeWindow and passed to OpenGL to initialize the hardware-accelerated rendering environment. In the Android system, ANativeWindow and Surface can be considered equivalent, except that ANativeWindow is usually used in the Native layer while Surface is usually used in the Java layer. We can also regard ANativeWindow and Surface as the bridge between graphics rendering libraries such as Skia and OpenGL on one side and the underlying graphics system of the operating system on the other.
After OpenGL obtains an ANativeWindow and initializes the hardware-accelerated rendering environment, the Android application can call the APIs provided by OpenGL to draw its UI, and the drawn content is saved in the Graphic Buffer obtained earlier. When drawing is finished, the Android application calls the eglSwapBuffers interface provided by the libEGL library to show the drawn UI on the screen. In essence this is the same as in software rendering: the Graphic Buffer is queued back to the SurfaceFlinger service so that the SurfaceFlinger service can composite its contents and display them on the screen.
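The same sequence can be reproduced from the Java layer with the public EGL14 and GLES20 APIs. The following is a minimal sketch, assuming a valid android.view.Surface (which wraps an ANativeWindow underneath); error checking is omitted for brevity:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;
import android.opengl.GLES20;
import android.view.Surface;

public class EglBootstrap {
    public static void renderOnce(Surface surface) {
        // 1. Connect to the default display and initialize EGL.
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        // 2. Choose a framebuffer configuration suitable for window rendering.
        int[] attribs = {
                EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8,
                EGL14.EGL_BLUE_SIZE, 8, EGL14.EGL_ALPHA_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, numConfigs, 0);

        // 3. Wrap the Surface in an EGL window surface and create a context.
        EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
                display, configs[0], surface, new int[]{EGL14.EGL_NONE}, 0);
        EGLContext context = EGL14.eglCreateContext(display, configs[0],
                EGL14.EGL_NO_CONTEXT,
                new int[]{EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE}, 0);
        EGL14.eglMakeCurrent(display, eglSurface, eglSurface, context);

        // 4. GPU commands draw into the dequeued Graphic Buffer ...
        GLES20.glClearColor(0f, 0f, 1f, 1f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        // 5. ... and eglSwapBuffers queues the buffer back to SurfaceFlinger.
        EGL14.eglSwapBuffers(display, eglSurface);
    }
}
```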
For a simplified example of the OpenGL environment initialization and drawing involved in hardware-accelerated rendering of the Android application UI, you can refer to the implementation of the Android boot animation mentioned in the earlier article analyzing the Android system boot screen display process. There, the boot animation is implemented by the /system/bin/bootanimation program, which can be viewed as a Native application that is not developed with the Android SDK.
In this series of articles, we will analyze the hardware-accelerated rendering technology of the Android application UI based on the Android 5.0 source code. However, to better understand the Android 5.0 implementation, we first need to review how hardware-accelerated rendering of the Android application UI has evolved since Android 3.0:
1. Android 3.0, the Honeycomb release, introduced the OpenGLRenderer graphics rendering library and began to support hardware-accelerated rendering of the Android application UI.
2. Android 4.0, the Ice Cream Sandwich release, requires devices to support hardware-accelerated rendering of the Android application UI and enables it by default, and adds a TextureView control that allows the UI to be drawn directly onto an OpenGL texture.
3. Android 4.1, 4.2, and 4.3, the Jelly Bean releases, added the Project Butter features, including: A. VSync signals are used to synchronize UI drawing and animation so that they can achieve a fixed frame rate of 60 fps; B. triple buffering is supported to smooth over mismatched drawing speeds between the GPU and the CPU; C. user input, such as touch events, is synchronized to the next VSync signal before being processed; D. the user's touch behavior is predicted to achieve better interaction response; E. every time the screen is touched, a CPU Input Boost is performed to reduce processing latency.
4. Android 4.4, the KitKat release, improves application efficiency, and thus UI smoothness, by optimizing memory usage and by offering the ART runtime as an optional replacement for the Dalvik virtual machine.
5. Android 5.0, the Lollipop release, introduces a Compacting GC into the ART runtime to further optimize the memory usage of Android applications, officially replaces the Dalvik virtual machine with the ART runtime, and adds a Render Thread to Android applications that is responsible for UI rendering and animation display.
From this evolution history, we can see that the Android system has indeed been pursuing its grand plan to "run fast, smooth, and responsively", and has been delivering on it.
With the basic knowledge above, we will introduce how the windows and animations of Android 5.0 are rendered through hardware acceleration technology, as shown in Figure 3:
Figure 3 Hardware-accelerated rendering framework for Android application windows and animations
In an Android application window, each View is abstracted as a Render Node, and if a View has a Background set, that Background is also abstracted as a Render Node. This is because the OpenGLRenderer library has no concept of a View: everything that can be drawn is abstracted as a Render Node.
Each Render Node is associated with a Display List Renderer. Another concept, the Display List, is involved here. Note that this Display List is not the Display List of OpenGL, although the two are similar in concept. A Display List is a buffer of drawing commands. That is, when the onDraw member function of a View is called and we call the drawXXX member functions of the Canvas passed in as a parameter to draw graphics, we are in fact only recording the corresponding drawing commands and their parameters into a Display List. Later, the Display List Renderer executes the commands in the Display List; this process is called Display List Replay.
What are the advantages of introducing the Display List concept? There are two main ones. First, when the next frame is drawn, if the content of a View does not need to be updated, its Display List does not need to be rebuilt, that is, its onDraw member function does not need to be called. Second, when the next frame is drawn, if a View has only changed some simple properties, such as its position or alpha value, its Display List does not need to be rebuilt either; it is enough to modify the corresponding properties in the Display List built previously, which again means its onDraw member function does not need to be called. These two advantages eliminate a large amount of application code execution when a frame of the application window is drawn; in other words, they greatly reduce the CPU time spent. A sketch of both benefits follows.
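The same recording model later became public as the android.graphics.RenderNode API (API level 29). The internal Android 5.0 classes analyzed in this series differ, but the following sketch illustrates both benefits with that public API:

```java
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RecordingCanvas;
import android.graphics.RenderNode;

public class DisplayListDemo {
    public static RenderNode buildNode() {
        RenderNode node = new RenderNode("demo");
        node.setPosition(0, 0, 400, 400);        // the node's bounds

        // Recording: drawXXX calls are captured into the node's display list
        // instead of being executed immediately.
        RecordingCanvas canvas = node.beginRecording();
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setColor(Color.RED);
        canvas.drawCircle(200f, 200f, 100f, paint);
        node.endRecording();
        return node;
    }

    public static void moveAndFade(RenderNode node) {
        // Benefit 2: simple property changes are applied to the existing
        // display list; no re-recording (and thus no onDraw) is needed.
        node.setTranslationX(50f);
        node.setAlpha(0.5f);
    }
}
```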
Note that only hardware-accelerated Views are associated with Render Nodes and drawn through Display Lists. Not all 2D UI drawing commands are currently supported by the GPU; for details, see the official documentation: http://developer.android.com/guide/topics/graphics/hardware-accel.html. Views that use 2D UI drawing commands not supported by the GPU can only be rendered in software. Specifically, a new Canvas is created whose backing store is a Bitmap, so the View's drawing happens on this Bitmap. Afterwards, the Bitmap is recorded in the Display List of the View's Parent View, and when the Display List commands of the Parent View are executed, the recorded Bitmap is drawn with OpenGL commands.
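An application can opt a View into this software path explicitly with the standard setLayerType API. A one-line sketch, where customView stands for any View that relies on unsupported drawing operations:

```java
// Force software rendering: the View's onDraw output is rasterized into a
// Bitmap-backed layer that the parent's display list draws as a single unit.
customView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);
```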
On the other hand, the TextureView introduced in Android 4.0 is not drawn through a Display List, because its underlying implementation is an OpenGL texture, so the Display List middle layer can be skipped to improve efficiency. This OpenGL texture is encapsulated by a Layer Renderer. The Layer Renderer and the Display List Renderer can be regarded as concepts at the same level: both draw UI elements with OpenGL commands, but the former operates on an OpenGL texture while the latter operates on a Display List.
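From the application's point of view, drawing into a TextureView goes straight to its backing texture. A minimal sketch using the public TextureView API, with purely illustrative drawing content:

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.SurfaceTexture;
import android.view.TextureView;

public class TextureViewDemo {
    // Draw into the TextureView's backing OpenGL texture once it is available.
    public static void attach(final TextureView textureView) {
        textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
            @Override
            public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
                Canvas canvas = textureView.lockCanvas(); // renders into the texture,
                if (canvas != null) {                     // bypassing the Display List
                    canvas.drawColor(Color.GREEN);
                    textureView.unlockCanvasAndPost(canvas);
                }
            }
            @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
            @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
            @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
        });
    }
}
```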
We know that the Views of an Android application window are organized in a tree. Whether these Views are rendered with hardware acceleration, rendered in software, or are special TextureViews, when their member function onDraw is called, they all draw their UI into the Display List of their Parent View. The top-level Parent View is the Root View, and its associated Render Node is called the Root Render Node. That is to say, the Display List of the Root Render Node ends up containing all the drawing commands of a window. When the next frame of the window is drawn, the Display List of the Root Render Node is rendered into a Graphic Buffer through an OpenGL Renderer, and finally the Graphic Buffer is handed over to the SurfaceFlinger service for composition and display.
The UI rendering mechanism analyzed above does not involve animation. When a View needs to be animated, we can obtain a ViewPropertyAnimator by calling the View member function animate. Like a View, a ViewPropertyAnimator is abstracted as a Render Node. However, this Render Node is processed differently from a View's Render Node: it is registered with the Render Thread of the Android application, and the Render Thread is responsible for executing the animation it contains until the animation ends. In this way, the Main Thread of the Android application does not need to process animations, so it can focus on handling user input, which makes the Android application's UI more responsive.
Furthermore, if we call the ViewPropertyAnimator member function withLayer, the animation of the target View can be optimized even more. Recall the characteristic of TextureView: it is drawn directly through an OpenGL texture, skipping the Display List step. Similarly, when we call the ViewPropertyAnimator member function withLayer, the Layer Type of the target View is temporarily changed to LAYER_TYPE_HARDWARE. A View whose Layer Type is LAYER_TYPE_HARDWARE is rendered directly through an OpenGL Frame Buffer Object (FBO), which also improves rendering efficiency. When the animation ends, the Layer Type of the target View is restored to its original value.
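In application code, this entire mechanism is a one-chain call. A short sketch, where view stands for any View attached to a hardware-accelerated window:

```java
// withLayer() temporarily promotes the View to LAYER_TYPE_HARDWARE (an
// FBO-backed layer), so each animation frame only re-composites the cached
// layer with new properties; the original layer type is restored at the end.
view.animate()
    .translationX(100f)
    .alpha(0.5f)
    .setDuration(300)
    .withLayer()
    .start();
```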
This concludes the hardware-accelerated rendering framework for Android application windows and animations, but the Render Thread mentioned above deserves further explanation. The Render Thread was introduced in Android 5.0 to share the workload of the Main Thread of an Android application. Before Android 5.0, the Main Thread was responsible not only for rendering the UI but also for processing user input. By introducing the Render Thread, the UI rendering work can be lifted off the Main Thread and handed to the Render Thread, which lets the Main Thread process user input with greater focus and efficiency. This both improves UI rendering efficiency and makes the UI more responsive.
The interaction model between the Main Thread and the Render Thread is shown in Figure 4:
Figure 4 Interaction model between the Main Thread and the Render Thread of an Android application
The Main Thread is mainly responsible for calling the member function onDraw of each View to build their Display Lists, and then, when the next VSync signal arrives, sending a drawFrame command to the Render Thread through a Render Proxy object. The Render Thread maintains a Task Queue; the drawFrame command sent by the Main Thread is saved in this Task Queue, waiting to be processed by the Render Thread.
For animation display, the interaction model between the Main Thread and the Render Thread is shown in Figure 5:
Figure 5 Animation interaction model between the Main Thread and the Render Thread of an Android application
At the Java layer, an animation implemented through a Render Node is abstracted as a Render Node Animator. The Render Node Animator registers the Render Node representing the animation with the Render Thread; the implementation attaches this Render Node to the Root Render Node of the Android application window. Internally, the Render Thread wraps the Render Node in an Animator Handle object and executes the animation it describes until the animation ends.
Now, we have introduced the key concepts involved in hardware accelerated rendering of the Android application UI. Next, we will further analyze its implementation based on the following four scenarios:
1. Analysis of the environment initialization process for hardware-accelerated rendering of the Android application UI;
2. Analysis of the Display List construction process for hardware-accelerated rendering of the Android application UI;
3. Analysis of the Display List replay process for hardware-accelerated rendering of the Android application UI;
4. Analysis of the animation execution process for hardware-accelerated rendering of the Android application UI.