A few days ago Google released the second season of its Android performance optimization class, a series of 20 short videos. The topics covered include battery optimization, network optimization, optimization on Android Wear, using object pools to improve efficiency, the LRU cache, bitmap scaling, caching and reuse, PNG compression, custom view performance, the rendering cost of setting alpha on a view, and tools such as Lint and StrictMode.
(1) Battery Drain and Networking
For a mobile app, network operations are relatively power-hungry, so optimizing them can save a significant amount of battery. As mentioned in the first season of this series, the hardware modules of a phone consume different amounts of power; the cellular radio is among the most expensive, and its consumption also varies with how hard it is working. Before an app can perform a network request, it must wake the radio, send the request, wait for the data to return, and then let the radio slowly drop back into its idle state. The process looks like this:
Over the course of this process, the power drawn by the cellular radio varies as follows:
As the diagram shows, the moments of waking the radio, sending data, and receiving data are all expensive, so we should optimize along two lines: when the network request is initiated, and how the data is transferred over the network.
(1.1) When to initiate a network request
First we need to distinguish which network requests must return results immediately and which can be deferred. For example, when the user pulls down to refresh a list, a network request has to be triggered right away and the UI waits for the data to come back. Uploading user analytics or syncing app settings, on the other hand, can be deferred. We can use the Battery Historian tool to inspect the power consumption of the cellular radio (for details, see the earlier article on Android power optimization). The mobile radio row shows the radio's activity: the red segments are when the radio is working, and the gaps between them are when it is dormant. If you see a period where short red segments appear frequently, there is behavior that can be optimized, as shown below:
For the behavior identified above, we can bundle the scattered requests together and delay them to a single point in time where they are executed as one batch, as shown below:
After this optimization, exporting the power graph with Battery Historian again shows the awake and sleep states as long, continuous blocks, and overall power consumption drops.
We can even delay these deferred tasks until the phone has switched to Wi-Fi and is charging before executing them. The challenge in the process described above is how to delay network requests and execute them in batches; fortunately, Android provides JobScheduler to help us achieve exactly that.
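A sketch (API 21+) of what scheduling such a deferred batch might look like; UploadJobService and JOB_ID_UPLOAD are assumed names for a JobService that performs the bundled requests and its job id.

JobScheduler scheduler =
        (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);

JobInfo job = new JobInfo.Builder(JOB_ID_UPLOAD,
        new ComponentName(context, UploadJobService.class))
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_UNMETERED) // wait for Wi-Fi
        .setRequiresCharging(true)                              // wait until charging
        .build();

scheduler.schedule(job);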
(1.2) How to deliver network data
This part mainly involves two techniques: prefetching and compression. For prefetching, we try to predict which scattered requests the user's current action is likely to trigger next, and execute the requests that may be needed in the next five minutes or so in one centralized batch. For compression, spending some CPU to compress data before uploading and to decompress it after downloading can greatly reduce the time spent on network transmission.
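As a sketch of the compression idea, assuming the server accepts gzip-encoded request bodies, a payload can be compressed with the standard java.util.zip classes before upload:

private static byte[] gzip(byte[] payload) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    GZIPOutputStream gzipStream = new GZIPOutputStream(bytes);
    gzipStream.write(payload);   // compress the payload on the CPU
    gzipStream.close();          // finish the gzip stream and flush it
    return bytes.toByteArray();  // smaller body, shorter radio-on time
}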
To see when network requests occur in the app, how much data each one transfers, and so on, you can inspect the details with the network traffic tool in Android Studio, as shown below:
(2) Wear & Sensors
Android Wear apps make heavy use of sensors to implement their features, so how to make good use of the sensors while conserving power deserves special attention. Here are some best practices on Android Wear.
Minimize refresh requests: for example, unregister listeners as soon as the data is no longer needed, reduce the refresh rate, and batch-process sensor data. So how do we implement these optimizations?
First, we should try to use the motion data the Android platform already provides rather than listening for and recording it ourselves, because most Android watches already record sensor data in a power-optimized way.
Second, when an activity no longer needs a particular sensor, unregister its listener as soon as possible; a sketch of this, together with event batching, follows these points.
Third, control the update frequency: fetch the latest data only when the displayed data actually needs to be refreshed.
In addition, sensor data can be batched and pushed to the UI only after a certain number of samples has accumulated.
Finally, while the watch is connected to the phone, complex operations can be handed off to the phone, with the watch simply waiting for the result.
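A sketch of the register/unregister and batching pattern inside an Activity that implements SensorEventListener; the sensor type and the 60-second report latency are illustrative:

@Override
protected void onResume() {
    super.onResume();
    SensorManager sm = (SensorManager) getSystemService(SENSOR_SERVICE);
    Sensor stepCounter = sm.getDefaultSensor(Sensor.TYPE_STEP_COUNTER);
    // The last argument (API 19+) lets the hardware buffer events for up to
    // 60 seconds before waking the app, instead of delivering each one at once.
    sm.registerListener(this, stepCounter,
            SensorManager.SENSOR_DELAY_NORMAL, 60000000 /* maxReportLatencyUs */);
}

@Override
protected void onPause() {
    super.onPause();
    SensorManager sm = (SensorManager) getSystemService(SENSOR_SERVICE);
    sm.unregisterListener(this); // release the listener as soon as it is not needed
}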
To learn more about sensors, you can click here.
(3) Smooth Android Wear Animation
Material Design applications use a lot of animation for UI transitions. Optimizing animation performance not only improves the user experience but also reduces power consumption; this section introduces some simple ways to do so.
A relatively heavy task on Android is rotating, scaling, or cropping a bitmap. For example, on a round watch face, extracting the clock hand as a separate image and rotating only that image produces a frame rate roughly 56% higher than rotating the complete dial image.
In addition, minimizing what has to be redrawn on each frame greatly improves performance. If a watch face has many complex components to display, we can split them apart: for example, put the background image into its own view and use the setLayerType() method to force that view to be rendered with a hardware layer. Which elements of the interface should be split out depends on how often each of them updates and needs to be considered case by case.
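A sketch of forcing a rarely-changing background view onto its own hardware layer; the view id is illustrative:

View background = findViewById(R.id.clock_background); // assumed id
// The background is cached as a GPU texture, so redrawing the hands each
// second no longer redraws the background as well.
background.setLayerType(View.LAYER_TYPE_HARDWARE, null);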
How to use tools such as Systrace to inspect the rendering performance of particular views was covered in earlier chapters; if you are interested, click here.
Most animations in an application are implemented with property animations or view animations, and the Android system automatically applies some optimizations to them. Most of the performance lessons learned on Android also apply to Android Wear.
For more about animation on Android Wear, see the WatchFace sample.
(4) Android Wear Data Batching
Android Training has a course on how Wear uses the Wearable API to communicate and cooperate with the phone (click here for details). The phone's CPU and battery are far more powerful than the watch's, and the phone can connect to the network directly while network access from Wear is comparatively difficult, so when developing Wear apps we should push complex operations to the phone wherever possible. For example, the phone can fetch the weather and simply hand the result to Wear for display. Going further, earlier performance courses showed how to use JobScheduler to defer and batch tasks. Suppose one of the tasks the phone receives from Wear is to check the weather every 5 minutes: when JobScheduler runs the check, the phone should first compare the new result with the previous one and notify Wear only when the weather has actually changed, or send only the fields that changed. This reduces Wear's power consumption considerably.
To summarize how to optimize performance and power on Wear:
Make a request only when the interface really needs to be refreshed
Hand complex computations over to the phone whenever possible
Have the phone notify Wear only when the data has changed
Bundle fragmented data requests together and execute them as a batch
(5) Object pools
A frequent problem in programs is creating a large number of objects in a short period of time, which puts pressure on memory and triggers GC, causing performance problems. Object pooling can be used to solve this. Typical pooled objects are Bitmaps, Views, Paints, and so on. The principle of an object pool is not elaborated here; the following illustration gives the idea:
Object pooling has many advantages: it avoids memory churn and improves performance. But it also requires special care. Normally the pool starts out empty; when an object is needed, the pool is queried, and if no instance exists one is created and later returned to the pool. We can also pre-fill the pool at startup with objects that are about to be used, which makes the first use of those objects faster; this is called pre-allocation. The downside of object pooling is that the programmer must manage allocation and release by hand, so the technique has to be used carefully to avoid leaking the pooled objects. To make sure every object can be released correctly, objects placed in the pool should not hold references to, or be referenced by, external objects.
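A minimal generic sketch of the idea; the class and method names are illustrative, not the course's own code:

import java.util.ArrayDeque;

public abstract class ObjectPool<T> {
    private final ArrayDeque<T> pool = new ArrayDeque<>();

    protected abstract T create();            // how to build a fresh instance

    public T acquire() {
        T obj = pool.poll();                  // reuse a pooled instance if any
        return (obj != null) ? obj : create();
    }

    public void release(T obj) {
        pool.offer(obj);                      // hand the instance back for reuse
    }

    public void preallocate(int count) {      // optional pre-allocation at startup
        for (int i = 0; i < count; i++) {
            pool.offer(create());
        }
    }
}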
(6) To Index or Iterate?
Traversing a container is a very common scenario in programming. In Java, using an Iterator is the more idiomatic approach, but the Android team tries to avoid iterators for traversal. Let's look at three traversal styles that might be used on Android:
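A sketch of the three styles, assuming a List<Item> named list where Item has an int field value:

int sum = 0;

// 1. Index-based for loop
for (int i = 0; i < list.size(); i++) {
    sum += list.get(i).value;
}

// 2. Enhanced for-each loop (compiles down to an Iterator for a List)
for (Item item : list) {
    sum += item.value;
}

// 3. Explicit Iterator
Iterator<Item> it = list.iterator();
while (it.hasNext()) {
    sum += it.next().value;
}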
Testing the three approaches above on the same phone with the same data set gives the following results:
The results show that the index-based for loop is the most efficient. However, compiler optimizations differ across platforms, so it is best to run a quick measurement of your own and then choose the most efficient approach based on the data.
(7) The Magic of LRU Cache
This section discusses caching algorithms. The most commonly used cache eviction policy on Android is LRU (Least Recently Used). The algorithm itself is not elaborated here; the following diagram illustrates the idea:
The basic construction and usage of an LruCache is as follows:
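A minimal sketch, assuming a cache of Bitmaps keyed by String; cacheSize is computed as in the next snippet:

LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize);

bitmapCache.put("avatar:42", avatarBitmap);   // add an entry
Bitmap cached = bitmapCache.get("avatar:42"); // returns null once evicted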
To give the LruCache a reasonable maximum size, we usually compute it as follows:
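A sketch of the common heuristic of granting the cache a fraction (here one eighth) of the app's maximum heap, measured in kilobytes:

final int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
final int cacheSize = maxMemoryKb / 8;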
So that the cache knows the exact size of each item added to it, we need to override the following method:
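A sketch that reports each entry's size in kilobytes so it matches the kilobyte-based cacheSize above:

LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap bitmap) {
        return bitmap.getByteCount() / 1024; // entry size in KB
    }
};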
Using an LruCache can significantly improve application performance, but you also need to pay attention to how evicted objects are released, otherwise the cache can cause serious memory leaks.
(8) Using Lint for Performance Tips
Lint is a powerful tool provided with Android that statically scans application source code and identifies potential problems in it.
For example, if we allocate an object inside the onDraw() method, Lint flags it as a performance problem and suggests a fix. Lint is integrated into Android Studio, and we can trigger it manually from the menu via Analyze > Inspect Code. Once it finishes, Lint reports its results in a panel at the bottom of the IDE, where we can review each issue and apply the suggested optimizations.
Lint can scan for many kinds of problems. Through the Android Studio settings we can customize its inspections: ignore the checks we are not interested in, or change the severity reported for particular checks.
It is recommended to mark memory-related checks with the red error severity and layout performance issues with the yellow warning severity.
(9) Hidden Cost of Transparency
This section describes how to reduce the performance impact of transparent regions. In general an opaque view only needs to be rendered once, but a view with an alpha value set needs to be rendered at least twice, because a view with alpha must know what lies beneath it before it can be blended with the layers above.
In some cases a view with alpha can also cause other views in the hierarchy, including its parent, to be redrawn. Let's look at an example in which the images and secondary headings in a ListView have transparency applied.
In most cases, elements on the screen are rendered back to front. In the illustration above, the background images (blue, green, red) are rendered first and the avatar images afterwards; if an element rendered later has an alpha value, it is blended with the elements already rendered underneath it. Often we set alpha on an entire view to achieve a fade animation. If we gradually reduce the alpha of the ListView in the figure, we can see the TextViews and other components blend more and more into the background color. What we cannot see is the extra drawing work this triggers: our goal is simply to fade the whole view, but during the fade the ListView keeps performing blending operations, which leads to significant performance problems.
How should we render to get the effect we want? We can draw the view's elements back to front as usual, but not directly to the screen: the view is first rendered into an off-screen layer on the GPU, and the GPU then composites that layer onto the screen. The GPU can rotate the cached texture, apply transparency to it, and so on directly. Rendering through a GPU layer costs more than drawing straight to the screen the first time, but once the texture has been generated, subsequent operations are much cheaper.
How do we get the GPU to render a view this way? We can specify how the view is rendered with setLayerType(); starting with SDK 16 we can also use ViewPropertyAnimator.alpha().withLayer(), as shown below:
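A sketch of both approaches; the 300 ms duration is illustrative:

// Option 1: put the view on a hardware layer yourself while it animates
// (remember to switch back to LAYER_TYPE_NONE when the animation ends).
view.setLayerType(View.LAYER_TYPE_HARDWARE, null);

// Option 2 (API 16+): let ViewPropertyAnimator manage the layer for the
// duration of the fade, so blending happens once per frame on the GPU.
view.animate()
        .alpha(0f)
        .withLayer()
        .setDuration(300)
        .start();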
Another example is a view with a shadowed region: this type of view does not suffer from the problem described above, because its content layers do not overlap one another.
To make the renderer aware of this and avoid allocating an extra off-screen buffer in GPU memory for such a view, we can apply the following setting.
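A sketch, assuming the custom view's content genuinely never overlaps itself: overriding View.hasOverlappingRendering() (API 16+) tells the framework that alpha can be applied directly, without an off-screen buffer.

@Override
public boolean hasOverlappingRendering() {
    // The content is drawn in a single non-overlapping pass, so alpha can be
    // pushed down into each draw call instead of compositing a separate layer.
    return false;
}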
With this setting in place, performance improves significantly, as shown below:
(10) Avoiding Allocations in onDraw()
We all know that we should avoid operations that allocate memory inside the onDraw() method; this section explains why.
First, onDraw() executes on the UI thread, where anything that might hurt performance should be avoided. Allocating memory does not demand many system resources by itself, but that does not mean it is free. The device refreshes at a fixed rate, so a view's onDraw() is called frequently; if onDraw() is inefficient, the frequent refreshes amplify the inefficiency and performance suffers badly.
Allocating memory inside onDraw() also tends to cause memory churn, which triggers frequent GC. Although GC was later improved to run alongside the app (before Android 2.3 collections were synchronous, afterwards they became concurrent), frequent GC still costs CPU time and therefore power.
The simple fix is to move the allocations out of onDraw(); typically this means hoisting the new Paint() call out of onDraw(), as shown here:
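A sketch of the pattern inside a custom view; the drawn text is illustrative:

// Allocated once, when the view is created -- not on every frame.
private final Paint textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    // No allocations here: reuse the pre-built Paint on every draw pass.
    canvas.drawText("42", getWidth() / 2f, getHeight() / 2f, textPaint);
}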
(11) Tool: StrictMode
If the UI thread is blocked for more than 5 seconds an ANR dialog appears, which is very bad, so preventing ANRs matters. How do we find the potential pitfalls in a program before they cause one? Many operations look harmless on their own, yet they hide dangers that can easily trigger an ANR.
Android provides a tool called StrictMode. We can turn it on from the developer options in the phone's settings, and the edges of the screen flash red whenever the program does something risky. We can also use the StrictMode API in code for finer-grained tracking: we choose which potential problems StrictMode should watch for and how it should alert the developer when one occurs, whether by flashing the screen red or by writing to the error log. The following is the official code example:
public void onCreate() {
    if (DEVELOPER_MODE) {
        StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
                .detectDiskReads()
                .detectDiskWrites()
                .detectNetwork()   // or .detectAll() for all detectable problems
                .penaltyLog()
                .build());
        StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
                .detectLeakedSqlLiteObjects()
                .detectLeakedClosableObjects()
                .penaltyLog()
                .penaltyDeath()
                .build());
    }
    super.onCreate();
}
(12) Custom Views and Performance
The Android system offers more than 70 standard views, such as TextView, ImageView, and Button. Sometimes these standard views do not meet our needs and we have to implement a view ourselves; this section describes how to keep a custom view performant.
In general, a custom view is prone to the following three mistakes:
Useless calls to onDraw(): calling View.invalidate() triggers a redraw of the view, and there are two principles to follow. First, call invalidate() only when the view's content has actually changed; second, use clipRect() and similar methods to cut down the drawing work (see the sketch after this list).
Useless pixels: reduce unnecessary drawing, and for elements that are not visible, avoid drawing them at all.
Wasted CPU cycles: for elements that are outside the visible area, use Canvas.quickReject() to skip them and avoid wasting CPU. In addition, let the GPU render the UI as much as possible, which greatly improves overall performance.
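A sketch inside a custom view's onDraw(); dirtyRect, itemBounds, and drawItem() are illustrative names:

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    // Restrict drawing to the region that actually changed.
    canvas.clipRect(dirtyRect);

    for (RectF bounds : itemBounds) {
        // quickReject() returns true when the bounds fall completely outside
        // the current clip, so the element can be skipped entirely.
        if (canvas.quickReject(bounds, Canvas.EdgeType.AA)) {
            continue;
        }
        drawItem(canvas, bounds);
    }
}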
Finally, always keep in mind that the better a view draws, the higher the frame rate the interface can sustain.
(13) Batching Background Work Until Later
Much of performance optimization is about eliminating unnecessary work, but choosing when to do necessary work matters just as much. To avoid excessive power consumption, we need to learn to batch background tasks and trigger them at a suitable time. The figure below shows the power consumed when every application runs its background tasks on its own schedule:
Because behaving this way wastes a lot of power, we should defer some of the app's tasks and process them together once the time is right. The result looks like this:
There are typically three ways to perform deferred tasks:
(1) AlarmManager
AlarmManager sets up timed tasks, and it accepts either an exact time or an inexact one as the trigger. Unless the program has a strong need for a precisely timed wakeup, we should avoid exact alarms and use the inexact variants.
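A sketch of an inexact repeating alarm; SyncReceiver is an assumed BroadcastReceiver that kicks off the deferred work:

AlarmManager alarmManager =
        (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
PendingIntent syncIntent = PendingIntent.getBroadcast(context, 0,
        new Intent(context, SyncReceiver.class), PendingIntent.FLAG_UPDATE_CURRENT);

// An inexact alarm lets the system line our wakeup up with other apps' work.
alarmManager.setInexactRepeating(
        AlarmManager.ELAPSED_REALTIME,   // does not wake the device by itself
        SystemClock.elapsedRealtime() + AlarmManager.INTERVAL_HALF_HOUR,
        AlarmManager.INTERVAL_HALF_HOUR,
        syncIntent);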
(2) SyncAdapter
With a SyncAdapter we add a sync account to the app, which then appears in the phone's Settings under Accounts. This approach is more capable but also more complex to implement. The official training course is here: http://developer.android.com/training/sync-adapters/index.html
(3) JobScheduler
This is the simplest and most efficient approach: we can set how long a task may be deferred, the conditions under which it runs, and a retry/backoff policy.
(14) Smaller Pixel Formats
Images in common formats such as PNG, JPEG, and WebP are decoded before they are shown in the UI, and the decode can use different pixel formats whose memory costs differ greatly. Minimizing memory footprint without noticeably compromising image quality can significantly improve application performance.
Android's heap does not automatically compact itself: when an image is reclaimed, the freed area is not merged and defragmented with other reclaimed areas. So when a large picture has to be placed on the heap, there may be no contiguous free region big enough; a GC is triggered in the hope of freeing enough space for the image, and if that fails an OOM occurs, as shown below:
To avoid this when loading a large image, we should minimize the memory it occupies. Android offers four decoding pixel formats, each with a different per-pixel memory cost: ARGB_8888 uses 4 bytes per pixel, RGB_565 and ARGB_4444 use 2, and ALPHA_8 uses 1, as shown below:
As the decode format uses less memory, image quality drops, so different scenarios call for different choices; large and small images can use different formats. In Android the decode format is set with code like the following:
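A sketch, assuming an opaque image for which RGB_565 quality is acceptable; the resource id is illustrative:

BitmapFactory.Options options = new BitmapFactory.Options();
// ARGB_8888 = 4 bytes/pixel (default), RGB_565 = 2, ARGB_4444 = 2, ALPHA_8 = 1.
options.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap banner = BitmapFactory.decodeResource(getResources(), R.drawable.banner, options);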
(15) Smaller PNG Files
Minimizing the size of PNG images is an important guideline on Android. Compared with JPEG, PNG offers crisper, lossless images, but PNG files are larger and occupy more disk space. Whether to use PNG or JPEG is something designers need to weigh carefully; where JPEG achieves the required visual result, consider using JPEG. A Google search turns up plenty of PNG compression tools, as shown below:
Another option is WebP, a newer image format introduced by Google that keeps much of PNG's quality while reducing file size. For more details on WebP, please click here.
(16) Pre-scaling Bitmaps
Scaling bitmaps is one of the problems most frequently encountered on Android. The point of scaling is obvious: it improves performance and avoids allocating unnecessary memory. Android provides a ready-made scaling API, createScaledBitmap(), which returns a scaled copy of a bitmap:
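A sketch of using it; the source bitmap and target dimensions are illustrative:

// filter = true gives smoother results at a small extra cost.
Bitmap scaled = Bitmap.createScaledBitmap(srcBitmap, dstWidth, dstHeight, true);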
This method produces a scaled image quickly, but it requires the original bitmap to already be loaded into memory; if the original image is very large, that alone can cause an OOM. Here are some other ways to scale images.
inSampleSize scales the image down at decode time, avoiding the need to load the full-size original into memory first. It is used roughly like this:
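A sketch; the resource id is illustrative, and inSampleSize = 4 decodes at one quarter of the original width and height, so roughly one sixteenth of the pixel memory:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4;
Bitmap smaller = BitmapFactory.decodeResource(getResources(), R.drawable.large_photo, options);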
In addition, we can use the inScaled, inDensity, and inTargetDensity options to scale the decoded image, as shown in the code below:
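A sketch of decode-time scaling by an arbitrary factor, assuming srcWidth was read beforehand (for example with inJustDecodeBounds, shown next) and dstWidth is the desired width:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = true;
options.inDensity = srcWidth;        // the decoder scales by inTargetDensity / inDensity
options.inTargetDensity = dstWidth;
Bitmap scaled = BitmapFactory.decodeResource(getResources(), R.drawable.large_photo, options);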
Another frequently used technique is inJustDecodeBounds, which lets us probe an image in advance and obtain its dimensions without allocating any pixel memory, as shown below:
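A sketch; the resource id is illustrative:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;             // decode the header only
BitmapFactory.decodeResource(getResources(), R.drawable.large_photo, options);
int srcWidth = options.outWidth;               // dimensions, with no pixel memory used
int srcHeight = options.outHeight;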
(17) Re-using Bitmaps
We know that bitmaps occupy a lot of memory. This section explains the inBitmap property and how it improves the efficiency of bitmap recycling. Earlier we introduced object pooling, which keeps the memory used by pooled objects roughly constant; without a similar approach for bitmaps, every newly created bitmap occupies a separate memory region, as shown below:
To solve this efficiency problem, Android added the inBitmap option to the image decoder; using it gives the effect shown below:
With inBitmap we tell the decoder to try to reuse an existing memory region: the newly decoded bitmap writes its pixel data into the heap area already occupied by an old bitmap instead of requesting a brand-new allocation. With this feature, even thousands of images only ever occupy roughly the memory of the number of images the screen can display at once. Here is a code example of how to use inBitmap:
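A sketch, assuming candidateBitmap is an existing mutable bitmap that satisfies the restrictions listed below; the resource id is illustrative:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;             // bitmaps intended for reuse must be mutable
options.inBitmap = candidateBitmap;   // decode into this bitmap's pixel memory
Bitmap reused = BitmapFactory.decodeResource(getResources(), R.drawable.next_photo, options);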
There are several restrictions to be aware of when using inBitmap:
Up to and including SDK 18, the size of the reused bitmap must match exactly: for example, if the bitmap assigned to inBitmap is 100x100, the newly requested bitmap must also be 100x100. Starting with SDK 19, the newly requested bitmap only needs to be smaller than or equal to the already allocated bitmap.
The new bitmap and the old one must use the same decode format: for example, if the previous bitmap is ARGB_8888, then ARGB_4444 or RGB_565 bitmaps cannot reuse it.
We can build an object pool containing a number of typically sized reusable bitmaps so that later decodes can find a suitable "template" to reuse, as shown below:
The course also introduces an open-source bitmap loading library, Glide, which packages many of these bitmap optimization techniques.
(18) The Performance Lifecycle
Most developers do not put much effort into performance until a serious problem shows up; the usual focus is on whether a feature works at all. When performance problems do appear, don't panic: we typically address them in the following three steps.
Gather: Collecting data
We can collect CPU, GPU, memory, power, and other performance data with the many tools available in the Android SDK.
Insight: Analyzing data
The previous step produces a large amount of data, and the next step is to analyze it. The tools generate many readable tables and charts; we need to know how to read them and what each field means so that we can locate the problem quickly. If the analysis does not reveal the problem, go back, collect more data, and analyze again, looping until it does.
Action: Fix the problem
Once the problem is located, we take action to fix it. Before doing so, we should have a plan, assess whether the proposed fix is feasible, and then resolve the problem promptly.
(19) Tools, not Rules
Although many debugging methods, techniques, and guidelines have been described above, they do not apply to every situation; we still need to handle each case concretely and flexibly.
Android App Performance Optimization Summary (Google's official Android Performance Patterns, Season 2)