Android Performance Optimization
Following Android's layered architecture, performance optimization can also be approached layer by layer. This article summarizes optimization at the application, Framework, Native, and kernel layers; for each layer it covers the basic ideas, the optimization techniques, and the tools involved.
Chapter One: Android Application Performance Optimization (Overview)
Application-layer performance problems are the most visible, and they show up in many different forms, for example:
- The application starts slowly the first time it is launched, or a particular screen is slow to open;
- An animated screen does not animate smoothly, or there is a long delay before the animation starts;
- A ListView does not scroll smoothly;
- A custom screen in the application performs poorly, e.g. swiping left and right across the Launcher desktop is not smooth;
- The application takes so long to respond to a user event that an ANR (Application Not Responding) occurs;
- Database operations that delete or modify large amounts of data execute slowly;
- After running for a long time, the application stutters at random.
Not only are the symptoms varied, the causes are complex as well. Each of the problems above may have more than one cause, and in many cases the fault is not in the application itself: the problem may lie in another layer of the system and merely surface at the application layer. Since the application layer is always the first suspect, the developer's first task when dealing with a performance problem is to determine whether it is caused by the application itself, and if so to fix it accordingly. Sometimes the application logic is perfectly normal and the hardware is simply not powerful enough; in that case, depending on the needs of the product or project, more aggressive optimization measures can be taken to compensate for the limited hardware.
The following summarizes application performance optimization from several different perspectives.
I. Basic Ideas
Performance optimization at the application layer can usually be approached from the following angles:
1. Understand how the programming language is compiled and executed, and use efficient coding idioms to improve performance at the syntax level;
2. Use appropriate data structures and algorithms; this is often the decisive factor in a program's performance;
3. Pay attention to optimizing the interface layout;
4. Use multithreading, data caching, lazy loading, pre-loading, and similar techniques to remove serious performance bottlenecks;
5. Configure the virtual machine's heap size limit and target utilization sensibly to reduce the frequency of garbage collection;
6. Use native code where it is justified;
7. Configure the database cache appropriately and optimize SQL statements to speed up reads, and use transactions to speed up writes (a short sketch follows this list);
8. Use tools to analyze performance problems and locate the bottlenecks.
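As a concrete illustration of point 7, here is a minimal sketch, with made-up table, column, and method names, of wrapping bulk writes in a single SQLiteDatabase transaction so that SQLite commits once instead of once per row:

```java
import android.content.ContentValues;
import android.database.sqlite.SQLiteDatabase;
import java.util.List;

class DbWriteExample {
    // Sketch only: the table "items" and column "name" are illustrative.
    static void bulkInsert(SQLiteDatabase db, List<String> names) {
        db.beginTransaction();
        try {
            for (String name : names) {
                ContentValues values = new ContentValues();
                values.put("name", name);
                db.insert("items", null, values);  // one INSERT per row...
            }
            db.setTransactionSuccessful();         // ...but only one commit at the end
        } finally {
            db.endTransaction();                   // rolls back if success was never marked
        }
    }
}
```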
There are of course many other optimization methods; the above are just the ones used most often. Owing to space constraints, only part of this content is covered below; I hope it proves helpful for performance optimization work.
II. Programming Techniques
(i) Performance Tips (for Java)
Google's official documentation lists a number of tips for improving application performance, and they have come up in many internal summaries as well, so they are only listed briefly here; the details are available on the official site.
http://developer.android.com/training/articles/perf-tips.html
Note that the techniques listed there are mostly small performance improvements; the overall performance of a program is still determined by its business logic, its data structures, and its algorithms. Developers should apply these techniques habitually while coding; the small gains add up and can have a noticeable effect on performance.
Writing efficient code requires following two principles:
- Do not perform unnecessary operations;
- Do not allocate unnecessary memory;
The two principles address the CPU and memory respectively: do only the necessary work, spend CPU and memory only where needed, and execution will naturally be efficient. Stated this way it sounds rather empty, since there is no universal standard for what counts as necessary; it has to be judged case by case.
1. Avoid creating unnecessary objects
Everyone knows that creating too many objects hurts performance, but why? First, allocating the memory itself takes time. Second, the VM caps heap usage at run time; when usage reaches a certain level, garbage collection is triggered, and garbage collection can pause the thread or even the whole process. It is easy to see that if objects are created and destroyed frequently, or memory usage stays high, the application will stutter badly.
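A minimal sketch of the idea (the loop is contrived for illustration): repeated string concatenation allocates a temporary StringBuilder and a new String on every iteration, whereas reusing a single StringBuilder allocates far less:

```java
class StringExample {
    static String buildCsv() {
        // Wasteful: each += allocates a new StringBuilder and a new String.
        String csvSlow = "";
        for (int i = 0; i < 1000; i++) {
            csvSlow += i + ",";
        }

        // Better: one StringBuilder reused for the whole loop, one String at the end.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }
}
```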
2. Use static members sensibly
There are three main points to keep in mind (a short sketch follows the list):
- If a method does not touch instance fields or call instance methods, declare it static.
- Declare constants as "static final". Such constants are initialized directly from the static field initializers in the dex file; without "final", a class initializer generated at compile time has to set them at run time. This rule applies only to primitive types and String.
- Do not declare View controls as static: a View holds a reference to its Activity, so the Activity cannot be reclaimed after it exits, which causes a memory leak.
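For example (the values and names below are arbitrary):

```java
class ConfigExample {
    static int sCounter = 42;             // non-final: set at class load by a generated class initializer
    static final int MAX_ITEMS = 42;      // final primitive: baked into the dex static field initializers
    static final String TAG = "PerfDemo"; // final String: likewise, no run-time initialization needed

    // Do NOT do this: a static View keeps its Activity reachable after the Activity finishes.
    // static TextView sTitleView;
}
```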
3. Avoid internal getters/setters
In object-oriented design it is usually good practice to access fields through getters/setters, but given the hardware constraints of Android devices, it is not recommended for internal access (for example, access within the same package) unless the field genuinely needs to be exposed publicly. With the JIT enabled, direct field access is about 7x faster than going through a getter.
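A small sketch of the difference (the class and fields are illustrative):

```java
public class Sprite {
    int x;                          // package-level field: hot code in the same package reads it directly
    private int y;

    public int getY() { return y; } // keep accessors only where outside code really needs them

    void step() {
        x += 1;                     // direct field access: cheap
        y = getY() + 1;             // an extra method call on every access: avoid in tight loops
    }
}
```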
4. Use the for-each loop
Prefer the for-each loop; it is usually at least as efficient as the alternatives, with one exception: for an ArrayList, a hand-written counted loop is faster.
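For example, the two idioms side by side (a sketch; the element type is arbitrary):

```java
import java.util.ArrayList;

class LoopExample {
    // Enhanced for loop: preferred for arrays and most collections.
    static int sumArray(int[] values) {
        int sum = 0;
        for (int value : values) {
            sum += value;
        }
        return sum;
    }

    // Counted loop for ArrayList: avoids the Iterator object and hoists size() out of the condition.
    static int sumArrayList(ArrayList<Integer> list) {
        int sum = 0;
        int count = list.size();
        for (int i = 0; i < count; i++) {
            sum += list.get(i);
        }
        return sum;
    }
}
```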
5. Use package-level access instead of private so inner classes can reach outer members efficiently
When a private inner class reads a private field or calls a private method of its outer class, the code is legal Java, but the virtual machine cannot access the member directly at run time. Instead, the compiler generates package-level synthetic static methods in the outer class, and at run time the inner class calls those methods to reach the outer class's private members. That extra layer of method calls costs performance.
One way to avoid this is to change the outer class's private members to package-level access so the inner class can reach them directly, provided the wider visibility is acceptable for the design.
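A sketch of the situation described above (the names are illustrative; the exact synthetic method name is chosen by the compiler):

```java
public class Outer {
    private int mPrivateValue = 0;      // reached from Inner through a compiler-generated
                                        // synthetic accessor (e.g. access$000)
    /* package */ int mSharedValue = 0; // reached from Inner directly, no extra call

    private class Inner {
        void touch() {
            mPrivateValue++;            // goes through the synthetic static method
            mSharedValue++;             // direct field access
        }
    }
}
```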
6. Avoid using floating-point types
Empirically, floating-point arithmetic on Android devices is roughly twice as slow as integer arithmetic, so do not use floating-point types when integers will do.
In addition, some processors have hardware multiplication but no hardware division, in which case division and modulo are implemented in software. To improve efficiency, consider rewriting division as multiplication when writing expressions, for example replacing a floating-point division such as "x / 2.0f" with "x * 0.5f".
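A tiny sketch of both points (the method names and values are arbitrary):

```java
class MathExample {
    static float ratioSlow(float distance) { return distance / 2.0f; } // floating-point division
    static float ratioFast(float distance) { return distance * 0.5f; } // same result, cheaper multiplication

    static int halfOf(int count) { return count / 2; } // prefer integer math when integers suffice
}
```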
7. Understand and use library functions
The Java standard library and the Android framework contain a large number of efficient, well-tested functions, many of them implemented in native code, and they are often much faster than equivalent code we would write ourselves in Java. Making good use of the system libraries therefore saves development time and is less error-prone.
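A typical example, assuming two equally sized arrays: copying by hand versus calling System.arraycopy():

```java
class CopyExample {
    static void copy(int[] src, int[] dst) {
        // Copying element by element in Java...
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i];
        }
        // ...versus the library call, which is implemented natively and is typically much faster.
        System.arraycopy(src, 0, dst, 0, src.length);
    }
}
```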
(ii) Layout Performance Optimization
The layout directly affects how long a screen takes to appear. There is nothing technically difficult about optimizing layout performance; in my view, the most important thing is simply to recognize that layout optimization matters. At first I also felt that the layout itself could not be a performance bottleneck and was hard to optimize anyway: the complex layout file had been written with great effort, the native code was what it was, and logging the setContentView() time showed nothing obviously wrong, so it hardly seemed worth studying. In fact, layout problems are not as simple as they look.
Layout performance matters for the following two reasons:
· A layout file is an XML file, and inflating it means parsing the XML, creating the corresponding layout objects, and linking them together according to the tag information. The more tags and attribute settings there are in the XML, and the deeper the node tree, the more branching, nesting, and recursion the inflater has to execute, and the more time it takes;
· Inflation is only the first step of producing the layout. Before a screen is displayed, requestLayout() also triggers a series of measure, layout, and draw passes, and the execution time of each pass is affected by the layout itself. The screen only appears after all of these steps have completed, so a poor-quality layout increases the cost of every step and lengthens the final display time.
So how should a layout be optimized? A few points, summarized below:
1. Follow one rule: keep the layout hierarchy as shallow as possible
In other words, for the same visual result, keep the node tree in the XML file as shallow as possible. Achieving this requires using the layout containers sensibly:
- The typical case is using a RelativeLayout instead of nested LinearLayouts to achieve the same effect;
- Another is that if node A of the layout tree has only one child B, and B has only one child C, then B can usually be removed;
- Use the <merge> tag where appropriate: if layout X is going to be <include>d in layout Y, consider whether X's root node can be <merge>; when X is parsed, the children of <merge> are then added to Y directly and the <merge> node itself is not.
2. Analyzing layouts using lint
Lint is a tool in the tools directory of the SDK, and ADT integrates a visual interface for it. Running lint over an application analyzes it from many angles and flags potentially defective areas, including performance-related ones. You can learn more on the Android developer site:
http://developer.android.com/tools/debugging/improving-w-lint.html
http://developer.android.com/tools/help/lint.html
3. Analyzing layouts using HierarchyViewer
HierarchyViewer (HV below) is also a tool in the tools directory of the SDK, and its visual interface is integrated into ADT. HV shows the layout of the current screen and provides a great deal of information, two items of which help with performance analysis:
· HV's tree view shows how the View controls relate to each other and can be used to check for the situations mentioned in point 1 above.
· The tree view can display the measure, layout, and draw time of each node, with a dot per item indicating whether the time is normal: green, yellow, and red stand for normal, warning, and dangerous respectively, which makes it easy to spot a bottleneck. If these times are not shown in the tree view, click the "Obtain layout times for tree rooted at selected node" button to refresh the display.
http://developer.android.com/tools/debugging/debugging-ui.html
4. Loading views lazily with ViewStub
ViewStub is a lightweight View with zero size that draws nothing and takes no part in the layout. If some views on a screen do not need to be shown immediately, move them into a separate layout file, put a ViewStub tag in their place, and inflate the real views through the ViewStub only when the content actually needs to be displayed.
http://developer.android.com/training/improving-layouts/loading-ondemand.html
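A minimal sketch of inflating a ViewStub on demand; the id R.id.stub_details is made up for the example, and in the layout file it would be a <ViewStub> whose android:layout attribute points at the deferred layout:

```java
import android.app.Activity;
import android.view.View;
import android.view.ViewStub;

class StubExample {
    // Call when the deferred content is actually needed; the id is illustrative.
    static void showDetails(Activity activity) {
        ViewStub stub = (ViewStub) activity.findViewById(R.id.stub_details);
        if (stub != null) {                 // the stub is gone once it has been inflated
            View details = stub.inflate();  // swaps the stub for the real layout and returns it
        }
    }
}
```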
III. Using Tools
Following good coding habits makes a program more efficient, but you will still run into all kinds of performance problems. Fortunately there are many powerful tools that can help analyze bottlenecks and locate the cause. The tools described below should already be familiar, and there are many good articles about them online, so they are not repeated here; only a few key points about using them are clarified. For more on how to use these tools, see for example: http://blog.csdn.net/innost/article/details/9008691.
(i) TraceView
The most direct way to work on a performance problem is to reproduce the problematic scenario and monitor the program as it runs; if the call relationships and execution times of the functions involved can be analyzed conveniently, the bottleneck is easy to find.
TraceView is exactly such a tool for analyzing function calls, which makes performance problems much easier to analyze. Using it involves the following steps:
- Use the Android Debug API, or DDMS, to trace the running process (see the sketch after these steps);
- Reproduce the performance problem, and use the method from step 1 to obtain the process's function-call log, i.e. the trace file;
- Open the trace file in TraceView.
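For step 1, here is a minimal sketch of the Debug API approach; the trace name "myapp" and the scenario being profiled are arbitrary:

```java
import android.os.Debug;

class TraceExample {
    static void profileScenario(Runnable slowScenario) {
        // Start collecting method-trace data just before the suspect code path...
        Debug.startMethodTracing("myapp"); // name is arbitrary; writes myapp.trace to external storage
        slowScenario.run();                // the code path being reproduced
        // ...stop once the scenario has finished, then pull the file and open it in TraceView.
        Debug.stopMethodTracing();
    }
}
```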
The interface of TraceView is intuitive, but in the process of analysis, special attention should be paid to the following points:
1. The meaning of the columns in the Profile panel:
· Incl – the execution time of the function itself plus all the functions it calls;
· Excl – the execution time of the function itself, excluding the functions it calls;
· Cpu Time – the total CPU time occupied by the function's execution, excluding time spent waiting to be scheduled;
· Real Time – the wall-clock time of the function's execution, including time spent waiting to be scheduled;
· Cpu Time/Call – the average CPU time per call of the function;
· Real Time/Call – the average wall-clock time per call of the function;
· Calls+Recur Calls/Total – the number of times the function was called plus the number of recursive calls;
· % – the columns marked with % show the function's execution time as a percentage of the total sampled time.
2. How to analyze performance bottlenecks
The first thing to look at is usually Cpu Time, which reflects the cost of the program itself; Real Time is also affected by other parts of the system. You can then analyze from four angles:
1) Which functions take a long time per call?
Click the "Cpu Time/Call" column to sort in descending order, find the functions of interest whose per-call time is long, and then examine their internal execution in detail;
2) Which functions are called too many times?
Click the "Calls+Recur Calls/Total" column to sort in descending order, find the functions of interest that are executed the most, and then examine their internal execution in detail;
3) Which functions have a long total execution time?
Some functions are neither particularly slow per call nor particularly frequently called, but the product of the two is large. Click the "Incl Cpu Time" column and sort in descending order to find them;
4) Sometimes you already know which methods of which class you want to examine; search for them in the search bar at the bottom of the window, which appears to accept lower-case input only.
3. Tip: when trace information is sampled via the API or the tool, JIT is disabled, and the sampling itself consumes system resources, so the function times shown in TraceView are much longer than in normal execution; only the relative costs are meaningful.
(ii) dmtracedump
Besides TraceView, a trace file can also be analyzed with another tool, dmtracedump, which is very powerful. If you find searching for classes and functions in TraceView painful, give it a try.
dmtracedump is an executable in the tools directory of the SDK. You can check its help output and then run a command similar to the following:
dmtracedump -h -g tracemap.png path-to-your-trace-file > path-to-a-html-file.html
This produces two things: a call-tree image in which the function relationships are visible at a glance, and a navigable HTML file in which the browser's search makes it easy to find the class or function you care about.
(iii) Systrace
Systrace is a powerful performance analysis tool introduced in Android 4.1. It relies on the kernel's ftrace facility and can profile many important modules in the system, especially the graphics display pipeline. Its capabilities include tracking the system's I/O operations, kernel work queues, CPU load, and the state of the various Android subsystems.
Using Systrace likewise requires turning on trace collection through the API provided by Android or through DDMS, then running the program to generate a log file, and finally analyzing that file. Systrace's output is an HTML file that can be viewed directly in a browser. With a recent version of ADT the whole process can be driven from the UI without touching the command line. More detailed material is easy to find online.
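For reference, the collection can also be started from the command line; the script lives under the SDK's tools/systrace directory, and an invocation looks something like the following (the duration, output file name, and category list here are only an example):
python systrace.py --time=10 -o mytrace.html gfx view sched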
IV. Thoughts on Performance Optimization
Performance optimization is a big topic. Besides how to optimize, an even more important question is whether optimization is needed at all. Decades ago there was already extensive discussion of performance optimization, and a hard-won lesson emerged: optimization is more likely to do harm than good, especially premature optimization. In the course of optimizing, the software you produce may end up neither fast nor correct, and not easy to fix.
Do not sacrifice sound structure for performance. Strive to write good programs rather than fast ones.
However, this does not mean performance can be ignored until the program is finished. Implementation problems can be corrected by later optimization, but pervasive structural flaws that limit performance are almost impossible to fix without rewriting the program. Changing a fundamental aspect of the design after the system is complete can leave it structurally unsound and hard to maintain and improve. So performance must be considered during the design phase.
Try to avoid designs that limit performance. Consider the performance consequences of your design decisions. Warping the code just to obtain good performance is a very bad idea. And measure performance before and after every attempted optimization.
Reposted from: http://rayleeya.iteye.com/blog/1961005
"Android Development" Android performance optimization