Performance overview
Why is the program so slow? What on earth is it doing right now? Where did all the time go? If you find yourself asking these questions, your program has a performance problem. Compared with functional defects, a mild performance problem may not matter much: a brief pause, and it is gone. Serious performance problems, however, can make a program appear frozen or hang, and may eventually crash it. This section introduces the main indicators and manifestations of program performance.
Understanding program performance
For client programs, poor performance can seriously hurt the user experience: a frozen interface, jitter, and unresponsiveness will keep users complaining. A typical example is the Eclipse IDE appearing to freeze during a full GC, a behavior many developers have criticized. For server programs, performance matters even more, and most backend services have explicit performance targets. In the case of a web server, degraded response time or throughput, or even an out-of-memory exception followed by a crash, are exactly the problems that performance tuning must address.
In general, the performance of a program shows itself in the following ways:
-Execution speed: whether the program responds quickly and whether its response time is short enough.
-Memory allocation: whether memory is allocated reasonably, whether too much memory is consumed, and whether leaks exist.
-Startup time: how long it takes from launching the program until it can serve normal business requests.
-Load-carrying capacity: whether execution speed and response time degrade gently (a flat curve) as system load rises.
Reference indicators for performance
To analyze performance scientifically, it is important to evaluate it with quantitative indicators. Some commonly used quantitative performance indicators are:
-Execution time: the time a piece of code takes from start to finish.
-CPU time: the CPU time consumed by a function or thread.
-Memory allocation: the amount of memory the program occupies at run time.
-Disk throughput: describes disk I/O usage.
-Network throughput: describes network usage.
-Response time: the time the system takes to respond to a user action or event. The shorter the response time, the better the performance.
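The first two indicators, execution time and CPU time, can be measured directly inside a Java program. Below is a minimal sketch (class name and the loop workload are illustrative placeholders) that uses System.nanoTime() for wall-clock time and ThreadMXBean for per-thread CPU time:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class TimingDemo {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        long wallStart = System.nanoTime();
        long cpuStart = threads.getCurrentThreadCpuTime();

        // Placeholder workload; replace with the code under measurement.
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += i;
        }

        long wallNanos = System.nanoTime() - wallStart;
        long cpuNanos = threads.getCurrentThreadCpuTime() - cpuStart;

        // Wall-clock time includes any waiting; CPU time counts only
        // the time this thread actually spent running on a CPU.
        System.out.printf("wall: %.2f ms, cpu: %.2f ms (sum=%d)%n",
                wallNanos / 1e6, cpuNanos / 1e6, sum);
    }
}
```

Note that a single run like this is only a rough measurement; JIT warm-up and GC can distort the numbers, so repeated runs (or a benchmark harness) give more trustworthy results.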
The barrel principle and performance bottlenecks
The barrel principle, also called the "short board theory", says that how much water a bucket holds is determined not by the tallest stave in its wall but by the shortest one. Applied to system performance optimization, this means that even if a system has ample memory and CPU resources, low disk I/O performance will cap the system's overall performance at the speed of the slowest component, the disk, rather than at the speed of the best CPU or memory. In that situation it is useless to optimize memory or CPU further; overall performance can only improve by improving disk I/O, which is then the system's performance bottleneck.
Note: according to the barrel principle, the final performance of a system depends on its worst-performing component. To improve overall performance, optimize that worst-performing component rather than the components that already perform well.
Depending on the characteristics of the application, almost any computing resource can become a system bottleneck. The resources most likely to become bottlenecks are the following.
1. Disk I/O: disk reads and writes are far slower than memory access, so if the program must wait for disk I/O to complete while running, inefficient I/O operations slow down the entire system.
2. Network operations: reading and writing network data is similar to disk I/O. Because of the uncertainty of the network environment, especially when reading and writing data over the Internet, network operations may be even slower than local disk I/O, so without special handling they can easily become a system bottleneck.
3. CPU: for applications with heavy computational demands, such as scientific computing and 3D rendering, prolonged and uninterrupted use of the CPU causes contention for CPU resources and leads to performance problems.
4. Exceptions: for Java applications, throwing and catching exceptions is expensive. If the program raises exceptions at high frequency, overall performance drops noticeably.
5. Database: most applications depend on a database, and reading and writing large amounts of data can be quite time-consuming. When the application must wait synchronously for a database operation to complete or return a result set, a slow database becomes a system bottleneck.
6. Lock contention: in highly concurrent programs, fierce lock contention takes a heavy toll on performance. It significantly increases the overhead of thread context switching, and that overhead is unrelated to the application's actual work: it consumes valuable CPU resources without producing any benefit.
7. Memory: in general, as long as the application is reasonably designed, memory read/write speed is unlikely to become a bottleneck; the exception is the rare case of very high-frequency memory swapping or scanning. The most likely way memory limits system performance is insufficient capacity: memory is tiny compared to disk, so an application can only keep its most frequently used core data in memory, which limits performance to some extent.
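To make point 6 concrete, the sketch below (all class and variable names are illustrative) contrasts four threads incrementing a shared counter under a single lock with the same work done through java.util.concurrent.atomic.LongAdder, which spreads updates across internal cells to reduce contention. A serious benchmark would need warm-up runs and a harness such as JMH, so treat the timings only as an illustration:

```java
import java.util.concurrent.atomic.LongAdder;

public class ContentionDemo {
    static long lockedCount = 0;
    static final Object lock = new Object();
    static final LongAdder adder = new LongAdder();

    // Runs the task on four threads and returns the elapsed wall time.
    static long time(Runnable task) throws InterruptedException {
        Thread[] workers = new Thread[4];
        long start = System.nanoTime();
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(task);
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        // Four threads all take the same lock repeatedly: heavy contention.
        long contended = time(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                synchronized (lock) { lockedCount++; }
            }
        });
        // LongAdder avoids a single hot spot, so threads rarely collide.
        long relaxed = time(() -> {
            for (int i = 0; i < 1_000_000; i++) adder.increment();
        });
        System.out.printf("synchronized: %d ms, LongAdder: %d ms%n",
                contended / 1_000_000, relaxed / 1_000_000);
    }
}
```

Both versions end with the same total count; the difference lies entirely in how much time the threads spend waiting for each other.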
Amdahl's Law
Amdahl's law is a very important law in computer science. It gives the formula for, and the theoretical limit of, the speedup obtained by parallelizing a serial system. Speedup is defined as the ratio of the system's elapsed time before optimization to its elapsed time after optimization: speedup = time before optimization / time after optimization. The higher the speedup, the more effective the optimization. Amdahl's law relates the speedup to the fraction of the program that must remain serial and to the number of processors. Let speedup denote the speedup, F the fraction of the program that must execute serially, and n the number of CPU processors. Then:
speedup ≤ 1 / (F + (1 − F) / n)
Example: on a dual-core processor, suppose a program consists of 5 steps, 3 of which must run serially. Then the serial fraction is F = 3/5 = 0.6 and n = 2, so by the formula the speedup is at most 1 / (0.6 + 0.4/2) = 1/0.8 = 1.25. If the number of CPU processors tends to infinity, the maximum speedup approaches 1/F, inversely proportional to the serial fraction: if 50% of the code must execute serially, the system's maximum speedup is 2.
Note: according to Amdahl's law, when using multi-core CPUs to optimize a system, the effect depends on both the number of CPUs and the system's serial fraction: the more CPUs and the lower the serial fraction, the better the optimization. Merely increasing the number of CPUs without reducing the program's serial fraction will not keep improving system performance.
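As a quick check of the example above, the formula can be evaluated directly. The class and method names below are chosen for this sketch:

```java
public class AmdahlDemo {
    // Amdahl's law: speedup <= 1 / (F + (1 - F) / n),
    // where F is the serial fraction and n the number of processors.
    static double maxSpeedup(double f, int n) {
        return 1.0 / (f + (1.0 - f) / n);
    }

    public static void main(String[] args) {
        // The example from the text: serial fraction F = 0.6, n = 2 cores.
        System.out.println(maxSpeedup(0.6, 2));         // ≈ 1.25

        // As n grows, the speedup approaches the limit 1/F:
        // with F = 0.5 it creeps toward, but never reaches, 2.0.
        System.out.println(maxSpeedup(0.5, 1_000_000));
    }
}
```

Plugging in larger n values makes the "diminishing returns" behavior obvious: going from 2 to 4 cores helps far more than going from 64 to 128 when F is large.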
The text above is adapted from: [Java Program Performance Optimization: Make Your Java Programs Faster and More Stable], by Ge Yi et al.
For reproduction, please cite the source: http://blog.csdn.net/mahoking
Java Program Performance Optimization: Performance Metrics