In the Java compilation system, turning a Java source file into machine instructions that the computer can execute requires two compilation steps: the first converts the .java file into a .class file, and the second converts the .class file into machine instructions.
The first compilation is performed by the javac command.
In the second compilation phase, the JVM interprets the bytecode, reading it instruction by instruction and translating each one into the corresponding machine instructions. Obviously, code executed this way is much slower than a program compiled into an executable binary. This is how the traditional JVM interpreter works. To solve this efficiency problem, JIT (just-in-time compilation) technology was introduced.
With JIT, Java programs are still initially run by the interpreter, but when the JVM finds that a method or block of code runs especially frequently, it treats it as "hot spot code". The JIT then compiles this hot spot code into native machine code for the local machine, optimizes it, and caches the compiled machine code for the next use.
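If you want to see hot spot detection at work, one rough way is to run a small program with the -XX:+PrintCompilation flag and watch which methods the JIT compiles. Below is a minimal sketch, assuming a HotSpot JVM; the class and method names are only illustrative, and the exact log format varies between JVM versions.

// Run with: java -XX:+PrintCompilation JitDemo
// The square() method is called often enough to become "hot spot code",
// so a line for it should appear in the compilation log.
public class JitDemo {

    private static int square(int n) {
        return n * n;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}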
Since JIT compilation and hot spot detection have already been covered in my in-depth analysis of the Java compiler, they will not be repeated here; this article mainly introduces JIT optimizations. The most important optimization in JIT is escape analysis.
Escape analysis
For the concept of escape analysis, you can refer to "Objects are not necessarily allocated memory on the heap"; here is a brief review:
The basic behavior of escape analysis is to analyze an object's dynamic scope: when an object is defined inside a method, it may be referenced by external methods, for example by being passed as a call argument to some other place. This is called a method escape.
For example, the following code:
public static StringBuffer craeteStringBuffer(String s1, String s2) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    return sb;
}

public static String createStringBuffer(String s1, String s2) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    return sb.toString();
}
The sb in the first piece of code escapes, while the sb in the second piece of code does not.
Using escape analysis, the compiler can optimize the code as follows:
First, synchronization elimination. If an object is found to be accessible from only one thread, operations on that object can be performed without synchronization.
Second, converting heap allocation into stack allocation. If an object is allocated in a subroutine and pointers to it never escape, the object may be a candidate for stack allocation rather than heap allocation.
Third, separated objects or scalar substitution. Some objects may not need to exist as a contiguous memory structure in order to be accessed, in which case part (or all) of the object may be stored not in memory but in CPU registers.
When Java code is running, JVM parameters can be used to specify whether escape analysis is turned on:
-XX:+DoEscapeAnalysis: turns on escape analysis
-XX:-DoEscapeAnalysis: turns off escape analysis. Escape analysis has been enabled by default since JDK 1.7, so it only needs to be specified explicitly when you want to turn it off: -XX:-DoEscapeAnalysis
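One rough way to observe the effect of these flags (a minimal sketch, not an authoritative benchmark; the class name, the User type, and the heap size are assumptions made for illustration) is to allocate a large number of short-lived, non-escaping objects with a small heap and compare GC activity with escape analysis turned on and off:

// Escape analysis on (the default since JDK 1.7):
//   java -Xmx10m -XX:+DoEscapeAnalysis -XX:+PrintGC EscapeTest
// Escape analysis off:
//   java -Xmx10m -XX:-DoEscapeAnalysis -XX:+PrintGC EscapeTest
// (-XX:+PrintGC applies to JDK 8 and earlier; on JDK 9+ use -Xlog:gc instead.)
// With escape analysis off, far more GC lines are typically printed, because
// every User instance really has to be allocated on the heap.
public class EscapeTest {

    static class User {
        int id;
        String name;
    }

    // The User object never leaves this method, so it does not escape.
    private static void alloc() {
        User user = new User();
        user.id = 1;
        user.name = "hollis";
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000_000; i++) {
            alloc();
        }
    }
}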
Synchronization elimination
When dynamically compiling a synchronized block, the JIT compiler can use escape analysis to determine whether the lock object used by the synchronized block is accessible by only one thread and has not been published to other threads.
If this analysis confirms that the lock object used by the synchronized block can only be accessed by one thread, the JIT compiler removes the synchronization for that part of the code when compiling the block. This process of removing synchronization is called synchronization elimination, also known as lock elimination.
For example, the following code:
public void f() {
    Object hollis = new Object();
    synchronized (hollis) {
        System.out.println(hollis);
    }
}
The code locks on the hollis object, but the lifetime of hollis is only within the f() method and it is not accessed by other threads, so it gets optimized during the JIT compilation phase. It is optimized into:
public void f() {
    Object hollis = new Object();
    System.out.println(hollis);
}
Therefore, when synchronized is used, if the JIT finds through escape analysis that there is no thread-safety problem, it will perform lock elimination.
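A common real-world case is StringBuffer: its append() methods are synchronized, but when the StringBuffer is a purely local variable, as in the second method of the earlier example, the lock can never be contended and the JIT may elide it. The sketch below simply restates that situation; the -XX:+EliminateLocks flag mentioned in the comments is HotSpot's switch for lock elimination (on by default, and it relies on escape analysis being enabled).

// sb never escapes concat(), so the locks taken inside the synchronized
// append() calls can be removed by the JIT.
// In HotSpot this optimization can be toggled with -XX:+/-EliminateLocks
// and requires -XX:+DoEscapeAnalysis.
public static String concat(String s1, String s2) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);          // synchronized, but the lock can be elided
    sb.append(s2);          // synchronized, but the lock can be elided
    return sb.toString();   // only the resulting String escapes, not sb
}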
Scalar substitution
A scalar is a piece of data that cannot be decomposed into smaller pieces of data. The primitive types in Java are scalars. In contrast, data that can be decomposed is called an aggregate; an object in Java is an aggregate, because it can be decomposed into other aggregates and scalars.
In the JIT phase, if escape analysis finds that an object will not be accessed by the outside world, then after JIT optimization the object is replaced by the member variables it contains. This process is scalar substitution.
public static void main(String[] args) {
    alloc();
}

private static void alloc() {
    Point point = new Point(1, 2);
    System.out.println("point.x=" + point.x + "; point.y=" + point.y);
}

static class Point {
    private int x;
    private int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
In the above code, the Point object does not escape from the alloc() method, and the Point object can be decomposed into scalars. So the JIT will not actually create a Point object, but will instead directly use the two scalars int x and int y to replace the Point object.
After scalar substitution, the above code becomes:
private static void alloc() {
    int x = 1;
    int y = 2;
    System.out.println("point.x=" + x + "; point.y=" + y);
}
As you can see, after escape analysis finds that the Point object does not escape, it is replaced by two scalars. So what is the benefit of scalar substitution? It greatly reduces heap memory usage, because once an object no longer needs to be created, heap memory no longer needs to be allocated for it.
Scalar substitution provides a good foundation for on-stack allocation.
On-stack allocation
In the Java virtual machine, it is common knowledge that objects are allocated memory in the Java heap. However, there is a special case: if escape analysis finds that an object does not escape from its method, it may be optimized into an on-stack allocation. This eliminates the need to allocate memory on the heap and removes the need to garbage-collect that object.
For a detailed description of on-stack allocation, you can refer to "Objects are not necessarily allocated memory on the heap".
Here, put briefly: current virtual machines do not actually implement true on-stack allocation. In the example from "Objects are not necessarily allocated memory on the heap", where the object was not allocated on the heap, the effect was in fact achieved through scalar substitution.
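If you want to convince yourself that it is really scalar substitution rather than true on-stack allocation at work, a rough experiment (a minimal sketch under the same assumptions as the earlier EscapeTest example; the class name and heap size are illustrative) is to keep escape analysis enabled but toggle HotSpot's scalar replacement flag, -XX:+EliminateAllocations / -XX:-EliminateAllocations, and compare GC activity:

// Compare:
//   java -Xmx15m -XX:+DoEscapeAnalysis -XX:+EliminateAllocations -XX:+PrintGC ScalarTest
//   java -Xmx15m -XX:+DoEscapeAnalysis -XX:-EliminateAllocations -XX:+PrintGC ScalarTest
// With scalar replacement disabled, the Point objects are really created on the
// heap and GC lines appear; with it enabled, most of these allocations disappear.
public class ScalarTest {

    static class Point {
        int x;
        int y;

        Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
    }

    // point never escapes alloc(), so it can be broken up into two ints.
    private static int alloc(int i) {
        Point point = new Point(i, i + 1);
        return point.x + point.y;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += alloc(i);
        }
        System.out.println(sum);
    }
}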
Escape analysis is not yet mature
The paper on escape analysis was published in 1999, but it was not implemented until JDK 1.6, and even today the technology is not fully mature.
The fundamental reason is that there is no guarantee that the performance gained through escape analysis will outweigh its cost. Escape analysis can enable scalar substitution, on-stack allocation, and lock elimination, but the escape analysis itself also requires a series of complex analyses, which is a fairly time-consuming process.
An extreme example is when, after escape analysis, it turns out that not a single object fails to escape; in that case, the time spent on the escape analysis process has simply been wasted.
Although this technique is not yet fully mature, it is still a very important means of optimization in just-in-time compilers.