"Java Virtual Machine Discovery Road Series": JIT compiler

Source: Internet
Author: User

Guo Jia
Email: [Email protected]
Blog: http://blog.csdn.net/allenwells
Github: https://github.com/allenwell

Why does the Java virtual machine contain both an interpreter and a compiler?

This takes both startup speed and execution efficiency into account. A Java program is initially executed by the interpreter. When the virtual machine discovers that a method or block of code runs particularly frequently, that code is marked as "hot code". To improve the execution efficiency of hot code, the virtual machine compiles it at run time into machine code for the local platform and applies various levels of optimization.

When an aggressive optimization made by the compiler turns out not to hold (for example, after loading a new class that changes the type inheritance hierarchy, or when a rare "uncommon trap" is hit), the optimized code can be deoptimized and execution returns to the interpreted state to continue.

The cooperation between the two is shown in the following figure:

[Figure: cooperation between the interpreter and the JIT compilers]

1. Compilation Modes

There are two compilers built into the HotSpot JVM, the client compiler (C1) and the server compiler (C2). The virtual machine runs in client mode by default, but we can also choose a mode explicitly:

    • -client: forces the virtual machine to run in client mode
    • -server: forces the virtual machine to run in server mode

In either client or server mode, the virtual machine runs in mixed mode, using the interpreter and the compiler together. This can be changed with:

    • -Xint: forces the virtual machine to run in interpreted mode
    • -Xcomp: forces the virtual machine to run in compiled mode

The operating modes described above are shown in the following figure:

[Figure: interpreted, compiled, and mixed operating modes]
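To experiment with these modes, a small program like the following can be run under each flag; the class and method names are invented for this illustration. Adding -XX:+PrintCompilation also logs which methods the JIT compiles.

```java
// ModeDemo.java, a hypothetical demo class for trying the mode flags.
// Run, for example:
//   java -Xint  ModeDemo               (pure interpretation, noticeably slower)
//   java -Xcomp ModeDemo               (compile everything up front)
//   java -XX:+PrintCompilation ModeDemo (log what the JIT compiles)
public class ModeDemo {
    // A small method invoked many times, so the JIT will mark it hot.
    static long square(long x) {
        return x * x;
    }

    static long hotLoop(int iterations) {
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            sum += square(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = hotLoop(1_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result=" + result + " elapsedMs=" + elapsedMs);
    }
}
```

Comparing the elapsed time under -Xint and under the default mixed mode gives a rough feel for what the compiler contributes.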

2. Tiered Compilation

Why is tiered compilation needed?

Because the just-in-time compiler needs program run time to compile native code, compiling better-optimized code takes longer. Moreover, to compile better-optimized code, the interpreter may also have to collect performance monitoring information for the compiler, which slows down interpreted execution as well. To find a balance between program startup responsiveness and running efficiency, a tiered compilation strategy is used.

The tiered strategy is as follows:

    • Tier 0: the program is interpreted; the interpreter does not enable performance monitoring and can trigger Tier 1 compilation.
    • Tier 1: C1 compilation, which compiles bytecode into native code, performs simple and reliable optimizations, and adds performance-monitoring logic if necessary.
    • Tier 2: C2 compilation, which compiles bytecode into native code, enables some compile-time optimizations, and even performs some unreliable aggressive optimizations based on performance monitoring information.
3. Compiled Objects

The objects of compilation are the hot code that will be compiled and optimized, of which there are two kinds:

    • Methods that are invoked many times
    • Loop bodies that are executed many times
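Both kinds of hot code can be seen in a small sketch; the class and method names below are invented for illustration:

```java
public class HotCodeKinds {
    // Kind 1: a method invoked many times. Its method invocation counter grows.
    static int addOne(int x) {
        return x + 1;
    }

    // Kind 2: a method invoked only once whose loop body runs many times.
    // Its back edge counter grows, which can trigger on-stack replacement (OSR)
    // compilation of the method while the loop is still running.
    static long longLoop(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {  // each iteration takes the loop's back edge
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        int v = 0;
        for (int i = 0; i < 200_000; i++) {
            v = addOne(v);             // many invocations of a tiny method
        }
        long t = longLoop(200_000);    // one invocation, many back edges
        System.out.println(v + " " + t);
    }
}
```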
4. Trigger Conditions

The phrase "many times" appears repeatedly above, so how many times counts as many?

This is a question of trigger conditions. Determining whether a piece of code is hot code, and whether it needs to trigger just-in-time compilation, is called hot spot detection.

There are two approaches to hot spot detection:

4.1 Sample-based hot spot detection

The virtual machine periodically checks the top of each thread's stack; if some method is found to appear frequently at the top of the stack, that method is a hot method.

4.2 Counter-based hot spot detection

The virtual machine establishes a counter for each method (or block of code) and counts the number of times it is executed; if the count exceeds a certain threshold, the method is considered a hot method.

The HotSpot JVM uses the second approach, counter-based hot spot detection, and prepares two kinds of counters for each method:

4.2.1 Method Call Counter

Its threshold is 1,500 invocations in client mode and 10,000 in server mode, and it can be set explicitly with the parameter -XX:CompileThreshold.

With default settings, the method invocation count is not the absolute number of times the method is called but its relative execution frequency, that is, the number of times the method is invoked within a certain period of time. If, after that period, the number of calls is still not enough to submit the method to the just-in-time compiler, the method's invocation counter is reduced by half. This process is called counter decay, and the period is called the counter half-life time of the method statistics. Counter decay can be turned off with the parameter -XX:-UseCounterDecay.
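The interaction between the threshold and counter decay can be sketched with a toy model; this illustrates the idea only and is not HotSpot's actual implementation:

```java
// A toy model of a method invocation counter with counter decay.
// The numbers and class names are illustrative, not HotSpot internals.
public class DecayingCounter {
    final int threshold;   // e.g. 1500 (client mode) or 10000 (server mode)
    int count = 0;

    DecayingCounter(int threshold) {
        this.threshold = threshold;
    }

    // Called on every method invocation; returns true when the method
    // should be submitted to the JIT compiler.
    boolean recordInvocation() {
        count++;
        return count > threshold;
    }

    // Called once per half-life period: if the method did not get hot in
    // time, its counter is halved, so only frequently called methods
    // ever accumulate past the threshold.
    void decay() {
        count = count / 2;
    }

    public static void main(String[] args) {
        DecayingCounter c = new DecayingCounter(1500);
        for (int i = 0; i < 1000; i++) c.recordInvocation(); // 1000 calls: not hot yet
        c.decay();                                           // half-life passes
        System.out.println("count after decay = " + c.count);
    }
}
```

Under this model, a method called 1,000 times per period never triggers compilation, while a burst of 1,501 calls within one period does.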

The full process by which the method invocation counter triggers just-in-time compilation is shown in the following figure:

[Figure: JIT compilation triggered by the method invocation counter]

4.2.2 Back edge counter

What is a back edge?
In bytecode, an instruction that makes control flow jump backward is called a back edge.
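For example, a simple counting loop compiles to bytecode that ends each iteration with a backward goto. The comment in the sketch below shows roughly what `javap -c` prints for such a loop (exact offsets can vary by compiler):

```java
public class BackEdgeDemo {
    // Compile this class and run `javap -c BackEdgeDemo` to see the loop's
    // bytecode. The loop body ends with an instruction of roughly this shape:
    //
    //   16: goto 4    // jumps backward to offset 4: this is the back edge
    //
    // Every iteration executes that goto, incrementing the back edge counter.
    static int sumTo(int n) {
        int s = 0;
        for (int i = 0; i < n; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sumTo(10));
    }
}
```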

The back edge counter counts the number of times the loop body code in a method is executed, and its threshold can be adjusted indirectly through the parameter -XX:OnStackReplacePercentage.

    • When the virtual machine runs in client mode, the back edge counter threshold is calculated as:

method invocation counter threshold (CompileThreshold) × OSR ratio (OnStackReplacePercentage) / 100

The default value of OnStackReplacePercentage is 933; with the default values, the back edge counter threshold of a client-mode virtual machine is 13,995.

    • When the virtual machine runs in server mode, the back edge counter threshold is calculated as:

method invocation counter threshold (CompileThreshold) × (OSR ratio (OnStackReplacePercentage) - interpreter profiling ratio (InterpreterProfilePercentage)) / 100

The default value of OnStackReplacePercentage is 140 and the default value of InterpreterProfilePercentage is 33.
With the default values, the back edge counter threshold of a server-mode virtual machine is 10,700.
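The two default thresholds can be checked with a few lines of arithmetic:

```java
// Computing the default back edge counter thresholds from the formulas above.
public class OsrThresholds {
    static int clientThreshold(int compileThreshold, int osrPercentage) {
        return compileThreshold * osrPercentage / 100;
    }

    static int serverThreshold(int compileThreshold, int osrPercentage,
                               int interpreterProfilePercentage) {
        return compileThreshold * (osrPercentage - interpreterProfilePercentage) / 100;
    }

    public static void main(String[] args) {
        // Client mode defaults: CompileThreshold=1500, OnStackReplacePercentage=933
        System.out.println(clientThreshold(1500, 933));      // 13995
        // Server mode defaults: CompileThreshold=10000,
        // OnStackReplacePercentage=140, InterpreterProfilePercentage=33
        System.out.println(serverThreshold(10000, 140, 33)); // 10700
    }
}
```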

The process by which the back edge counter triggers just-in-time (OSR) compilation is shown in the following figure:

[Figure: OSR compilation triggered by the back edge counter]

The back edge counter differs from the method invocation counter in that it has no counter decay, so it counts the absolute number of times the loop executes.

5. Compilation Process

Under the default settings, whether for a just-in-time compilation request produced by a method invocation or for an OSR compilation request, the virtual machine continues to interpret the code as before until the compiler finishes, while the compilation proceeds in a background compilation thread. Background compilation can be disabled with -XX:-BackgroundCompilation; in that case, once a JIT compilation condition is met, the executing thread submits the request to the virtual machine and waits until compilation is complete before starting to execute the native code produced by the compiler.
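The difference between background and blocking compilation can be sketched with a toy model. All names here are invented; real HotSpot does this inside the VM, not in application code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A toy model of background compilation: the application thread keeps
// "interpreting" while a background thread "compiles"; once compilation
// finishes, later calls use the compiled version.
public class BackgroundCompileModel {
    private static volatile boolean compiled = false;

    static int interpret(int x)    { return x + 1; } // stands in for interpreted code
    static int compiledCode(int x) { return x + 1; } // stands in for JIT output

    // Dispatch: same result either way, only the "execution engine" differs.
    static int call(int x) {
        return compiled ? compiledCode(x) : interpret(x);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService compilerThread = Executors.newSingleThreadExecutor();
        Future<?> compilation = compilerThread.submit(() -> {
            // ... the compiler works here in the background ...
            compiled = true;
        });
        int r1 = call(41);   // may still run the "interpreted" version
        compilation.get();   // with -XX:-BackgroundCompilation the app thread
                             // would block like this; by default it does not wait
        int r2 = call(41);   // now runs the "compiled" version
        System.out.println(r1 + " " + r2);
        compilerThread.shutdown();
    }
}
```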

So what does the compiler do during the background compilation process?

The compilation processes of the server compiler and the client compiler are not the same, so let's look at each separately.

5.1 Client Compiler compilation process
    1. Phase 1: a platform-independent front end constructs a high-level intermediate representation (HIR) from the bytecode. HIR represents code values in static single assignment (SSA) form, which makes it easier to implement certain optimizations during and after its construction. Before this, the compiler performs some basic optimizations on the bytecode, such as method inlining and constant propagation.
    2. Phase 2: a platform-dependent back end generates a low-level intermediate representation (LIR) from the HIR. Before this, other optimizations are performed on the HIR, such as null check elimination and range check elimination, so that the HIR reaches a more efficient code form.
    3. Phase 3: the platform-dependent back end allocates registers on the LIR using the linear scan register allocation algorithm, performs peephole optimization on the LIR, and then generates machine code.
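The effect of method inlining and constant propagation can be illustrated at the source level. The JIT performs the equivalent transformation on bytecode/HIR; the "after" method below is only a hand-written approximation of the result:

```java
// Hand-illustration of two optimizations named above: method inlining
// and constant propagation/folding.
public class C1Optimizations {
    static int half(int x) {
        return x / 2;
    }

    // Before optimization: a constant expression plus a small method call.
    static int before() {
        int k = 10 + 2;     // constant folding target: k == 12
        return half(k) + k; // inlining target: half(12) becomes 12 / 2
    }

    // After inlining half() and propagating the constants, the JIT
    // effectively reduces the whole method to a single constant.
    static int after() {
        return 18;          // (12 / 2) + 12
    }

    public static void main(String[] args) {
        System.out.println(before() + " == " + after());
    }
}
```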

The entire process is shown in the following figure:

[Figure: client compiler compilation process]

5.2 Server Compiler compilation process

The server compiler is a compiler specifically tuned for typical server applications and for server performance. It performs all the classic optimization actions, including:

    • Dead code elimination
    • Loop unrolling
    • Loop-invariant code motion
    • Common subexpression elimination
    • Constant propagation
    • Basic block reordering
    • Range check elimination
    • Null check elimination
    • Guarded inlining
    • Branch frequency prediction
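Two of these, common subexpression elimination and loop-invariant code motion, can be illustrated by hand; C2 applies the equivalent transformations internally:

```java
// Hand-illustration of common subexpression elimination and
// loop-invariant code motion.
public class C2Optimizations {
    // Naive form: a * b is computed twice per iteration, and it never
    // changes across iterations.
    static long naive(int a, int b, int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (a * b) + (a * b) + i; // common subexpression, loop-invariant
        }
        return sum;
    }

    // Equivalent form after both optimizations: the duplicate computation
    // is merged and the invariant value is hoisted out of the loop.
    static long optimized(int a, int b, int n) {
        long invariant = 2L * (a * b);
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += invariant + i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(naive(3, 4, 10) + " == " + optimized(3, 4, 10));
    }
}
```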

Copyright notice: this is the blogger's original article and may not be reproduced without the blogger's permission.

"Java Virtual Machine Discovery Road Series": JIT compiler
