Coding principles and the Java compiler

When writing code, we often mention two principles:

1. Methods should be as short as possible; large methods should be split into smaller ones;

2. Do not reinvent the wheel.

When we emphasize these two principles, we usually focus only on what benefits us as developers: concise, easy-to-maintain code. In fact, following them also makes it much easier for the Java compiler, or more precisely the JVM's runtime compiler, to optimize the code.

Java compiler optimization

The compilation process of a Java application differs from that of statically compiled languages (such as C or C++). A static compiler converts source code directly into machine code that can be executed on the target platform, so different hardware platforms require different compilers. The Java compiler instead converts Java source code into portable JVM bytecode. Unlike a static compiler, javac performs hardly any optimization: in a statically compiled language the optimization work is done by the compiler, whereas in Java it is done by the runtime while the program executes.
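
To see how little javac optimizes, you can compile a trivial class and inspect its bytecode with javap. Below is a minimal sketch (the class name and numbers are made up for illustration):

    public class Sketch {
        static int sum(int n) {
            int total = 0;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sum(10_000));
        }
    }

Compiling it with javac Sketch.java and disassembling it with javap -c Sketch shows bytecode that mirrors the source almost line for line: the loop is still a loop, and nothing has been unrolled or precomputed. Any such transformations are left to the JVM at run time.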

Just-in-time compilation

Interpretation is adequate as a proof of concept, but early JVMs were too slow. Next-generation JVMs added a JIT compiler to speed up execution. Strictly defined, a JIT-based virtual machine converts all bytecode into machine code before it is executed, but it does so lazily: the JIT compiles a code path only when it determines that the path is about to be executed (hence the name "just-in-time compilation"). This approach lets programs start faster, because no lengthy compilation phase is needed before execution begins.

JIT technology looks promising, but it has shortcomings. JIT compilation removes the overhead of interpretation (at the cost of some additional startup time), but for several reasons the level of code optimization remained mediocre. To avoid serious startup latency for Java applications, the JIT compiler had to be fast, which means it could not spend much time on optimization. Early JIT compilers were also conservative about inlining assumptions, because they could not know which classes might be loaded later.
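
The inlining problem is easiest to see with a virtual call. The sketch below (hypothetical types, not from the original article) shows a call site whose target cannot be pinned down until the runtime knows which implementations of the interface have actually been loaded:

    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Scene {
        // If Circle were known to be the only Shape implementation, area() could
        // be inlined here. A compiler that must commit early cannot assume that,
        // because another implementation may be loaded later, which would make
        // the inlined code wrong.
        static double totalArea(Shape[] shapes) {
            double total = 0;
            for (Shape s : shapes) {
                total += s.area(); // virtual call: the target is not known statically
            }
            return total;
        }
    }

A runtime compiler that profiles the running program, as described next, can afford to inline such calls more aggressively because it sees which classes have actually been loaded and how the call site behaves.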

Technically speaking, a JIT-based virtual machine compiles each piece of bytecode before executing it; however, the term JIT is commonly used for any dynamic compilation process that converts bytecode into machine code, including in virtual machines that can also interpret bytecode.

HotSpot dynamic compilation

HotSpot's execution process combines interpretation, profiling, and dynamic compilation. Instead of converting all of the bytecode into machine code before it is executed, HotSpot first runs as an interpreter and compiles only the "hot" code, that is, the most frequently executed code. As HotSpot executes the program, it collects profiling data to determine which code segments are executed often enough to be worth compiling.

Compiling only the most frequently executed code has several performance advantages: no time is wasted compiling code that rarely runs, and the compiler can spend more time optimizing hot code paths because it knows the effort will pay off. In addition, by deferring compilation, the compiler has access to profiling data and can use it to improve optimization decisions, such as whether to inline a particular method call.

To make things more interesting, HotSpot provides two compilers: the client compiler and the server compiler. The client compiler is used by default; you can select the server compiler by specifying the -server switch when the JVM starts. The server compiler is tuned for maximum peak throughput and suits long-running server applications. The client compiler is tuned to reduce application startup time and memory footprint; its optimizations are far less complex than the server compiler's, so it needs less compilation time.
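
The effect of compiling hot code while the program runs is easy to observe with a small timing loop. The following is a minimal sketch (the class name and iteration counts are made up); on a typical JVM the later rounds run noticeably faster than the first, once work() has been profiled and compiled:

    public class WarmupDemo {
        // A deliberately hot method: called repeatedly so the JVM will compile it.
        static double work() {
            double sum = 0;
            for (int i = 1; i <= 1_000_000; i++) {
                sum += Math.sqrt(i);
            }
            return sum;
        }

        public static void main(String[] args) {
            for (int round = 1; round <= 10; round++) {
                long start = System.nanoTime();
                work();
                long elapsed = System.nanoTime() - start;
                System.out.printf("round %d: %.2f ms%n", round, elapsed / 1e6);
            }
        }
    }

On JVMs that still distinguish the two compilers, running the program with java -client WarmupDemo and java -server WarmupDemo also gives a feel for the trade-off described above; the exact numbers depend heavily on the JVM version and hardware.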

The HotSpot server compiler can perform a wide variety of optimizations. It applies many of the standard optimizations found in static compilers, such as code hoisting, common-subexpression elimination, loop unrolling, range-check elimination, dead-code elimination, and data-flow analysis, as well as optimization techniques that are not practical in statically compiled languages, such as aggressive inlining of virtual method calls.

Continuous recompilation

Another interesting aspect of the HotSpot approach is that compilation is not an all-or-nothing proposition. After a code path has been interpreted a certain number of times, it is compiled into machine code. But the JVM continues profiling, and it may recompile the same code later with a higher level of optimization if it decides the path is particularly hot or if future profiling data suggests further optimization opportunities. The JVM may recompile the same bytecode many times during a single application run. To get a glimpse of what the compiler is doing, invoke the JVM with the -XX:+PrintCompilation flag, which prints a short message every time the compiler (client or server) runs.
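
The sketch below (a hypothetical program, not from the original article) gives the flag something to report: its isPrime method becomes hot quickly, so lines naming it should appear in the output, sometimes more than once as it is recompiled.

    public class PrintCompilationDemo {
        static boolean isPrime(int n) {
            for (int i = 2; i * i <= n; i++) {
                if (n % i == 0) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            int count = 0;
            // Keep isPrime busy long enough for the compiler to notice it.
            for (int n = 2; n < 2_000_000; n++) {
                if (isPrime(n)) count++;
            }
            System.out.println("primes found: " + count);
        }
    }

Run it as java -XX:+PrintCompilation PrintCompilationDemo; the exact output format varies between JVM versions.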

On-stack replacement

When HotSpot first appeared, it compiled one method at a time. A method was considered hot once its cumulative execution count exceeded a threshold number of loop iterations (10,000 in the first version of HotSpot); this was counted by associating a counter with each method and incrementing it every time a backward branch was executed. However, after a method was compiled, calls already in progress did not switch to the compiled version: the method had to be exited and re-entered before the compiled version was used. As a result, in some cases the compiled version was never used at all, for example in a compute-intensive program where all the computation is done in a single invocation of one method. The heavyweight method might be compiled, but the compiled code would never run.

Recent versions of HotSpot use a technique called on-stack replacement (OSR) to switch from interpreted execution to compiled code (or from one version of compiled code to another) in the middle of a running method.
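
The program below is a minimal sketch (hypothetical name) of exactly the situation OSR was introduced for: all of the work happens inside a single invocation of main, so without OSR the compiled version of the loop could never be used.

    public class OsrDemo {
        public static void main(String[] args) {
            long checksum = 0;
            // One long-running loop in a single method invocation. Each backward
            // branch increments HotSpot's counter; once the threshold is crossed,
            // the interpreted frame can be replaced on the stack by compiled code
            // while the loop is still running.
            for (long i = 0; i < 500_000_000L; i++) {
                checksum += i ^ (i >>> 3);
            }
            System.out.println("checksum = " + checksum);
        }
    }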

From these principles of Java compilation and runtime optimization, we can see that the compiler keeps optimizing "hot code blocks" and "hot methods" to improve performance. Let us revisit the two principles we so often emphasize:

1. Write small methods. A small method tends to do one thing and is highly reusable, so it is naturally called from many places and easily becomes a "hot method".

2. Do not reinvent the wheel; use existing ones. A "wheel" shared by everyone is naturally a hot "wheel", and the compiler knows that optimizing it thoroughly will pay off, as the sketch after this list illustrates.
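
As an illustration (hypothetical names and numbers, not from the original article), the small, reusable helpers below are shared by every caller, so they quickly become hot methods the JIT compiler can compile and inline, whereas one large method doing everything would give the runtime a single, harder-to-optimize unit:

    public class OrderStats {
        // Small, single-purpose helpers -- the shared "wheels". Because every
        // caller goes through them, they become hot quickly and are prime
        // candidates for compilation and inlining.
        static double net(double gross, double discount) {
            return gross * (1.0 - discount);
        }

        static double withTax(double net, double taxRate) {
            return net * (1.0 + taxRate);
        }

        // Larger logic is just a composition of the small hot methods.
        static double finalPrice(double gross, double discount, double taxRate) {
            return withTax(net(gross, discount), taxRate);
        }

        public static void main(String[] args) {
            double total = 0;
            for (int i = 0; i < 1_000_000; i++) {
                total += finalPrice(100.0 + i % 50, 0.10, 0.07);
            }
            System.out.println(total);
        }
    }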

Reposted from: http://java.chinaitlab.com/
