Is your Java code friendly to JIT compilation?


The JIT compiler is one of the most powerful and important components of the Java Virtual Machine (hereinafter the JVM). However, many programs do not take full advantage of its high-performance optimization capabilities, and many developers are not even aware of how effectively their programs use the JIT.

In this article, we'll show you some simple ways to check whether your program is JIT-friendly. We are not going to cover details such as how the JIT compiler works internally; we'll just provide some simple checks and techniques to help make your code JIT-friendly and therefore better optimized.

The key point of JIT compilation is that the JVM automatically monitors the methods being executed by the interpreter. Once a method is seen to be called frequently, it is flagged for compilation into native machine instructions. The compilation of these frequently executed methods is done by a JVM thread in the background; until it completes, the JVM keeps executing the interpreted version of the method. Once the method has been compiled, the JVM swaps the interpreted version for the compiled version in the method's dispatch table.
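
To get a feel for this behaviour, consider a minimal sketch (the class name, constants, and loop count below are ours, chosen purely for illustration): after the loop has run for a while, the interpreter's invocation counters flag compute() as hot, a background compiler thread produces native code for it, and later iterations use the compiled version. Running it with the -XX:+PrintCompilation option discussed later in this article lets you watch this happen.

public class HotMethodDemo {

    // A tiny method body; once it has been called often enough, HotSpot flags it
    // as hot and compiles it to native code on a background compiler thread.
    static int compute(int x) {
        return x * 31 + 7;
    }

    public static void main(String[] args) {
        long sum = 0;
        // Enough iterations to comfortably cross HotSpot's compilation thresholds.
        for (int i = 0; i < 1_000_000; i++) {
            sum += compute(i);
        }
        System.out.println(sum);
    }
}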

HotSpot virtual machines perform many JIT compilation optimizations, but one of the most important techniques is inlining. During inlining, the JIT compiler effectively copies the body of a method into its callers, removing the cost of the method invocation. For example, take a look at the following code:

public int add(int x, int y) {
    return x + y;
}

int result = add(a, b);

When inlining occurs, the code above effectively becomes:

int result = a + b;

The variables a and b replace the method's parameters, and the body of the add method has been copied into the caller. Inlining can bring many benefits to your program, such as:

    • No performance penalty is incurred for making the method call

    • No indirection through pointers is required

    • No virtual method lookup is needed for inlined methods

In addition, by copying the implementation of the method into the caller, the JIT compiler gets a larger chunk of code to work with, making further optimizations and more inlining possible.

Whether a method can be inlined depends on its size. By default, only methods of 35 bytecodes or fewer are eligible for inlining. For frequently called ("hot") methods, the threshold rises to 325 bytecodes. The default limit can be changed with the -XX:MaxInlineSize=# option, and the limit for frequently called methods with the -XX:FreqInlineSize=# option. However, in the absence of proper analysis, we should not modify these settings, because changing them blindly can have unpredictable effects on the performance of a program.
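
As a concrete illustration (the jar name below is a placeholder, and the values shown are simply the defaults), the thresholds would be overridden on the command line like this; again, only do so after careful measurement:

$ java -XX:MaxInlineSize=35 -XX:FreqInlineSize=325 -jar myapp.jar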

Because inlining brings such a significant improvement to code performance, it is particularly important that as many methods as possible meet the conditions for inlining. Here we introduce a tool called JarScan that helps us detect how many methods in a program are too large to be inline-friendly.

The JarScan tool is part of the JITWatch open-source tool suite for analyzing JIT compilation. Unlike the main tool, which analyzes the JIT log produced at run time, JarScan performs static analysis of jar files. Its output format is CSV, and the results list the methods whose size exceeds the inlining threshold for frequently called methods. JITWatch and JarScan are part of the AdoptOpenJDK project, led by Chris Newland.

Before using JarScan to produce an analysis, you need to download the binary tool (there are builds for Java 7 and Java 8) from the AdoptOpenJDK Jenkins site.

The operation is simple, as shown below:

./jarScan.sh <jars to analyse>

More details on JarScan can be found in the AdoptOpenJDK wiki.

The report generated above is useful for development teams: from it they can find out whether the program contains critical-path methods that are too large to be inlined by the JIT compiler. That process, however, relies on manual inspection. To move towards automation, you can turn on the -XX:+PrintCompilation JVM option. Turning on this option generates log lines like the following:

 37    1      java.lang.String::hashCode (67 bytes)
124    2  s!  java.lang.ClassLoader::loadClass (58 bytes)

The first column is the elapsed time, in milliseconds, from the start of the process to the moment the compilation happened. The second column is the compilation ID, which identifies the method being compiled (in HotSpot a method can be optimized, deoptimized, and re-optimized several times). The third column holds additional flags, such as s for synchronized and ! for a method with exception handlers. The last two columns are the name of the method being compiled and its size in bytecodes.

For more details on the PrintCompilation output, Stephen Colebourne has written a blog post explaining the specific meaning of each column in the log, and it is well worth reading.

The output of PrintCompilation tells us which methods the runtime is compiling, and the output of the JarScan tool tells us which methods are too large to be inlined. By combining the two, we can see clearly which of our methods are being compiled and optimized and which are not. In addition, the PrintCompilation option can be used in production, because enabling it has virtually no impact on the performance of the JIT compiler.

However, PrintCompilation has two minor limitations that can make it less convenient to use:

    1. The output does not include method signatures, so overloaded methods are difficult to tell apart.

    2. HotSpot currently cannot write this output to a separate file; it can only be printed to standard output.

The effect of the second problem is that the PrintCompilation lines are mixed in with the program's ordinary log output. For most server-side programs we therefore need a filtering step to extract the PrintCompilation lines into a separate log. With that in place, the simplest way to judge whether a method is JIT-friendly is to follow these steps (a sketch of the workflow follows the list):

    1. Determine which methods in the program lie on the critical path being examined.

    2. Check that these methods do not appear in the JarScan output.

    3. Check that these methods do appear in the PrintCompilation output.
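
A minimal sketch of that workflow under assumed names (the application jar, the com.example class, and the file names are all hypothetical) might look like this:

# Run with PrintCompilation and capture standard output, which contains the compilation lines.
$ java -XX:+PrintCompilation -jar myapp.jar > app_stdout.log

# Step 3: the critical-path method should show up as having been compiled.
$ grep "com.example.pricing.PricingEngine::calculate" app_stdout.log

# Step 2: the same method should NOT be listed in the JarScan report of too-large methods.
$ grep "PricingEngine" large_app_methods.txt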

If a method exceeds the inlining threshold, the most common remedy is to split the large, important method into several smaller methods that can each be inlined; this kind of change usually yields better execution efficiency. However, as with all performance optimizations, the execution efficiency before the change must be measured and recorded, and compared with the measurements after the change, before deciding whether to keep the optimization. Changes made in the name of performance should never be made blindly.
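
As a hedged illustration of that kind of refactoring (the Order class, its fields, and the pricing rules below are all invented for the example), the work of one large critical-path method can be spread across small private helpers, each of which stays comfortably within inlining-friendly territory:

public class OrderProcessor {

    static class Order {
        int quantity;
        int unitPriceCents;
        boolean preferredCustomer;

        Order(int quantity, int unitPriceCents, boolean preferredCustomer) {
            this.quantity = quantity;
            this.unitPriceCents = unitPriceCents;
            this.preferredCustomer = preferredCustomer;
        }
    }

    // Critical-path entry point: now just a sequence of calls to small, inlinable helpers.
    int totalCents(Order o) {
        validate(o);
        int net = netPriceCents(o);
        return applyDiscount(net, o);
    }

    private void validate(Order o) {
        if (o.quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive");
        }
    }

    private int netPriceCents(Order o) {
        return o.quantity * o.unitPriceCents;
    }

    private int applyDiscount(int netCents, Order o) {
        return o.preferredCustomer ? netCents * 95 / 100 : netCents;
    }

    public static void main(String[] args) {
        System.out.println(new OrderProcessor().totalCents(new Order(3, 500, true)));
    }
}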

Almost all Java programs rely on a large number of libraries that provide critical functionality. JarScan can help us detect which methods in those libraries or frameworks exceed the inlining threshold. As a concrete example, let's examine the JRE's main runtime library, the rt.jar file.

To make the results more interesting, we compared Java 7 and Java 8 and looked at how this library has changed. We need both a Java 7 and a Java 8 JDK installed before we start. First, we run JarScan over each version's rt.jar file and save the reports for later analysis:

$ ./jarScan.sh /Library/Java/JavaVirtualMachines/jdk1.7.0_71.jdk/Contents/Home/jre/lib/rt.jar > large_jre_methods_7u71.txt
$ ./jarScan.sh /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/rt.jar > large_jre_methods_8u25.txt

After running the above, we have two CSV files: one for JDK 7u71 and another for JDK 8u25. Let's see how the inlining picture changes between versions. A first, very simple check is to count how many methods in each version of the JRE are not JIT-friendly:

$ wc -l large_jre_methods_*
3684 large_jre_methods_7u71.txt
3576 large_jre_methods_8u25.txt

We can see that Java 8 has just over 100 fewer inline-unfriendly methods than Java 7. Let's dig a little deeper and look at the changes in some key packages. To make that easier to follow, recall that each line of JarScan's output consists of three fields, as follows:

"<package>","<method name and signature>",<num of bytes>

Knowing this format, we can use standard Unix text-processing tools to study the reports. For example, suppose we want to compare the inline-unfriendly methods in the java.lang package between Java 7 and Java 8:

$ cat large_jre_methods_7u71.txt large_jre_methods_8u25.txt | grep -i ^\"java.lang | sort | uniq -c

The command above uses grep to keep only the lines in each report that begin with "java.lang", i.e. the inline-unfriendly methods of classes in the java.lang package and its subpackages. sort | uniq -c is a classic Unix idiom: sort groups identical lines together, and uniq -c then collapses each group into a single line, prefixed with the number of times that line occurred. Let's take a look at the result of the command:

$ cat large_jre_methods_7u71.txt large_jre_methods_8u25.txt | grep -i ^\"java.lang | sort | uniq -c
      2 "java.lang.CharacterData00","int getNumericValue(int)",835
      2 "java.lang.CharacterData00","int toLowerCase(int)",1339
      2 "java.lang.CharacterData00","int toUpperCase(int)",1307
// ... skipped output
      2 "java.lang.invoke.DirectMethodHandle","private static java.lang.invoke.LambdaForm makePreparedLambdaForm(java.lang.invoke.MethodType,int)",613
      1 "java.lang.invoke.InnerClassLambdaMetafactory","private java.lang.Class spinInnerClass()",497
// ... more output

The first entries in the report have a count of 2 (the count is produced by uniq -c), which means the bytecode size of these methods is the same in Java 7 and Java 8. While this is not absolute proof that their bytecode is unchanged, in practice such methods can usually be treated as unchanged. A count of 1 means one of the following:

a) The bytecode of the method changed between the two versions, or

b) The method is new in one of the versions.

Let's take a look at the rows whose count is 1:

      1 "java.lang.invoke.AbstractValidatingLambdaMetafactory","void validateMetafactoryArgs()",864
      1 "java.lang.invoke.InnerClassLambdaMetafactory","private java.lang.Class spinInnerClass()",497
      1 "java.lang.reflect.Executable","java.lang.String sharedToGenericString(int,boolean)",329

All three of these inline-unfriendly methods come from Java 8, so they are new methods. The first two are related to the implementation of lambda expressions, and the third is related to an adjustment of the inheritance hierarchy in the reflection subsystem: Java 8 introduces a common base class, Executable, from which Method and Constructor both inherit.

Finally, let's take a look at some surprising results from the core of the JDK class library:

$ grep -i ^\"java.lang.String large_jre_methods_8u25.txt
"java.lang.String","public java.lang.String[] split(java.lang.String,int)",326
"java.lang.String","public java.lang.String toLowerCase(java.util.Locale)",431
"java.lang.String","public java.lang.String toUpperCase(java.util.Locale)",439

From this output we can see that, even in Java 8, some key methods of java.lang.String are in an inline-unfriendly state. In particular, it may seem strange that toLowerCase and toUpperCase are too large to be inlined. However, these methods are more complex, and larger than the inline-friendly threshold, because they handle full Unicode data and locale-specific rules rather than simple ASCII data, which increases both the complexity and the size of the methods.

For programs that have high performance requirements and are guaranteed to process only ASCII data, a common approach is to implement a small StringUtils class of our own. Such a class contains static methods that provide the functionality of the inline-unfriendly methods above, but the static methods are compact enough to meet the inlining requirements.
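
A minimal sketch of such a class follows (the class name StringUtils, the method names, and the ASCII-only contract are our own assumptions, not part of the JDK): the per-character helper is only a handful of bytecodes, and both methods are kept deliberately compact so that they remain attractive inlining candidates, unlike the general, locale-aware java.lang.String implementations.

public final class StringUtils {

    private StringUtils() { }

    // Upper-cases a single ASCII letter; every other character is returned unchanged.
    public static char toUpperAscii(char c) {
        return (c >= 'a' && c <= 'z') ? (char) (c - 32) : c;
    }

    // ASCII-only replacement for String.toUpperCase(Locale) on the hot path.
    public static String toUpperAscii(String s) {
        char[] chars = s.toCharArray();
        for (int i = 0; i < chars.length; i++) {
            chars[i] = toUpperAscii(chars[i]);
        }
        return new String(chars);
    }

    public static void main(String[] args) {
        System.out.println(toUpperAscii("jit-friendly ascii only"));
    }
}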

The improvements discussed above are mostly based on static analysis. Beyond that, the powerful JITWatch tool can help us optimize further. JITWatch requires the -XX:+LogCompilation option to be set in order to produce the log it consumes. That log is in XML format, rather than the simple text output of PrintCompilation, and it is large, typically reaching hundreds of MB. Producing it also affects the running program (mainly through the cost of writing the log), so this option is not intended for use in online production environments.
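
For reference, a typical invocation might look like the following (the application jar and the log file name are placeholders). Note that LogCompilation is a diagnostic option and has to be unlocked first; JITWatch also suggests enabling class-loading tracing so that it can resolve the classes mentioned in the log:

$ java -XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading \
       -XX:+LogCompilation -XX:LogFile=hotspot_compilation.log -jar myapp.jar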

Combining PrintCompilation and JarScan is not difficult, and it provides a simple, practical set of steps, especially for a development team that wants to study how just-in-time compilation behaves in its program. In most performance-optimization work, a quick analysis of this kind helps us pick off the goals that are easy to achieve.

About the author

Ben Evans is the CEO of jClarity, a startup dedicated to Java and JVM performance analysis. He is also one of the leaders of the London Java Community and holds a seat on the Java Community Process Executive Committee. His previous projects include performance testing for the Google IPO, financial trading systems, and websites for some of the best-known films of the 90s.

Source: infoq.com/cn
