[Java Performance] JIT compiler Introduction

Source: Internet
Author: User

Overview of the JIT (Just-In-Time) Compiler

  1. The JIT compiler is the core of the JVM and has the greatest impact on program performance.
  2. A CPU can only execute assembly or binary code, so every program must be translated into it before the CPU can run it.
  3. C++ and Fortran use a static compiler to compile programs directly into CPU-specific binary code.
  4. PHP and Perl are interpreted languages: install the right interpreter and the program runs on any CPU, with the code interpreted and executed line by line at run time.
  5. Advantages and disadvantages of compiled languages:
    • High speed: during compilation, the compiler can gather information about the program's structure and use it to optimize the code.
    • Poor portability: the compiled binary code is CPU-specific, so the program may need to be compiled separately for each target CPU.
  6. Advantages and disadvantages of interpreted languages:
    • Strong portability: install the right interpreter and the program can run on any CPU.
    • Slow speed: the program is translated line by line, and because there is no compilation phase, the executed code receives no compiler optimization.
  7. Java takes a middle path between compiled and interpreted languages:
    • Java code is compiled: into Java bytecode, not into binary code for a particular CPU.
    • Java code is interpreted: the Java bytecode is interpreted and executed by the JVM, which translates it into CPU-specific binary code.
    • JIT compiler: compiles Java bytecode into platform-specific binary code while the program is running. Because this compilation happens during execution, the compiler is called the Just-In-Time compiler.

HotSpot Compilation
  1. The name of the HotSpot VM reflects how the JIT compiler works. When the VM starts running code, it does not compile it immediately. Every program has "hot spots" (regions of code that are executed repeatedly), and the JIT compiler compiles only the code in those regions. There are two reasons for this:
    • Compiling code that will run only once is not worth the cost; it is faster to simply interpret the Java bytecode.
    • The JVM gathers information about code as it executes it. The more often a piece of code runs, the more the JVM learns about it, and that knowledge enables better optimization when the code is finally compiled.
      • For example, in b = obj.equals(otherObj), the type of obj must be looked up to resolve the equals method, because equals may be defined anywhere on the inheritance tree. If this code executes many times, that lookup becomes expensive. While running the code, the JVM may observe that obj is always a String, so when the JIT compiler compiles this code it can call String.equals directly (the compiled code still guards against obj referring to some other type, in which case the lookup is performed again). At that point the code is optimized in two ways:
        • It switches from interpreted execution to compiled execution.
        • The method lookup phase is skipped (String's equals method is called directly).
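The equals() example above can be sketched in Java. This is an illustrative snippet (the class and method names are my own): the hot loop gives the JIT a call site whose receiver is always a String, which is exactly the situation where the virtual-method lookup can be skipped.

```java
public class DevirtDemo {
    // Hot call site: obj's runtime type is always String here, so after
    // enough iterations the JIT can skip the virtual-method lookup and
    // call (or inline) String.equals directly. The compiled code keeps a
    // type check so it can fall back if obj is ever not a String.
    static boolean hotEquals(Object obj, Object other) {
        return obj.equals(other);
    }

    public static void main(String[] args) {
        String a = new String("hello");
        String b = new String("hello");
        boolean result = false;
        // Execute the call site many times so it becomes a "hot spot".
        for (int i = 0; i < 1_000_000; i++) {
            result = hotEquals(a, b);
        }
        System.out.println(result); // prints "true"
    }
}
```

Whether the call is actually devirtualized is the JIT's decision; the program's observable behavior is identical either way.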
Summary
  1. Java combines the advantages of compiled and interpreted languages.
  2. Java compiles source files into Java bytecode; the JIT compiler then selectively compiles that bytecode into binary code the CPU can run directly.
  3. Once Java bytecode has been compiled into binary code, performance improves greatly.

Basic Optimization: Client or Server
  1. In general, you only need to choose between the client and server versions of the JIT compiler.
  2. The client compiler is selected with -client, the server compiler with -server.
  3. Which one to use has traditionally depended on the hardware; as hardware has advanced, there is no longer a firm rule mapping hardware to compiler choice.
  4. Differences between the two JIT compilers:
    • The client compiler begins compiling code earlier than the server compiler, so code runs faster during the early phase of program execution.
    • The server compiler waits longer before compiling, so it can gather more information about the program and produce more highly optimized code. This suits programs running on servers, which usually run for a long time.
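Which VM variant is actually running can be checked from standard system properties; a small sketch (the exact strings vary by vendor and version, e.g. a HotSpot server VM typically reports something like "... 64-Bit Server VM"):

```java
public class VmInfo {
    public static void main(String[] args) {
        // "java.vm.name" distinguishes client and server VMs on HotSpot,
        // e.g. "Java HotSpot(TM) 64-Bit Server VM" or "OpenJDK 64-Bit Server VM".
        System.out.println("VM name:    " + System.getProperty("java.vm.name"));
        // "java.vm.version" identifies the exact VM build.
        System.out.println("VM version: " + System.getProperty("java.vm.version"));
    }
}
```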
  5. Principles of tiered compilation:
    • Use the client JIT compiler when the JVM starts.
    • Recompile with the server JIT compiler once hot spots have formed.
    • In Java 8, tiered compilation is enabled by default.

Startup Optimization

Some sample data (startup time by compiler choice):

Application   -client   -server   -XX:+TieredCompilation   Number of classes
HelloWorld    0.08 s    0.08 s    0.08 s                   few
NetBeans      2.83 s    3.92 s    3.07 s                   ~10,000
HelloWorld    51.5 s    54.0 s    52.0 s                   ~20,000
Summary
  1. When fast startup matters most, the client JIT compiler is the better choice.
  2. In terms of startup speed, tiered compilation performs almost the same as the client compiler alone, because tiered compilation also uses the client JIT compiler at startup.

Batch Optimization

For batch-processing tasks, the amount of work is the most important factor in both the running time and which compilation policy performs best:

Number of tasks   -client    -server    -XX:+TieredCompilation
1                 0.142 s    0.176 s    0.165 s
10                0.211 s    0.348 s    0.226 s
100               0.454 s    0.674 s    0.472 s
1000              2.556 s    2.158 s    1.910 s
10000             23.78 s    14.03 s    13.56 s

Several conclusions can be drawn:

• When the number of tasks is small, the client and tiered policies perform similarly. When the number of tasks is large, tiered compilation performs best because it uses both compilers: early in the run, the client JIT compiler quickly produces some compiled code; once the program's hot spots have formed, the server JIT compiler produces highly optimized code.
• Tiered compilation always performs at least as well as the server JIT compiler alone.
• With a small number of tasks, tiered compilation matches the performance of the client JIT compiler.

Summary
  1. For a batch-processing program, test the different policies and use the fastest one.
  2. For batch-processing programs in general, tiered compilation is a sensible default.

Optimization of Long-Running Applications

For long-running applications such as servlet programs, performance is generally measured by throughput. The data below shows the throughput (operations per second, OPS) of a typical application under different "warm-up periods" and compilation policies (each measurement runs for 60 s):

Warm-up period   -client   -server   -XX:+TieredCompilation
0 s              15.87     23.72     24.23
60 s             16.00     23.73     24.26
300 s            16.85     24.42     24.43

Even with a warm-up period of 0 seconds, the compiler still has a chance to optimize the code, because the measurement itself runs for 60 seconds.
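A crude way to see warm-up in action is to time the same batch of work repeatedly within one JVM run; later batches are usually faster once the hot method has been JIT-compiled. A minimal sketch (the names are my own, and the absolute timings depend entirely on your machine):

```java
public class WarmupDemo {
    // A small compute-bound method that becomes a hot spot.
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Time five identical batches; the first is typically the slowest
        // because it runs interpreted until the JIT kicks in.
        for (int batch = 1; batch <= 5; batch++) {
            long start = System.nanoTime();
            work(10_000_000);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("batch " + batch + ": " + elapsedMs + " ms");
        }
    }
}
```

No expected timings are shown on purpose: JIT heuristics and hardware make them nondeterministic.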

Several conclusions can be drawn from the data:

• For a typical application, the compiler compiles and optimizes the code quickly; increasing the warm-up period substantially (for example, from 60 s to 300 s) makes little difference in throughput.
• The -server JIT compiler and tiered compilation perform significantly better than the -client JIT compiler.

Summary
  1. For long-running applications, always use the -server JIT compiler or tiered compilation.

Java and JIT Compiler Versions

The discussion above covered the client and server versions of the JIT compiler, but there are actually three variants:

                                    • 32-bit Client (-client)
                                    • 32-bit Server (-server)
                                    • 64-bit Server (-d64)

In a 32-bit JVM, up to two JIT compilers are available. In a 64-bit JVM, only one can be selected: -d64. (Strictly speaking, a 64-bit JVM also has two compilers, since tiered compilation uses both the client and server JIT compilers.)
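Whether the running JVM is 32-bit or 64-bit can be checked at run time from system properties; a small sketch (note that "sun.arch.data.model" is a HotSpot-specific property, so it may be absent on other VMs):

```java
public class BitnessCheck {
    public static void main(String[] args) {
        // HotSpot-specific: "32" or "64" depending on the JVM's data model.
        System.out.println("Data model: " + System.getProperty("sun.arch.data.model"));
        // Standard property: names the architecture (e.g. "amd64", "x86").
        System.out.println("OS arch:    " + System.getProperty("os.arch"));
    }
}
```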

Choosing between a 32-bit and a 64-bit JVM:
• If your OS is 32-bit, you must use a 32-bit JVM. If your OS is 64-bit, you can choose either a 32-bit or a 64-bit JVM.
• If your machine has less than 3 GB of memory, the 32-bit JVM tends to perform better: object references are only 32 bits wide (a variable pointing into memory occupies 32 bits), so operating on those references is faster.
• Disadvantages of the 32-bit version:
  • Usable memory is limited to less than 4 GB (less than 3 GB on some Windows versions, and less than 3.5 GB on some older Linux versions).
  • Operations on double and long values are slower than on a 64-bit JVM, because they cannot use the CPU's 64-bit registers.
• In general, if your program needs a modest amount of memory and does not perform many long and double operations, the 32-bit JVM is faster, with performance typically 5% to 20% better than the 64-bit JVM.

Relationship Between OS and Compiler Flags
JVM version       -client                  -server                  -d64
Linux 32-bit      32-bit client compiler   32-bit server compiler   Error
Linux 64-bit      64-bit server compiler   64-bit server compiler   64-bit server compiler
Mac OS X          64-bit server compiler   64-bit server compiler   64-bit server compiler
Solaris 32-bit    32-bit client compiler   32-bit server compiler   Error
Solaris 64-bit    32-bit client compiler   32-bit server compiler   64-bit server compiler
Windows 32-bit    32-bit client compiler   32-bit server compiler   Error
Windows 64-bit    64-bit server compiler   64-bit server compiler   64-bit server compiler
Relationship between OS and default compiler:

OS                                              Default JIT compiler
Windows, 32-bit, any number of CPUs             -client
Windows, 64-bit, any number of CPUs             -server
Mac OS X, any number of CPUs                    -server
Linux/Solaris, 32-bit, 1 CPU                    -client
Linux/Solaris, 32-bit, 2 or more CPUs           -server
Linux, 64-bit, any number of CPUs               -server
Solaris, 32-bit/64-bit overlay, 1 CPU           -client
Solaris, 32-bit/64-bit overlay, 2 or more CPUs  -server (32-bit mode)

The defaults reflect two observations:

• When a Java program runs on a 32-bit Windows machine, startup speed is usually what matters most, because the program is typically aimed at end users.
• When a Java program runs on a Unix/Linux system, it is usually a long-running server program, where the server JIT compiler's advantages are more pronounced.

Summary
  1. 32-bit and 64-bit JVMs support different JIT compilers.
  2. JIT compiler support varies with the operating system and architecture (32-bit/64-bit).
  3. Even when a specific JIT compiler is requested, the one actually used depends on the runtime platform.

JIT Compiler Optimization (Advanced)

For most scenarios, it is enough to choose the JIT compiler: -client, -server, or -XX:+TieredCompilation. For long-running applications, tiered compilation is the better choice; even for short-running applications, its performance is close to that of the client compiler.

In some cases, however, further tuning is needed.

Code Cache Optimization

After the JVM compiles code, the compiled code is stored in the code cache as native instructions. This cache has a size limit, and once the area fills up, the compiler can no longer compile any further Java bytecode.

If this area is set too small, program performance suffers, because the compiler stops translating Java bytecode into faster native code.

The effect is most common with tiered compilation, because under that policy the compiler initially behaves like the client compiler and compiles a large amount of Java bytecode. If the code cache is too small, the potential performance gain is not fully realized.

When the code cache fills up, the JVM emits a warning:

Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
Java HotSpot(TM) 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=

You can also check code cache usage by inspecting the compilation log.

Relationship between Java platform and default code cache size:

Java platform                                  Default size
32-bit client, Java 8                          32 MB
32-bit server with tiered compilation, Java 8  240 MB
64-bit server with tiered compilation, Java 8  240 MB
32-bit client, Java 7                          32 MB
32-bit server, Java 7                          32 MB
64-bit server, Java 7                          48 MB
64-bit server with tiered compilation, Java 7  96 MB

In Java 7, the default size is often insufficient, so increase it where necessary. There is, however, no good way to determine in advance how much code cache an application needs for best performance; all you can do is experiment until you find the best result.

The maximum code cache size is set with -XX:ReservedCodeCacheSize=N; by default, N is the value in the table above. The code cache is managed like other JVM memory regions and also has an initial-size flag, -XX:InitialCodeCacheSize=N, whose default depends on the compiler type and processor architecture. In practice, setting only the maximum size is sufficient.

Sizing the Code Cache

Is a larger code cache always better? Not necessarily. Once configured, the space is reserved by the JVM even if it is never actually used, so it cannot be used for anything else.

As mentioned earlier, a 32-bit JVM is limited to a 4 GB address space in total, which must hold the Java heap, the JVM's own code (including the native libraries and thread stacks it uses), memory the application allocates natively, and the code cache. From this perspective, bigger is not always better.

You can monitor the running program with the jconsole tool.

Reserved Memory and Allocated Memory

These are two important JVM concepts that apply to the code cache, the Java heap, and the JVM's other memory regions alike.

Summary
  1. The code cache limits how much Java bytecode the JVM can compile. It has a maximum size, and once that space is fully occupied, the JIT compiler stops compiling.
  2. With tiered compilation, the code cache can fill up quickly (especially on Java 7). In that case, raise the reserved space (the maximum size) with -XX:ReservedCodeCacheSize.

