In the previous article, Aicken introduced the .NET platform's garbage collection mechanism and its impact on performance. This article continues by introducing another dark horse of the .NET platform: JIT.
JIT Mechanism Analysis
● Mechanism Analysis
Take C# as an example. C# code usually goes through two compilations before it runs. The first stage compiles the C# code into MSIL; the second stage compiles that IL into native code. The result of the first stage is the managed module; the result of the second stage is the native code that actually runs. As you can see, the MSIL produced in the first stage cannot be executed directly. It is also important to note that after JIT compiles a method's IL for the first time, it patches the method's memory address entry, so the next time the CLR needs to execute this method it jumps directly to that memory address without going through JIT again.
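As a small illustration of these two stages, consider the trivial method below (my own example, not from the article): the C# compiler turns it into a few MSIL instructions stored in the managed module, and only the first call turns that IL into native code.

using System;

class TwoStageDemo
{
    // Stage 1: the C# compiler emits MSIL for this method into the managed module.
    // In an optimized (Release) build the body is roughly:
    //     ldarg.0   // x
    //     ldarg.1   // y
    //     add
    //     ret
    // Stage 2: the JIT compiles that IL into native code the first time Add() is called;
    // afterwards the CLR jumps straight to the compiled native code.
    static int Add(int x, int y)
    {
        return x + y;
    }

    static void Main()
    {
        Console.WriteLine(Add(1, 2)); // first call triggers JIT compilation of Add()
        Console.WriteLine(Add(3, 4)); // second call reuses the native code
    }
}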
Take the Load() method as an example, and suppose Load() calls into the same type twice:
void Load()
{
    A.A1("first");
    A.A1("second");
}

static class A
{
    public static void A1(string str) { }
    public static void A2(string str) { }
    public static void A3(string str) { }
}
At run time, the operating system loads the appropriate runtime based on the header information in the managed module, and Load() is loaded. Because this is the first time Load() runs, JIT detects all the types referenced in Load(), uses the metadata to traverse all the methods defined in those types, and stores the entry addresses of those methods in a special table (think of it as a hashtable). Before a method has been JIT-compiled, its entry address points to a pre-compiled stub (PreJitStub) that triggers JIT compilation; from these addresses the corresponding method implementations can be found.
During initialization, therefore, each method entry in this table points to the JIT pre-compiled stub rather than to the method's real memory address; the stub is responsible for getting the method compiled into native code. Note that at this point nothing has been JIT-compiled yet, only the method table has been created!
Figure 2 relationship between the method table, method descriptors, and the pre-compiled stub (PreJitStub)
As shown in Figure 2, the MS Core Engine refers to mscoree.dll (the Microsoft .NET Runtime Execution Engine), a bridge DLL. Together with mscorwks.dll, it mainly performs the following work when a method is called for the first time:
1. Look up the types contained in the assembly and use the metadata to traverse the methods they contain.
2. Use the metadata to obtain the IL of the method being called.
3. Allocate memory.
4. Compile the IL into native code and save it in the memory allocated in step 3.
5. Change the method's address in the type's method table (the hashtable mentioned above) to the memory address allocated in step 3.
6. Jump to the native code and execute it. (A toy C# sketch of this stub-patching idea follows.)
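To make the stub-patching idea concrete, here is a toy C# analogy (a sketch of the idea only, not the CLR's real data structures): each slot in a "method table" initially holds a stub delegate that performs the expensive "compilation" once, overwrites the slot with the compiled delegate, and then invokes it, so later calls bypass the stub.

using System;
using System.Collections.Generic;

// Toy analogy of the PreJitStub mechanism, not how the CLR actually works:
// each slot starts as a stub that "compiles" the method once, patches the
// slot with the result, and then runs it.
class ToyMethodTable
{
    private readonly Dictionary<string, Action> _slots = new Dictionary<string, Action>();

    public void Register(string name, Func<Action> compile)
    {
        // The stub: compile on first call, then overwrite the slot (the "patch").
        _slots[name] = () =>
        {
            Console.WriteLine("stub hit: compiling " + name);
            Action native = compile();   // simulate JIT compilation
            _slots[name] = native;       // later calls bypass the stub
            native();
        };
    }

    public void Call(string name)
    {
        _slots[name]();
    }
}

class Demo
{
    static void Main()
    {
        var table = new ToyMethodTable();
        table.Register("A1", () => () => Console.WriteLine("running A1 native code"));

        table.Call("A1"); // first call: goes through the stub, "compiles", patches the slot
        table.Call("A1"); // second call: goes straight to the "compiled" delegate
    }
}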
As the program runs for longer, more and more methods are compiled into native code, and the number of times JIT is invoked decreases.
The following uses WinDbg to confirm this process. The test source code can be downloaded here: http://files.cnblogs.com/isline/IsLine.JITTester.rar
namespace JITTester
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }

        private void go_Click(object sender, EventArgs e)
        {
            new A().A1();
            lb_msg.Text = "The call is complete!";
        }
    }

    class A
    {
        public void A1() { }
        public C a2 = new C();
    }

    class B
    {
        public void B1() { }
        public void B2() { }
    }

    class C
    {
        public void C1() { }
        public void C2() { }
    }
}
Attach WinDbg to the running program and run the !name2ee command to look up type A across all loaded modules, for example:
Figure 3 viewing type information
Press Enter and pay attention to the highlighted area:
Figure 4 information of type A before JIT
The highlighted area shows that type A does not yet have a JIT-compiled entry address: even though the program is running and the assembly is loaded, the type has not been JIT-compiled, because the button has not been clicked yet. This reflects the idea of just-in-time, on-demand compilation.
Similarly, the !name2ee *!JITTester.B and !name2ee *!JITTester.C commands return the same result.
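For reference, the debugger commands used in this walkthrough look roughly like the following (assuming a .NET 2.0/3.5-era runtime, where the SOS extension is loaded alongside mscorwks.dll; on .NET 4 the module would be clr.dll, and the method table address is taken from the !name2ee output):

.loadby sos mscorwks
!name2ee *!JITTester.A
!dumpmt -md <method table address shown by name2ee>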
Now detach from the debuggee process, return to the program, and click "Go".
Figure 5 clicking Go
Then re-attach to the process. By now the program has called new A().A1(), so run !name2ee *!JITTester.A again and pay attention to the highlighted part.
Figure 6 information of type A after JIT
Compared with Figure 4, the method entry address in Figure 6 has changed to the memory address produced by JIT compilation, and the stub slot shown in Figure 2 has been replaced by an unconditional jump whose target is that address. This shows that, in most cases, JIT compiles a given piece of code only once.
Run the same command to view type B:
Figure 7 information of type B after JIT
This type has not been used yet, so it has not been JIT-compiled.
Now look at type C:
Figure 8 information of type C after JIT
Type C has been JIT-processed, because class A contains a field of type C (a2), so instantiating A also instantiated C.
This is the entire JIT process of a type.
● Performance Impact Analysis
Based on the above analysis, you can see that just-in-time compilation happens at run time. Does this affect performance? The answer is yes, but the overhead is worth it. As mentioned above, after JIT compiles a method's IL for the first time, the method's entry address is patched, so the next time the method needs to run, the CLR jumps directly to the compiled native code without going through JIT again.
1. The performance overhead caused by JIT is not significant.
2. JIT benefits from two classic principles of computer systems: the principle of locality and the 80/20 rule. The principle of locality says that a program tends to reuse the data and instructions it has used recently (temporal locality) and the data and instructions near those it is currently using (spatial locality) (paraphrased from memory, without changing the original meaning). The 80/20 rule says that a system spends roughly 80% of its time executing 20% of its code.
Based on these two principles, JIT can keep optimizing the code while the program runs, according to how it is actually executed; this is something that can only be done at run time.
3. JIT compiles only the code that is actually needed, not the whole assembly, which avoids unnecessary memory overhead.
4. JIT optimizes the IL for the actual runtime environment: the same IL running on different CPUs is compiled by JIT into different native code, each optimized for that particular CPU.
5. JIT can monitor how the code behaves at run time, recompile particular pieces of code, and keep optimizing the code as the program runs. (A small timing sketch follows this list.)
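As a rough illustration of points 1 and 2, the micro-benchmark below (my own sketch, not from the article; absolute numbers will vary by machine) times the first call to a method against a second call. The first call includes JIT compilation, so it is normally noticeably slower; the second call runs the already-compiled native code.

using System;
using System.Diagnostics;

class JitTimingDemo
{
    // A method with a bit of work so the timing is not pure noise.
    static double Work(int n)
    {
        double sum = 0;
        for (int i = 1; i <= n; i++)
            sum += Math.Sqrt(i);
        return sum;
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        Work(1000);                       // first call: includes JIT compilation of Work
        sw.Stop();
        Console.WriteLine("First call:  {0} ticks", sw.ElapsedTicks);

        sw.Restart();
        Work(1000);                       // second call: the native code is reused
        sw.Stop();
        Console.WriteLine("Second call: {0} ticks", sw.ElapsedTicks);
    }
}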
In addition, you can use ngen.exe to create a native image for a managed assembly; when the assembly runs, the native image is used automatically instead of JIT compilation. This sounds wonderful, but be prepared for the following:
1. When the framework version, CPU type, or operating system version changes, .NET falls back to the JIT mechanism.
2. The ngen.exe tool does not protect your IL: even when a native image is used, the CLR still needs the assembly's metadata and IL.
3. Native images work against the principle of locality mentioned above: the system loads the entire image file into memory, and may have to relocate the file to fix up memory address references.
4. Code generated by ngen.exe cannot be further optimized at run time, cannot access static resources directly, and the assembly cannot be shared between application domains.
Therefore, unless you are sure that your performance problem is caused by first-time compilation, do not generate native code manually. (One possible alternative is sketched below.)
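If measurement shows that first-call compilation really is the problem, one alternative to ngen.exe is to warm up the JIT at application startup. The sketch below is my own illustration, not something from the article; the type chosen and the warm-up policy are assumptions. It uses RuntimeHelpers.PrepareMethod to force JIT compilation of a type's methods ahead of their first use.

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static class JitWarmup
{
    // Force JIT compilation of every non-generic, non-abstract method declared on a type.
    public static void PrepareType(Type type)
    {
        foreach (MethodInfo m in type.GetMethods(
            BindingFlags.Public | BindingFlags.NonPublic |
            BindingFlags.Instance | BindingFlags.Static |
            BindingFlags.DeclaredOnly))
        {
            if (m.IsAbstract || m.ContainsGenericParameters)
                continue; // these cannot be prepared directly
            RuntimeHelpers.PrepareMethod(m.MethodHandle); // triggers JIT compilation now
        }
    }
}

class WarmupDemo
{
    static void Main()
    {
        JitWarmup.PrepareType(typeof(WarmupDemo)); // hypothetical warm-up at startup
        Console.WriteLine("Methods prepared.");
    }
}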
JIT is excellent: it not only compiles well, it also compiles only the code that is actually used, which saves memory and matters a great deal to the .NET platform. For systems running in B/S mode with high usage, the performance loss caused by JIT can basically be ignored: according to the principle of locality and the 80/20 rule, the commonly used modules get compiled early on, and only rarely used modules pay the cost of first-time compilation.
To be continued.
In the next article, Aicken will introduce the .NET exception mechanism, the string interning mechanism, and their performance implications.
I'm Aicken (Li Ming). Please stay tuned for my next article.
The ". Net Discovery series" is an article that explains the nature of the. NET platform. It now includes:
.NET Discovery series 7: In-depth understanding of the .NET garbage collection mechanism (garbage collection), released in the first second of the New Year
.NET Discovery series 5: Me JIT (a brief talk on .NET JIT, part 1)
.NET Discovery series 6: Me JIT (a brief talk on .NET JIT, part 2)
.NET Discovery series 3: In-depth understanding of the .NET garbage collection mechanism (I)
.NET Discovery series 4: In-depth understanding of the .NET garbage collection mechanism (II)
.NET Discovery series 1: Strings from beginner to master (part 1)
.NET Discovery series 2: Strings from beginner to master (part 2)