Source: https://anturis.com/blog/java-virtual-machine-the-essential-guide/
Introduction:
A Java Virtual Machine (JVM) is the execution environment for Java applications. In the general sense, the JVM is an abstract computer defined by a specification: a strict set of instructions and a comprehensive memory model. The term can also refer to a concrete software implementation of that specification, or to a running instance of such an implementation. The main reference implementation of the JVM is HotSpot.
The JVM specification ensures that any implementation interprets bytecode in exactly the same way. A JVM can be implemented as a process, as a standalone Java operating system, or as a processor chip that executes bytecode directly. The most widely used JVMs are software implementations that run as processes on popular platforms (Windows, macOS, Linux, Solaris, etc.).
The JVM's architecture gives detailed control over what a Java application can do. The application runs in a sandbox, which ensures that it cannot access the local file system, processes, or the network without the appropriate permissions. If code is executed remotely, it should be signed with a certificate.
In addition to interpreting Java bytecode, most software implementations of the JVM include a just-in-time (JIT) compiler that generates machine code for frequently used methods. Machine code is the native language of the CPU and executes much faster than interpreted bytecode.
You do not need to understand how the Java Virtual Machine works to develop and run Java applications. However, some understanding of it will help you avoid many performance-related problems.
Architecture:
The JVM specification defines the subsystems and their external behavior. The Java Virtual Machine has the following major subsystems:
Class loader: responsible for loading compiled class files (bytecode) into the data areas.
Execution engine: responsible for executing instructions from the data areas.
Data areas: occupy the memory that the JVM allocates from the underlying operating system.
Class Loader
The JVM uses different class loaders, organized in the following hierarchy:
(1) The bootstrap class loader is the parent of the other class loaders. It loads the core Java libraries and is the only class loader written in native code.
(2) The extension class loader is a child of the bootstrap class loader. It loads the extension libraries.
(3) The system class loader is a child of the extension class loader. It loads the application class files found on the classpath.
(4) A user-defined class loader is a child of the system class loader or of another user-defined class loader.
When a class loader receives a request to load a class, it first checks its cache to see whether the class has already been loaded, then delegates the request to its parent. Only if the parent fails to load the class does the child try to load it itself. A child class loader can check the cache of its parent class loader, but the parent cannot see classes loaded by the child. The design works this way because a child class loader should be able to see classes loaded by its parent, but not the other way around.
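A small sketch of how this hierarchy can be observed from code (the class name ClassLoaderDemo is made up for illustration; on JDK 9 and later the extension class loader has been replaced by a platform class loader, so the printed names may differ):

    public class ClassLoaderDemo {
        public static void main(String[] args) {
            // The loader that loaded this application class (the system class loader)
            ClassLoader loader = ClassLoaderDemo.class.getClassLoader();

            // Walk up the parent chain; the bootstrap loader is represented by null
            while (loader != null) {
                System.out.println(loader);
                loader = loader.getParent();
            }
            System.out.println("(bootstrap class loader)");

            // Core library classes are loaded by the bootstrap loader,
            // so getClassLoader() returns null for them
            System.out.println(String.class.getClassLoader());
        }
    }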
Execution Engine
The execution engine executes the bytecode that has been loaded into the data areas, one instruction at a time. To turn bytecode instructions into machine code the CPU can run, the execution engine uses one of two approaches:
Interpretation: the execution engine translates each instruction into machine code as it is encountered.
Just-in-time (JIT) compilation: if a method is used frequently enough, the execution engine compiles it to native code and stores it in a cache. From then on, all instructions associated with that method are executed directly, without interpretation.
Although JIT compilation takes more time than interpreting a single instruction, it happens only once for a method that may be called thousands of times. Running such a method as native code saves a great deal of execution time compared with interpreting each of its instructions on every call.
JIT compilation is not a requirement of the JVM specification, and it is not the only technique used to improve JVM performance. The specification defines only the bytecode instructions and what they must do; it is left to each implementation to define how the execution engine converts bytecode to native code.
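As a rough illustration, a method that is called many times in a loop will typically be picked up by the JIT compiler. The class below (the name JitDemo is made up) can be started with the -XX:+PrintCompilation option to watch HotSpot report which methods it compiles; the exact output depends on the JVM:

    public class JitDemo {
        // A small method that becomes "hot" because it is called many times
        static long sumTo(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            long result = 0;
            // Repeated calls give the execution engine a reason to compile sumTo to native code
            for (int i = 0; i < 100_000; i++) {
                result += sumTo(1_000);
            }
            System.out.println(result);
        }
    }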
Memory Model
The Java memory model is built on the concept of automatic memory management. When an object is no longer referenced by the application, the garbage collector discards it and frees the memory. This is different from many other programming languages, where you have to manually free objects from memory.
The JVM allocates memory from the underlying operating system and divides it into the following areas:
Heap space: the shared memory area used to store objects; it is scanned by the garbage collector.
Method area: formerly known as the permanent generation, this is where loaded classes are stored. It has recently been removed from the JVM, and classes are now loaded as metadata into the native memory of the underlying operating system.
Native area: stores references and variables of primitive types.
The heap is divided into a young generation and an old generation. Dividing the heap into generations makes memory management efficient, because the garbage collector does not need to scan the entire heap. Most objects live for a very short time, and those that survive longer often do not need to be discarded until the application terminates.
When a Java application creates an object, it is stored in the eden pool of the heap. Once the eden pool is full, a minor garbage collection is triggered there. First, the garbage collector marks dead objects (those no longer referenced by the application) and increments the age of live objects (the age is the number of garbage collections the object has survived); then it discards the dead objects and moves the live objects to the survivor pool, leaving the eden pool empty.
When a surviving object reaches a certain age, it is moved to the old generation of the heap: the tenured pool. Eventually the tenured pool fills up, and a major garbage collection is triggered to clean it up.
When a garbage collection occurs, all application threads are stopped, causing a pause. Minor garbage collections are frequent, but are optimized to quickly remove dead objects, which make up the major part of the young generation. Major garbage collections are much slower, because they involve mostly live objects. There are different kinds of garbage collectors; some may perform major garbage collections faster in certain cases.
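A minimal sketch of an object becoming unreachable and therefore eligible for collection (the class name GcDemo is made up; System.gc() is only a hint, so the result is not guaranteed):

    import java.lang.ref.WeakReference;

    public class GcDemo {
        public static void main(String[] args) {
            Object payload = new Object();                   // allocated on the heap (eden pool)
            WeakReference<Object> ref = new WeakReference<>(payload);

            payload = null;                                  // drop the strong reference; the object is now "dead"
            System.gc();                                     // suggest a collection; the JVM may ignore it

            // If the collector ran, the weak reference has been cleared
            System.out.println(ref.get() == null ? "collected" : "still reachable");
        }
    }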
The size of the heap is dynamic; memory is allocated to the heap only as it is required. When the heap fills up, the JVM reallocates more memory until the configured maximum is reached. Each reallocation also causes the application to stop briefly.
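The dynamic growth of the heap can be observed through the Runtime API. In this made-up example (class name HeapDemo), the current heap size usually grows toward the maximum as objects are allocated, assuming the default maximum heap is larger than the roughly 64 MB allocated here:

    public class HeapDemo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap (MB):     " + rt.maxMemory() / (1024 * 1024));   // upper bound (-Xmx)
            System.out.println("current heap (MB): " + rt.totalMemory() / (1024 * 1024)); // allocated so far

            // Allocate some objects so the JVM has to grow the heap toward the maximum
            byte[][] blocks = new byte[64][];
            for (int i = 0; i < blocks.length; i++) {
                blocks[i] = new byte[1024 * 1024]; // 1 MB each
            }

            System.out.println("current heap (MB): " + rt.totalMemory() / (1024 * 1024));
        }
    }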
Threads
The JVM runs in a single process, but it can execute several threads concurrently, each running its own method. This is an essential part of Java. An instant-messaging client, for example, runs at least two threads: one waiting for user input and one checking for messages from the server. Another example is a server application that handles each request in a separate thread; sometimes a single request may even involve several threads running at the same time.
All threads share the memory and other resources of the JVM process. Each JVM process starts with a main thread at its entry point (the main() method); other threads are started from it and follow their own independent paths of execution. Threads can run in parallel on separate processors, or they can share a single processor, in which case the thread scheduler controls how they take turns executing.
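A minimal sketch of the main thread starting a second thread (the class and thread names are made up for illustration):

    public class ThreadDemo {
        public static void main(String[] args) throws InterruptedException {
            // main() itself runs on the "main" thread, the entry point of the process
            System.out.println("entry point: " + Thread.currentThread().getName());

            // A second thread started from main; both threads share the same heap
            Thread worker = new Thread(
                    () -> System.out.println("running on: " + Thread.currentThread().getName()),
                    "worker-1");
            worker.start();
            worker.join(); // wait for the worker to finish before the process exits
        }
    }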
Performance optimization
The performance of the JVM depends on how well it is configured to match the needs of the application. Although memory is managed automatically through garbage collection and memory reallocation, you have control over how often these processes run. In general, the more memory available to your application, the less often memory management has to run, and the less often your application is paused.
If garbage collections are happening more often than you would like, start the JVM with a larger maximum heap size: the longer it takes a generation to fill up, the fewer garbage collections occur. To configure the maximum heap size, use the -Xmx option when starting the JVM. By default, the maximum heap size is set to 1/4 of the physical memory available to the operating system, or to 1 GB, whichever is smaller.
If the problem is memory reallocation, you can set the initial heap size to the same value as the maximum. This means the JVM will never need to allocate more memory for the heap. However, you will also lose the adaptive memory optimization that dynamic heap sizing provides: the heap will have a fixed size from the moment you start the application. To configure the initial heap size, use the -Xms option when starting the JVM. By default, the initial heap size is set to 1/64 of the physical memory available to the operating system, or to some reasonable platform-dependent minimum, whichever is larger.
If you know which garbage collections (major or minor) are causing performance problems, you can set the ratio between the young and old generations without changing the overall heap size. For applications that create many short-lived objects, increase the size of the young generation (this leaves less memory for old objects); for applications that keep many long-lived objects, increase the size of the old generation (by setting aside less memory for the young generation). The following options control the size of the young generation; a combined example follows the list.
(1) Specify the ratio between the young and old generations with the -XX:NewRatio option when you start the JVM. For example, to make the old generation five times larger than the young generation, specify -XX:NewRatio=5. By default, the ratio is 2 (the old generation occupies 2/3 of the heap and the young generation occupies 1/3).
(2) Specify the initial and maximum size of the young generation with the -Xmn option when you start the JVM. The old generation is then set to whatever memory remains in the heap.
(3) Specify the initial and maximum size of the young generation separately with the -XX:NewSize and -XX:MaxNewSize options when you start the JVM. The old generation is then set to whatever memory remains in the heap.
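As an illustration, the options actually passed to the JVM can be inspected at run time through the management API; the class name StartupFlags and the option values are made up for this example. If the program were started with, say, java -Xms256m -Xmx1g -XX:NewRatio=3 StartupFlags, it would print those options back:

    import java.lang.management.ManagementFactory;

    public class StartupFlags {
        public static void main(String[] args) {
            // The options the JVM was started with, e.g. [-Xms256m, -Xmx1g, -XX:NewRatio=3]
            System.out.println(ManagementFactory.getRuntimeMXBean().getInputArguments());

            // The maximum heap the JVM will try to use; reflects -Xmx
            System.out.println("max heap (MB): " + Runtime.getRuntime().maxMemory() / (1024 * 1024));
        }
    }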
Most applications (especially servers) perform multiple tasks concurrently. Some of these tasks are more important at a given moment, while others can be done in the background whenever the CPU is not busy with something else. For example, a server might use a low-priority background thread to compute statistics over its data, while a higher-priority thread processes incoming data and another high-priority thread serves requests for some of the computed results. There may be many sources of data and many clients requesting it. Each request briefly pauses the execution of the background computation thread, so you must monitor the number of running threads and make sure the necessary threads get enough CPU time.
Each thread has a stack that holds its method calls, return addresses, and so on. Some memory is allocated for each stack, and if there are too many threads, this can cause an OutOfMemory error: even if you have enough heap memory for your objects, the application may be unable to start a new thread. In this case, consider limiting the maximum stack size of your threads. To configure the thread stack size, use the -Xss option when starting the JVM. By default, the thread stack size depends on the platform.
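A small sketch of a low-priority background thread and of a thread created with an explicit stack size (class and thread names are made up; both the priority and the per-thread stack size are only hints that the JVM and the operating system may adjust):

    public class PriorityDemo {
        public static void main(String[] args) {
            // Low-priority daemon thread doing background work; the scheduler may give it less CPU time
            Thread background = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    // compute statistics in the background
                }
            }, "stats");
            background.setPriority(Thread.MIN_PRIORITY);
            background.setDaemon(true);
            background.start();

            // A thread can also be given an explicit stack size (last constructor argument),
            // similar in spirit to limiting the default with -Xss
            Thread smallStack = new Thread(null,
                    () -> System.out.println("running with a small stack"),
                    "small-stack", 64 * 1024);
            smallStack.start();
        }
    }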
Performance monitoring
Whether you are developing or running a Java application, it is important to monitor the performance of the JVM. Configuring the JVM is not a one-time task, especially when you are dealing with a server running on Java. You have to continuously check the allocation and usage of both heap and non-heap memory, the number of threads the application creates, and the number of classes loaded into memory. These are the core parameters.
Using the Anturis Console, you can set up monitoring of your infrastructure by adding a Java Virtual Machine monitor to any hardware component that runs a JVM, such as a computer running a Tomcat web server.
The Java Virtual Machine monitor can measure the following metrics (a code sketch for reading similar figures from inside the JVM follows the list):
(1) Total memory usage (MB) is the total amount of memory the JVM consumes. If the JVM consumes all available memory, this metric will affect the overall performance of the underlying operating system.
(2) Heap memory usage (MB) is the amount of memory the JVM allocates for objects used by the running Java application. Unused objects are periodically removed from the heap by the garbage collector. If this metric keeps growing, it may indicate that your application is not removing references to unused objects, or that you need to configure the garbage collector properly.
(3) Non-heap memory usage (MB) is the memory allocated for the method area and the code cache. The method area is used to store references to loaded classes. If these references are not removed properly, the permanent generation pool can grow every time the application is redeployed, leading to a non-heap memory leak. It can also indicate a thread-creation leak.
(4) Total pool memory usage (MB) is all the memory in the various memory pools allocated by the JVM (that is, the total memory excluding the code cache). This gives you a picture of how much memory your application consumes.
(5) Threads is the number of active threads in the JVM. For example, each request to a Tomcat server is processed in a separate thread, so this metric gives an idea of how many requests are currently being served, and whether the background tasks set to run on lower-priority threads are affected.
(6) Classes is the number of loaded classes. If your application dynamically creates a large number of classes, this can be a source of a serious memory leak.
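These metrics can also be read directly from inside the JVM through the java.lang.management API. A minimal sketch (the class name JvmMetrics is made up) that prints roughly the same figures a monitoring console would chart:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class JvmMetrics {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

            // Heap and non-heap usage in MB
            System.out.println("heap used (MB):     " + memory.getHeapMemoryUsage().getUsed() / (1024 * 1024));
            System.out.println("non-heap used (MB): " + memory.getNonHeapMemoryUsage().getUsed() / (1024 * 1024));

            // Live thread count and number of currently loaded classes
            System.out.println("threads:        " + ManagementFactory.getThreadMXBean().getThreadCount());
            System.out.println("classes loaded: " + ManagementFactory.getClassLoadingMXBean().getLoadedClassCount());
        }
    }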
That's it. Having written this far, I can't help feeling that my translation is rather poor; better to go read the original!
Java Virtual Machine: The Essential Guide (translation)