JVM Memory Management

I. Physical memory and virtual memory
1. Physical memory
(1) RAM
Physical memory is what we usually call RAM (Random Access Memory).
(2) Registers
In a computer there is also a storage unit called a register, which holds the intermediate results produced while the computing unit executes instructions (such as floating-point and integer values). The register size determines the maximum value that can be used in a single calculation.
(3) Address Bus
The address bus connects the processor to RAM, or the processor to a register. The width of the address bus determines how many bits the processor can fetch from a register or memory at a time, and it also determines the maximum address space the processor can address. For example, a 32-bit address bus can address the range 0x0000 0000 to 0xFFFF FFFF, which is 2^32 = 4,294,967,296 memory locations; each address references one byte, so a 32-bit bus width gives 4 GB of addressable memory. Generally, the address bus has the same width as the registers and RAM, because this makes data transfer simpler.
(4) Memory address
To run a program, memory must be requested from the operating system. The operating system manages memory per process: each process has an independent address space, and the address spaces do not overlap, that is, each process can only access its own memory space.
2. Virtual Memory
A process's memory space is logically independent, which is guaranteed by the operating system, but the underlying physical memory is not necessarily used by only one process. As programs grow more complex, physical memory can no longer meet their needs; this is where virtual memory comes in.
Virtual memory allows multiple processes to share physical memory. The sharing here is only space sharing; logically, the processes still cannot access each other's memory. Virtual memory not only lets processes share physical memory and improves memory utilization, it also expands the address space: a virtual memory address may be mapped to a block of physical memory, to a file, or to other addressable storage.
When a process is inactive, the operating system moves its data from physical memory to a disk file, the swap partition, so that the scarce physical memory is reserved for active programs. That is why, when we wake up a program that has not been used for a long time, the disk churns and there is a short pause while the operating system swaps the data on disk back into physical memory. This situation should be avoided: if the operating system frequently exchanges data between physical memory and disk, efficiency becomes very low, especially on Linux servers.
We should pay attention to swap activity on Linux. If the swap partition is used frequently, the system becomes very slow, which usually means that physical memory is seriously insufficient or that some program is not releasing memory in time.
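As a rough, Linux-specific illustration (the /proc/meminfo file and its SwapTotal/SwapFree fields are Linux conventions, not part of the JVM), the following sketch prints the current swap figures:

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class SwapCheck {
        public static void main(String[] args) throws Exception {
            // Each matching line has the form "SwapTotal: <kilobytes> kB".
            for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                if (line.startsWith("SwapTotal") || line.startsWith("SwapFree")) {
                    System.out.println(line);
                }
            }
        }
    }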

II. Kernel space and user space
A computer usually has a certain amount of addressable memory, for example a 4 GB address space, but a program cannot use all of it, because the address space is divided into kernel space and user space. A program can only use user-space memory. "Use" here refers to the memory space the program can request, not the address space the program actually touches.
Kernel space mainly refers to the operating system's own logic: program scheduling, virtual memory management, and access to hardware resources. The division into kernel space and user space aims at system security and stability, at the cost of some efficiency. System calls cross into kernel space. For example, during network transmission, data received from a remote host first arrives in kernel space and is then copied from kernel space into user space for the user program to use; each such system call therefore involves two memory-space switches. Many techniques have emerged to reduce this copying from kernel space to user space; for example, Linux provides the sendfile mechanism for file transmission.
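In Java, this kind of kernel-side transfer is exposed through java.nio.channels.FileChannel.transferTo(), which can use sendfile on Linux. A minimal sketch, with placeholder file names:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.nio.channels.FileChannel;

    public class ZeroCopyDemo {
        public static void main(String[] args) throws Exception {
            try (FileChannel in = new FileInputStream("source.dat").getChannel();
                 FileChannel out = new FileOutputStream("copy.dat").getChannel()) {
                // transferTo lets the kernel move the bytes directly where the OS
                // supports it, avoiding the extra copy into user space.
                in.transferTo(0, in.size(), out);
            }
        }
    }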
On a 32-bit Windows operating system, the default split between kernel space and user space is 2 GB kernel space and 2 GB user space; on 32-bit Linux, the default split is 1 GB kernel space and 3 GB user space.

III. Memory usage for Java components
1. Java heap
The Java heap is the memory area used to store Java objects. The heap is requested from the operating system once, when the JVM starts. Its size is controlled with the -Xmx and -Xms options: -Xmx sets the maximum heap size and -Xms sets the initial size. Once allocated, the maximum size is fixed; the JVM cannot request additional heap from the operating system when memory runs short, nor can it return idle space to the operating system.
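A minimal sketch of how these sizes can be observed from inside a program, assuming it is launched with, for example, java -Xms64m -Xmx256m HeapInfo:

    public class HeapInfo {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // maxMemory() roughly corresponds to -Xmx;
            // totalMemory() is the heap currently reserved (it starts near -Xms).
            System.out.println("max   = " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("total = " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("free  = " + rt.freeMemory() / (1024 * 1024) + " MB");
        }
    }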
Memory within the Java heap is managed by the JVM. Object creation is driven by the Java application, but the space an object occupies is released by the garbage collector that manages the heap. Depending on the garbage collection (GC) algorithm, the way memory is reclaimed and the timing of reclamation differ.
2. Threads
The entity through which the JVM runs the actual program is the thread, and threads need memory to store some necessary data. When each thread is created, the JVM creates a stack for it. The stack size varies by JVM implementation, commonly up to around 756 KB.
The space occupied by a thread is much smaller than the heap, but if there are many threads, the total memory used by thread stacks can be very large. Many applications size the number of threads they create based on the number of CPU cores; if more threads are runnable than there are processors to run them, efficiency usually suffers, and the result may be poorer performance and higher memory usage.
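The default stack size can be changed with the -Xss option (for example, java -Xss512k ...), and the Thread constructor also accepts a suggested stack size. A minimal sketch:

    public class StackSizeDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
            // The last argument is a *suggested* per-thread stack size in bytes;
            // some platforms ignore it.
            Thread t = new Thread(null, task, "small-stack-thread", 256 * 1024);
            t.start();
            t.join();
        }
    }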
3. Classes and Class Loaders
Classes and the class loaders that load them also require storage space. In the Sun JDK they are stored in the heap, in a region called the permanent generation (PermGen).
The JVM loads classes on demand: only the classes explicitly used by your application are loaded into memory. To check which classes the JVM loads, add -verbose:class to the startup parameters.
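For example (the class name is a placeholder):

    java -verbose:class MyApp

The JVM then prints a line for each class as it is loaded, which makes it easy to see how many classes an application really pulls in.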
In theory, the more Java classes are used, the more memory is occupied. Another situation is that the same class may be loaded repeatedly: normally the JVM loads a class into memory only once, but a custom class loader may load it again. If the PermGen area cannot unload stale classes, memory may leak in the PermGen area. In general, a class can be unloaded only when all of the following conditions are met:
(1) The Java heap contains no reference to the java.lang.ClassLoader object that represents the class loader.
(2) The Java heap contains no reference to any java.lang.Class object that represents a class loaded by that class loader.
(3) No object of any class loaded by that class loader is still alive (referenced) in the Java heap.
Note that the three default class loaders created by the JVM (Bootstrap ClassLoader, ExtClassLoader, and AppClassLoader) can never meet these conditions, so system classes (such as java.lang.String) and any application class loaded by the application class loader cannot be released at runtime.
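A rough sketch of the idea, assuming a hypothetical plugin.jar containing a class com.example.Plugin; once both the loader and everything it loaded become unreachable, the class is eligible for unloading (System.gc() is only a hint, so the outcome is not guaranteed):

    import java.lang.ref.WeakReference;
    import java.net.URL;
    import java.net.URLClassLoader;

    public class UnloadSketch {
        public static void main(String[] args) throws Exception {
            URLClassLoader loader = new URLClassLoader(new URL[] { new URL("file:plugin.jar") });
            Class<?> clazz = loader.loadClass("com.example.Plugin");   // hypothetical class
            WeakReference<Class<?>> ref = new WeakReference<>(clazz);

            // Drop every strong reference to the loader and the classes it loaded.
            clazz = null;
            loader.close();
            loader = null;

            System.gc();
            System.out.println("class unloaded? " + (ref.get() == null));
        }
    }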
4. NIO
Java 1.4 added a new I/O library (NIO), which introduces a new way of performing I/O based on channels and buffers. NIO can allocate memory with java.nio.ByteBuffer.allocateDirect(), which uses native memory rather than memory on the Java heap; each such allocation ultimately calls the operating system's malloc() function.
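A minimal sketch; the buffer size is arbitrary, and the total amount of direct memory can be capped with the -XX:MaxDirectMemorySize option:

    import java.nio.ByteBuffer;

    public class DirectBufferDemo {
        public static void main(String[] args) {
            // 16 MB allocated in native memory, outside the Java heap.
            ByteBuffer buf = ByteBuffer.allocateDirect(16 * 1024 * 1024);
            System.out.println("direct = " + buf.isDirect() + ", capacity = " + buf.capacity());
        }
    }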
5. JNI
JNI allows Java code and native code (such as C programs) to call each other; the memory used on the native side is known as native memory. In fact, the Java runtime itself relies on JNI code to implement class library features such as file operations, network I/O, and other system calls, so JNI also increases the native memory usage of a running Java program.
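On the Java side a native method is only declared; its body lives in a native library loaded with System.loadLibrary, and the memory it uses is native memory rather than Java heap. A minimal sketch, with a hypothetical library and method:

    public class NativeClock {
        static {
            // Expects something like libnativeclock.so (or nativeclock.dll) on the library path.
            System.loadLibrary("nativeclock");
        }

        // Implemented in C via JNI.
        public static native long currentTicks();

        public static void main(String[] args) {
            System.out.println("ticks = " + currentTicks());
        }
    }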

IV. JVM Memory Structure
The JVM divides memory according to the structure of its runtime data. When running a Java program, the JVM organizes the data it handles into several regions with different formats; collectively these are called the runtime data areas. They include the Java program's own data as well as the additional data the JVM needs in order to run the program, such as the pointer that records the instruction currently being executed (also known as the PC register). The Java Virtual Machine specification divides the runtime data into six areas.
1. PC register
Strictly speaking, the PC register is a data structure that stores the memory address of the instruction currently being executed. Java programs are executed by multiple threads, so execution cannot proceed in a single linear sequence: when multiple threads run in parallel and a thread is interrupted, the address at which it was executing must be saved, so that when the thread resumes it can continue from the instruction where it was interrupted.
2. Java stack
A Java stack is always associated with a thread: every time a thread is created, the JVM creates a corresponding Java stack for it. This Java stack contains multiple stack frames. A stack frame is associated with a method, and a new frame is created each time a method is invoked; each frame contains the method's local variables (variables defined in the method), an operand stack, and the method's return value.
Every time a method finishes, its stack frame pops its result off the operand stack as the method's return value, and the frame is discarded. The frame at the top of the Java stack is the currently active frame, that is, the method currently being executed, and the PC register points into it; only the local variables of the active frame can be used by its operand stack. When another method is called from within the current frame, a new frame corresponding to that method is created, pushed onto the top of the Java stack, and becomes the current active frame, and only its local variables can then be used. When all the instructions in that frame have been executed, it is removed from the Java stack, the previous frame becomes the active frame again, and the return value of the finished frame becomes an operand on the operand stack of the now-active frame. If the finished frame returned no value, the operand stack of the current frame is unchanged.
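Because every method call pushes a frame, unbounded recursion eventually exhausts the Java stack. A small sketch that counts roughly how many frames fit before a StackOverflowError (the number depends on -Xss and the JVM):

    public class FrameDepthDemo {
        static int depth = 0;

        static void recurse() {
            depth++;      // each call adds one stack frame
            recurse();
        }

        public static void main(String[] args) {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println("stack overflowed after about " + depth + " frames");
            }
        }
    }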
Because each Java stack corresponds to one Java thread, its data is not shared between threads, so we do not have to worry about its data consistency, and there are no synchronization lock issues.
3. Heap
The heap is where Java objects are stored; it is the core storage area through which the JVM manages Java objects. The heap is also what Java programmers care about most, because it is the storage area where our applications interact with memory most closely.
Every Java object stored in the heap carries the instance data of its class, including all non-static fields inherited from its parent classes.
The heap is shared by all Java threads, so access to it may need to be synchronized; methods and the fields they touch must be kept consistent.
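A minimal sketch of what that synchronization looks like when several threads update the same heap object:

    public class SharedCounter {
        private long count = 0;

        // The SharedCounter instance lives on the heap and is visible to every thread,
        // so updates must be synchronized to stay consistent.
        public synchronized void increment() { count++; }
        public synchronized long get() { return count; }

        public static void main(String[] args) throws InterruptedException {
            SharedCounter c = new SharedCounter();
            Runnable r = () -> { for (int i = 0; i < 100000; i++) c.increment(); };
            Thread t1 = new Thread(r);
            Thread t2 = new Thread(r);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(c.get());   // always 200000 because of the synchronization
        }
    }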
4. Method Area
The JVM method area stores class structure information. When a class file is parsed into the parts the JVM can recognize, those parts are loaded into this area as the class is loaded and kept in different data structures, including the constant pool, fields, method data, method bodies, and constructors; the private methods, instance initializers, and interface initializers of a class are also stored in this area.
In the Sun JDK the method area is itself part of the Java heap, namely the permanent generation. It is shared by all threads, and its size can be set with parameters.
5. Runtime constant pool
The JVM specification describes the runtime constant pool as the run-time representation of the constant_pool table in each class file. It contains several kinds of constants: numeric values known at compile time, and method and field references that are resolved at run time. The runtime constant pool plays a role similar to the symbol table of a conventional programming language, although it holds a wider range of data than a typical symbol table. Each runtime constant pool is allocated in the JVM method area; the constant pool for a class or interface is created when the JVM creates that class or interface.
The runtime constant pool is part of the method area, so its storage is subject to the method area's constraints; if a constant pool cannot be allocated, an OutOfMemoryError is thrown.
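One place where the constant pool is visible from ordinary Java code is string literals: identical literals are resolved through the pool to the same interned instance, while new String(...) always creates a separate heap object. A small sketch:

    public class ConstantPoolDemo {
        public static void main(String[] args) {
            String a = "hello";                // resolved via the class's constant pool
            String b = "hello";                // same pooled instance as a
            String c = new String("hello");    // a distinct object on the heap

            System.out.println(a == b);            // true
            System.out.println(a == c);            // false
            System.out.println(a == c.intern());   // true: intern() returns the pooled instance
        }
    }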
6. Native method stack
The native method stack is the space in which the JVM runs native methods. It is similar to the Java stack described earlier; because many native methods are implemented in C, it is often called the C stack. Besides the regular native methods contained in our own code, this area is also used when the JVM applies JIT compilation: some Java methods are recompiled into native code, and that compiled code usually uses this stack to track the execution state of the method.
The JVM specification places no strict requirements on this area; different JVM implementers are free to implement it as they see fit, but like the other storage areas it can throw OutOfMemoryError and StackOverflowError.

V. JVM Memory Allocation Policy
1. General memory allocation policies
In the operating system, there are three memory allocation policies:
(1) Static Memory Allocation
Static memory allocation means that the storage requirement of every piece of data can be determined while the program is being compiled, so each piece of data can be assigned a fixed memory location at compile time. This policy does not allow variable-size data structures (such as dynamically sized arrays) in the program code, nor nested or recursive structures, because they prevent the compiler from computing exact storage requirements.
(2) Stack Memory Allocation
Stack-based memory allocation, also known as dynamic storage allocation, is implemented with a run-time stack. In this scheme the program's data-area requirements are completely unknown at compile time and only become known at run time, but when a program module is entered at run time, the size of the data area that module needs must be known so that memory can be allocated for it. Stack-based memory allocation follows a first-in, last-out discipline.
(3) Heap Memory Allocation
Besides data whose storage can be determined at compile time and data whose size is known at a module's entry point, there is data whose size is known only when the program runs the corresponding code. For this case we need the heap allocation policy.
Among these memory allocation policies, heap allocation is obviously the most flexible, but it is also the most demanding for the operating system and the memory manager. In addition, because this dynamic allocation happens only while the program is running, its run-time efficiency is comparatively poor.
2. Memory Allocation in Java
JVM memory allocation mainly involves two areas: the heap and the stack.
(1) Stack
Java stack allocation is bound to threads: when we create a thread, the JVM creates a new Java stack for it, and the calls and returns of that thread's methods correspond to pushes and pops on the Java stack. When a thread invokes a Java method, the JVM pushes a new frame onto the thread's Java stack, and this frame naturally becomes the current frame. While the method executes, the frame is used to store parameters, local variables, intermediate results, and other data.
The stack mainly stores variables of the primitive types (int, short, long, byte, float, double, boolean, char) and object references. Access to it is faster than access to the heap, second only to registers, and stack data can be shared. The drawback is that the size and lifetime of data on the stack must be fixed, which limits flexibility.
(2) Heap
The Java heap is a run-time data area. Objects are created with instructions such as new, newarray, anewarray, and multianewarray, and program code does not need to release them explicitly; the heap relies on garbage collection. The advantage of the heap is that memory can be allocated dynamically at run time and lifetimes do not have to be declared in advance, because the heap allocates memory dynamically while the program runs and the Java garbage collector automatically reclaims data that is no longer used. The disadvantage is that, because memory must be allocated dynamically at run time, access is slower than the stack.
Each Java application corresponds to exactly one JVM instance, and each JVM instance has exactly one heap. All class instances and arrays created by the application at run time are stored in this heap and shared by all of the application's threads. Heap allocation in Java is initialized automatically: the storage for every object is allocated in the heap, but the reference to that object is allocated on the stack. In other words, creating an object allocates memory in both places: the memory allocated in the heap holds the object itself, while the memory allocated on the stack is just a pointer (reference) to the heap object.
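A tiny sketch of that distinction; the local variables below live in the current stack frame, while the array object they refer to lives on the heap:

    public class StackVsHeap {
        public static void main(String[] args) {
            int length = 3;                  // primitive local: stored in the stack frame
            int[] values = new int[length];  // 'values' is a reference in the stack frame;
                                             // the array object itself is allocated on the heap
            values[0] = 42;
            System.out.println(values.length + " elements, first = " + values[0]);
        }
    }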

VI. JVM memory reclamation policy
1. Static memory allocation and reclamation
In Java, static memory allocation means that the required memory can be determined when the program is compiled; the memory is allocated once when the program is loaded and does not change while the program runs, until execution ends. Local variables in Java methods, whether of primitive types (int, long, char, and so on) or object references, are allocated memory statically in this sense.
2. Dynamic memory allocation and reclamation
Dynamic allocation means that the size of the storage to be allocated is known only while the program is executing, not at compile time. Memory is allocated when an object is created, and it can be reclaimed only once the object is no longer referenced. Dynamic allocation and reclamation are associated with object data in Java, and reclamation is handled by the garbage collector.
3. How garbage is detected
The garbage collector must accomplish two things: correctly detect garbage objects, and release the memory space those objects occupy. Detecting garbage is the key problem for the collector: as soon as an object is no longer referenced by any other live object, it can be reclaimed.
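A small sketch that makes an object unreachable and then checks, through a WeakReference, whether the collector has reclaimed it (System.gc() is only a hint, so the result is not guaranteed):

    import java.lang.ref.WeakReference;

    public class ReachabilityDemo {
        public static void main(String[] args) {
            Object obj = new Object();
            WeakReference<Object> ref = new WeakReference<>(obj);

            obj = null;     // the object is no longer referenced by any live object
            System.gc();    // request, but do not force, a collection

            System.out.println("collected? " + (ref.get() == null));
        }
    }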
4. Generation-based garbage collection algorithm
The idea behind this algorithm is to group objects by how long they live, dividing them into a young generation and an old generation. Newly created objects are placed in the young generation; if an object is still alive after several collections, it is promoted to the old generation. The old generation is collected less often than the young generation, which reduces the number of objects scanned in each garbage collection and thus improves garbage collection efficiency.
The JVM divides the whole heap into the Young, Old, and Perm areas, which store objects of different ages.
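The generations can be sized with options such as -Xmn (young generation size), and they are visible through the standard management API. A minimal sketch that lists the memory pools the running JVM reports (the exact pool names depend on the collector and JVM version):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class GenerationDemo {
        public static void main(String[] args) {
            // Typical pools include an Eden space, a survivor space, an old/tenured
            // generation, and a permanent generation (or Metaspace on newer JVMs).
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage usage = pool.getUsage();
                if (usage != null) {
                    System.out.println(pool.getName() + " : " + usage.getUsed() / 1024 + " KB used");
                }
            }
        }
    }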

