Heap memory is partitioned by object life cycle; dividing it into generations lets the JVM collect garbage more efficiently and allocate memory more effectively.
If memory cannot be allocated in the heap and the heap can no longer be expanded, an OutOfMemoryError exception will be thrown.
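For example, a minimal sketch (illustrative code, not from the original post): keep allocating 1 MB byte arrays while running with a small, fixed heap such as -Xmx16m, and the JVM eventually fails with java.lang.OutOfMemoryError: Java heap space.

import java.util.ArrayList;
import java.util.List;

public class HeapOom {
    public static void main(String[] args) {
        // Run with e.g. -Xmx16m so the heap cannot grow past 16 MB.
        List<byte[]> blocks = new ArrayList<>();
        while (true) {
            // Each iteration pins another 1 MB on the heap; once no block can be
            // allocated and the heap cannot expand, OutOfMemoryError is thrown.
            blocks.add(new byte[1024 * 1024]);
        }
    }
}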
http://blog.csdn.net/qq_17612199/article/details/52316719
Compare HeapByteBuffer and DirectByteBuffer. In principle the former allocates its buffer inside the Java heap, but when the data is actually flushed to a remote endpoint it is first copied into direct memory before the next step (and in detail it also passes through the OS kernel's buffers). In fact, the fastest way to send a static file is the OS-level sendfile call, which needs only a single kernel-side copy instead of copying back and forth. In the NIO world, many frameworks therefore work with DirectByteBuffer, so the allocated memory is no longer on the Java heap but on the C (native) heap. Performance tests show very fast network interaction this way; under heavy network traffic it is generally several times faster than HeapByteBuffer.
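As a rough sketch of the two paths just described (the host, port, and file name are illustrative assumptions, not code from the original post), FileChannel.transferTo exposes the sendfile-style route, while writing from a DirectByteBuffer keeps the buffer on the native heap:

import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class SendDemo {
    public static void main(String[] args) throws Exception {
        try (SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9000));
             FileChannel file = new FileInputStream("static.html").getChannel()) {

            // sendfile-style transfer: the kernel moves the file bytes to the socket
            // without copying them back and forth through the Java heap.
            file.transferTo(0, file.size(), socket);

            // DirectByteBuffer: the buffer lives on the C (native) heap, so writing it
            // skips the extra heap-to-native copy a HeapByteBuffer would need.
            ByteBuffer direct = ByteBuffer.allocateDirect(8 * 1024);
            direct.put("hello".getBytes());
            direct.flip();
            socket.write(direct);
        }
    }
}

In a real server, transferTo would be called in a loop, since a single call may transfer fewer bytes than requested.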
Direct memory is not part of the virtual machine's runtime data area, nor is it a memory region defined in the Java Virtual Machine specification, but it is used frequently and can also cause OutOfMemoryError, so we cover it here as well.
NIO (New Input/Output), added in JDK 1.4, introduced a channel- and buffer-based I/O approach. It can allocate off-heap memory directly through native libraries and then operate on that memory via a DirectByteBuffer object stored in the Java heap that acts as a reference to it. This can significantly improve performance in some scenarios because it avoids copying data back and forth between the Java heap and the native heap.
import sun.nio.ch.DirectBuffer;
import java.nio.ByteBuffer;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("Hello world!");
        // Allocate 1 MB of direct (off-heap) memory; watch the process memory in Task Manager.
        ByteBuffer bb = ByteBuffer.allocateDirect(1024 * 1024);
        Thread.sleep(10000);
        // Explicitly release the off-heap memory via the internal cleaner.
        ((DirectBuffer) bb).cleaner().clean();
        Thread.sleep(10000);
    }
}
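The example above frees its buffer explicitly, but direct memory can also run out. To see the OutOfMemoryError mentioned earlier, a minimal sketch (again illustrative, not from the original post) caps direct memory with -XX:MaxDirectMemorySize and allocates direct buffers in a loop:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectOom {
    public static void main(String[] args) {
        // Run with -XX:MaxDirectMemorySize=64m to cap off-heap allocations.
        List<ByteBuffer> buffers = new ArrayList<>();
        while (true) {
            // Each buffer takes 1 MB outside the Java heap; once the cap is hit the
            // JVM throws java.lang.OutOfMemoryError: Direct buffer memory.
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
        }
    }
}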
While the Main example above runs, the change in off-heap memory can be observed in Task Manager (the two sleeps leave time to watch the usage before and after the buffer is cleaned).
Now for the advantages and disadvantages of off-heap memory. Off-heap memory is, in effect, memory that is not under the control of the JVM. Compared with in-heap memory it has several advantages:
1. It reduces garbage collection work, since garbage collection pauses other work (although multithreaded or time-sliced collection may make the pause barely noticeable).
2. It speeds up copying. When heap data is flushed to a remote endpoint, it is first copied into direct memory (not heap memory) and then sent; allocating off-heap memory in the first place skips that extra copy.
Blessing and curse go together, though, and off-heap memory has downsides as well:
1. Off-heap memory is harder to monitor and control, so memory leaks are difficult to troubleshoot.
2. Off-heap memory is relatively unsuitable for storing very complex objects; simple objects or flattened (serialized) data are generally a better fit.
The benefits are summarized as:
1. Reduce GC Time
2. Can be shared between processes, reducing data copying between virtual machines
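As a sketch of point 2 (the file name and size are illustrative assumptions, not from the original post), a memory-mapped file gives two JVM processes a shared off-heap region: each maps the same file, the OS page cache backs the memory, and no per-process heap copy is needed.

import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SharedMap {
    public static void main(String[] args) throws Exception {
        try (FileChannel ch = FileChannel.open(Paths.get("shared.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // The mapped region lives outside the Java heap; another process mapping
            // the same file sees the same bytes without any copying between JVMs.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            map.putInt(0, 42);
            System.out.println(map.getInt(0));
        }
    }
}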