In Java, a class such as ByteBuffer is often used when we want to perform lower-level operations on data, usually manipulating it in the form of raw bytes (byte). ByteBuffer provides two static factory methods:
public static ByteBuffer allocate(int capacity)
public static ByteBuffer allocateDirect(int capacity)
Why offer two different methods? This is related to Java's memory management. The first method allocates memory inside the JVM heap, while the second allocates memory outside the JVM, i.e. a system-level (native) allocation. When a Java program receives data from the outside, that data first arrives in system memory and is then copied from system memory into JVM memory for the program to use. With the second allocation method, this copy step is eliminated, so efficiency improves. However, a system-level allocation is considerably more time-consuming than a JVM heap allocation, so allocateDirect does not always yield the best overall performance. Below is a comparison of running times for the two allocation modes at different capacities:
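The distinction above can be seen directly in code. This is a minimal sketch (the class name BufferKinds and the 1024-byte capacity are arbitrary choices for illustration):

```java
import java.nio.ByteBuffer;

public class BufferKinds {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] inside the JVM heap.
        ByteBuffer heap = ByteBuffer.allocate(1024);
        // Direct buffer: memory allocated outside the JVM heap.
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);

        System.out.println(heap.isDirect());   // false
        System.out.println(direct.isDirect()); // true
        System.out.println(heap.hasArray());   // true: the backing array is accessible
        System.out.println(direct.hasArray()); // typically false: no accessible byte[]
    }
}
```

Both buffers are used through the same put/get API afterwards; only where the memory lives differs.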
As can be seen from the graph, when the amount of data being operated on is small, the two modes take essentially the same time, and the first method may sometimes even be faster; but when the amount of data is large, the second method far outperforms the first.
Another note:
The advantage of allocateDirect shows up when the Java program receives data from outside the JVM.
Testing shows that if you only manipulate byte arrays within the JVM, for example:
ByteBuffer.allocateDirect(...).put(byte[] ...)
is nowhere near as fast as
ByteBuffer.allocate(...).put(byte[] ...)
2013-01-11
ByteBuffer's allocate and allocateDirect