Reproduced from: http://crmky.spaces.live.com/Blog/cns!8c989768db1a6b14!458.entry?sa=254330365

We all know there are two kinds of ByteBuffer, but what is the difference between them, and in which situations is each kind used more efficiently?

First, the differences between the two.

A non-direct ByteBuffer (HeapByteBuffer) is allocated on the Java heap and is garbage-collected directly by the JVM. You can think of it as a wrapper around a byte array, as the following pseudo code shows:

    class HeapByteBuffer extends ByteBuffer {
        byte[] content;                 // backing array on the Java heap
        int position, limit, capacity;
        ......
    }

A DirectByteBuffer allocates a block of memory outside the Java heap through JNI (even if the maximum heap size is specified with -Xmx at run time, it is still possible to instantiate DirectByteBuffers whose total size exceeds it). That memory block is not collected directly by the JVM; it is released only when the DirectByteBuffer wrapper object is reclaimed, as the following pseudo code shows:

    class DirectByteBuffer extends ByteBuffer {
        long address;                   // address of native memory outside the heap
        int position, limit, capacity;

        protected void finalize() throws Throwable {
            // Release the native memory block. This is for demonstration only;
            // the real DirectByteBuffer does not release memory via finalize().
            releaseAddress();
            ......
        }
        ......
    }

I believe most readers already understand the differences above. What other differences are there? Let's take a slightly deeper look at sun.nio.ch.IOUtil.java. Most channel classes, such as FileChannel and SocketChannel, talk to the outside world through this utility class. Its write method can be expressed roughly with the following pseudo code (the read method is similar, so I won't repeat it):

    int write(ByteBuffer src, ......) {
        if (src instanceof DirectBuffer)
            return writeFromNativeBuffer(src, ......);

        // Copy the heap buffer into a temporary direct buffer first
        ByteBuffer direct = getTemporaryDirectBuffer(src);
        writeFromNativeBuffer(direct, ......);
        updatePosition(src);
        releaseTemporaryDirectBuffer(direct);
    }

So a non-direct ByteBuffer is copied into a temporary DirectByteBuffer before sending or receiving, the actual I/O is performed on that temporary buffer, and finally the position of the original ByteBuffer is updated. What does this mean? Suppose we want to read a piece of data from the network and then send it back out. With a non-direct ByteBuffer the data flows like this:

    network --> temporary DirectByteBuffer --> application's non-direct ByteBuffer --> temporary DirectByteBuffer --> network

With a DirectByteBuffer the flow is:

    network --> application's DirectByteBuffer --> network

As you can see, besides the time spent constructing and destroying the temporary DirectByteBuffer, at least two memory copies are saved. So should DirectByteBuffer be used in all circumstances? No. For most applications the two memory copies are almost negligible, while constructing and destroying a DirectByteBuffer is relatively expensive. Moreover, the JVM implementation caches temporary DirectByteBuffers for some methods, which means that for those methods using a DirectByteBuffer only saves the two memory copies, not the construction and destruction cost. In Sun's implementation, write(ByteBuffer) and read(ByteBuffer) cache the temporary DirectByteBuffer, while write(ByteBuffer[]) and read(ByteBuffer[]) create a new temporary DirectByteBuffer every time.
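To make the two allocation paths concrete, here is a minimal, self-contained sketch (my own illustration, not from the original post; the file name and payload are arbitrary). It allocates both kinds of buffer and writes each through a FileChannel: with the heap buffer the JDK copies the bytes into a temporary direct buffer internally before the native write, while the direct buffer is handed to the native write as-is.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class BufferKindsDemo {
        public static void main(String[] args) throws IOException {
            byte[] payload = "hello, nio".getBytes(StandardCharsets.US_ASCII);

            // Heap (non-direct) buffer: backed by a byte[] on the Java heap.
            ByteBuffer heap = ByteBuffer.wrap(payload);

            // Direct buffer: native memory allocated outside the Java heap.
            ByteBuffer direct = ByteBuffer.allocateDirect(payload.length);
            direct.put(payload);
            direct.flip();

            System.out.println("heap.isDirect()   = " + heap.isDirect());   // false
            System.out.println("direct.isDirect() = " + direct.isDirect()); // true

            try (FileChannel ch = FileChannel.open(Paths.get("demo.bin"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                // Copied into a (cached) temporary direct buffer inside the JDK first.
                ch.write(heap);
                // Written straight from its native memory, no extra copy.
                ch.write(direct);
            }
        }
    }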
Based on these differences, I offer the following suggestions:

· If you are writing a small or medium-sized application (measured here by the number and size of the ByteBuffers it uses) and don't care about these gory details, choose the non-direct ByteBuffer.
· If using DirectByteBuffer does not bring the performance improvement you expected, fall back to the non-direct ByteBuffer.
· If there is no DirectByteBuffer pool, try not to use DirectByteBuffer, unless you are sure the buffer will live for a long time and interact frequently with the outside world.
· If you use non-direct ByteBuffers, the non-gathering write/read(ByteBuffer) may outperform the gathering write/read(ByteBuffer[]), because the temporary DirectByteBuffer used by gathering/scattering I/O is not cached.

Basically, using a non-direct ByteBuffer is always a safe choice, because the memory-copy overhead is negligible for most applications. However, I am working on a large-scale network concurrency framework, so I need to understand these details thoroughly and adjust the buffer class hierarchy accordingly (and let me complain once more: ByteBuffer cannot be extended by user code, which is a very, very annoying design).

Note: regarding "even if the maximum heap size of the Java Virtual Machine is specified with -Xmx at run time, it is still possible to instantiate DirectByteBuffers that exceed it": you can use -XX:MaxDirectMemorySize=<size> to limit the total amount of memory that DirectByteBuffer instances may use. For example, if -XX:MaxDirectMemorySize=1024 is specified, the total size of all live DirectByteBuffers in the process cannot exceed 1024 bytes.
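To illustrate the note about -XX:MaxDirectMemorySize, here is a small sketch (my own illustration, not from the original post; the class name and allocation size are arbitrary). Run it with, for example, java -XX:MaxDirectMemorySize=1m DirectLimitDemo: once the total size of live direct buffers reaches the limit, the next allocateDirect call fails with an OutOfMemoryError even though plenty of heap memory remains.

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectLimitDemo {
        public static void main(String[] args) {
            // Keep references so the direct buffers stay live and cannot be reclaimed.
            List<ByteBuffer> pinned = new ArrayList<>();
            try {
                while (true) {
                    // 256 KB of native memory per allocation; not counted against -Xmx.
                    pinned.add(ByteBuffer.allocateDirect(256 * 1024));
                }
            } catch (OutOfMemoryError e) {
                // With -XX:MaxDirectMemorySize=1m this happens after roughly 4 allocations.
                System.out.println("Live direct buffers: " + pinned.size());
                System.out.println("Hit the direct memory limit: " + e);
            }
        }
    }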