"Turn" netty that thing. (ii) buffer in Netty


"original" Https://github.com/code4craft/netty-learning/blob/master/posts/ch2-buffer.mdin the previous article, we briefly introduced the principle and structure of Netty, and the following articles we began to analyze the various modules of netty in detail. The Netty structure at the bottom is the buffer mechanism, this part is relatively independent, we start with buffer. What:buffer two or three things

A buffer, as Wikipedia explains it, is "a region of memory used to temporarily hold data while it is being moved from one place to another". It is a mechanism that mediates between synchronous and asynchronous operation and smooths out mismatched or unstable data transfer rates.

From this definition we know that wherever I/O is involved (especially I/O writes), there is basically a buffer. As far as Java is concerned, the familiar old I/O, the InputStream & OutputStream family of APIs, uses buffers internally almost everywhere. As our Java course teacher taught us, you must call OutputStream.flush() to make sure the data is actually written!
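
As a quick reminder of why that flush() call matters, here is a minimal sketch using the old I/O API (the file name demo.txt is just a placeholder):

    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class FlushExample {
        public static void main(String[] args) throws IOException {
            // BufferedOutputStream holds data in an in-memory buffer; nothing may
            // reach the file until the buffer fills up or flush() is called.
            try (OutputStream out = new BufferedOutputStream(new FileOutputStream("demo.txt"))) {
                out.write("hello buffer".getBytes());
                out.flush(); // force the buffered bytes to be written out
            } // close() also flushes, but relying on that implicitly is easy to forget
        }
    }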

In NIO, the buffer concept is encapsulated directly as an object, the most common of which is probably ByteBuffer. The usage changes accordingly: write data into the Buffer, flip() it, then read the data out. With that, the buffer concept became even more deeply rooted!
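
A minimal NIO sketch of that write / flip() / read cycle might look like this:

    import java.nio.ByteBuffer;

    public class FlipExample {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(16);
            buf.put((byte) 1);
            buf.put((byte) 2);

            buf.flip(); // switch from writing mode to reading mode

            while (buf.hasRemaining()) {
                System.out.println(buf.get());
            }
        }
    }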

The buffer in Netty is no exception. The difference is that Netty's buffer is designed specifically for network communication, so it is called ChannelBuffer (well, there is no real causal relationship there...). Below we talk about Netty's buffer in detail, and of course, when talking about Netty we must also cover its so-called "zero-copy-capable" mechanism.

When & Where: the TCP/IP protocol and buffers

TCP/IP is the mainstream network protocol today. It is a multi-layer protocol: the lowest layer is the physical layer, the topmost is the application layer (HTTP and so on), and Java application development usually only touches the layers from TCP upward, that is, the transport layer and the application layer. This is also Netty's main application scenario.

TCP transmission has one prominent characteristic: it slices the application layer's data items into bytes and then picks an appropriate number of bytes to transmit according to its own needs. What does "its own needs" mean? First, a TCP packet has a maximum length limit, so data items that are too large must be split up. Second, because TCP and the lower protocols attach header information, if the data items are too small the packet consists mostly of worthless headers, which is not cost-effective. Hence the Nagle algorithm, which collects a certain amount of small data and sends it as one packet (this behaviour is quite annoying for the HTTP protocol; in Netty it can be turned off with setOption("tcpNoDelay", true)).
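
As a rough illustration (not from the original article), disabling Nagle on a Netty 3 client bootstrap might look like this; a ServerBootstrap would use the "child." prefix for accepted channels:

    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ClientBootstrap;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

    public class NoDelayExample {
        public static void main(String[] args) {
            ClientBootstrap bootstrap = new ClientBootstrap(
                    new NioClientSocketChannelFactory(
                            Executors.newCachedThreadPool(),
                            Executors.newCachedThreadPool()));

            // Disable Nagle's algorithm so small writes are sent immediately.
            bootstrap.setOption("tcpNoDelay", true);

            // On a ServerBootstrap the option applies to accepted child channels:
            // serverBootstrap.setOption("child.tcpNoDelay", true);

            bootstrap.releaseExternalResources();
        }
    }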

That may sound a bit academic, so let's give an example:

When sending, we write in 3 parts like this ('|' represents the boundary between two buffers):

    +-----+-----+-----+
    | ABC | DEF | GHI |
    +-----+-----+-----+

When it is received, it may turn out to be:

    +----+-------+---+---+
    | AB | CDEFG | H | I |
    +----+-------+---+---+

Easy to understand, right? But what does this have to do with buffers? Don't worry, let's look at the next section.

Why: the layering idea behind the buffer

Let's go back to the messageReceived method from the previous article:

    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Send back the received message to the remote peer.
        transferredBytes.addAndGet(((ChannelBuffer) e.getMessage()).readableBytes());
        e.getChannel().write(e.getMessage());
    }

The default return value of MessageEvent.getMessage() here is a ChannelBuffer. We know that the "message" a business cares about is actually a complete message at the application level, while an ordinary buffer works at the transport layer and cannot be mapped directly onto a "message". So what exactly is this ChannelBuffer?

Looking at an official diagram, I think the answer is obvious:

As can be seen there, an HTTP message is split at the TCP layer into two ChannelBuffers, and these two buffers are meaningless to our upper-layer logic (HTTP processing). However, once the two ChannelBuffers are combined they become a meaningful HTTP message; the ChannelBuffer corresponding to that message is what can be called a "Message". The diagram uses the term "Virtual Buffer" for it, which is exactly the "zero-copy-capable byte buffer" mentioned earlier. Doesn't that suddenly feel enlightening!
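
To make the "Virtual Buffer" idea concrete, here is a small sketch (the HTTP fragments are made up for illustration) that combines two received fragments into one logical message without copying, using ChannelBuffers.wrappedBuffer:

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class VirtualBufferExample {
        public static void main(String[] args) {
            // Two fragments, as they might arrive from the TCP layer.
            ChannelBuffer part1 = ChannelBuffers.copiedBuffer("GET / HT".getBytes());
            ChannelBuffer part2 = ChannelBuffers.copiedBuffer("TP/1.1".getBytes());

            // wrappedBuffer does not copy the contents; it builds a composite view
            // over both fragments, so the upper layer sees one logical message.
            ChannelBuffer message = ChannelBuffers.wrappedBuffer(part1, part2);

            byte[] bytes = new byte[message.readableBytes()];
            message.getBytes(0, bytes);
            System.out.println(new String(bytes)); // GET / HTTP/1.1
        }
    }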

To summarize: the biggest difference between NIO's Buffer and Netty's ChannelBuffer is that the former is only a transport-layer buffer, while the latter combines the transport-layer buffer and the abstract, logical buffer. Extending this a little, NIO is merely a network transport framework, whereas Netty is a network application framework that covers both the network layer and the application layer.

Of course, in Netty, using a ChannelBuffer to represent a "Message" by default is just the more practical choice; what MessageEvent.getMessage() holds can also be a POJO, which is one level more abstract. We will talk about that later when we cover ChannelPipeline.

How: ChannelBuffer and its implementations in Netty

Well, we finally arrive at the code. The reason for being so verbose up front is that I believe, for the "zero-copy-capable rich byte buffer", understanding why it is needed matters more than understanding how it is implemented.

I suspect many friends are like me and love reading code by "following the vine": find an entry point and then trace the calls until everything is clear. Fortunately, ChannelBuffers (note the trailing s!) is exactly such a vine. It is the entry point for all ChannelBuffer implementation classes and provides many static utility methods for creating different buffers; by following it you can touch pretty much every ChannelBuffer implementation. First, a class diagram of the ChannelBuffer-related classes.
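
For example, a few of the static factory methods on ChannelBuffers look like this (a minimal sketch; the capacities are arbitrary):

    import java.nio.ByteOrder;

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class FactoryExample {
        public static void main(String[] args) {
            // Fixed-capacity heap buffer (big-endian by default).
            ChannelBuffer fixed = ChannelBuffers.buffer(256);

            // Explicit byte order and capacity.
            ChannelBuffer little = ChannelBuffers.buffer(ByteOrder.LITTLE_ENDIAN, 256);

            // Dynamically growing buffer.
            ChannelBuffer dynamic = ChannelBuffers.dynamicBuffer();

            // Wrap an existing byte[] without copying it.
            ChannelBuffer wrapped = ChannelBuffers.wrappedBuffer(new byte[] {1, 2, 3});

            System.out.println(fixed.capacity() + " " + little.order()
                    + " " + dynamic.writableBytes() + " " + wrapped.readableBytes());
        }
    }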

In addition, the WrappedChannelBuffer series also inherits from AbstractChannelBuffer; its diagram is shown further below.

readerIndex and writerIndex in ChannelBuffer

At first I thought Netty's ChannelBuffer was a wrapper around NIO's ByteBuffer. In fact it is not: ByteBuffer has simply been re-implemented.

Take the most commonly used HeapChannelBuffer as an example: the underlying storage is also a byte[]. Unlike ByteBuffer, it can be read and written at the same time, without needing flip() to switch between reading and writing. The core read/write code of ChannelBuffer lives in AbstractChannelBuffer, where two integers, readerIndex and writerIndex, point to the current read position and the current write position respectively, and readerIndex is never greater than writerIndex. Two snippets of code make this clearer:

    public void writeByte(int value) {
        setByte(writerIndex++, value);
    }

    public byte readByte() {
        if (readerIndex == writerIndex) {
            throw new IndexOutOfBoundsException("Readable byte limit exceeded: " + readerIndex);
        }
        return getByte(readerIndex++);
    }

    public int writableBytes() {
        return capacity() - writerIndex;
    }

    public int readableBytes() {
        return writerIndex - readerIndex;
    }

I find this very natural and easier to understand than a single position pointer plus flip(). AbstractChannelBuffer also has two corresponding mark pointers, markedReaderIndex and markedWriterIndex, with the same semantics as in NIO, so I won't go into them here.
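
A tiny usage sketch shows how reads and writes can interleave without flip():

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class IndexExample {
        public static void main(String[] args) {
            ChannelBuffer buf = ChannelBuffers.buffer(8);

            buf.writeByte(1);                        // advances writerIndex
            buf.writeByte(2);

            System.out.println(buf.readByte());      // advances readerIndex, prints 1

            buf.writeByte(3);                        // no flip() needed: reads and writes interleave
            System.out.println(buf.readableBytes()); // writerIndex - readerIndex = 2
        }
    }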

Byte order (endianness) and HeapChannelBuffer

When we create a buffer, we notice this method: public static ChannelBuffer buffer(ByteOrder endianness, int capacity); What does ByteOrder mean here?

This involves a very basic concept: byte order (ByteOrder / endianness). It specifies how a number that occupies more than one byte (an int, a long, and so on) is laid out in memory. BIG_ENDIAN (big endian) means the integer 12 is stored as the four bytes 0 0 0 12, while LITTLE_ENDIAN is the opposite. Programmers working in C/C++ are probably more familiar with this, while Javaers tend to be a bit rusty because Java manages memory for us. In network programming, however, different protocols may use different byte orders. Most protocols today still use big endian; see RFC 1700 for reference.

Knowing this, it is easy to see why there are both a BigEndianHeapChannelBuffer and a LittleEndianHeapChannelBuffer.
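
A small sketch makes the difference visible (the printed byte layouts assume the example value 12 from above):

    import java.nio.ByteOrder;

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class EndianExample {
        public static void main(String[] args) {
            ChannelBuffer big = ChannelBuffers.buffer(ByteOrder.BIG_ENDIAN, 4);
            ChannelBuffer little = ChannelBuffers.buffer(ByteOrder.LITTLE_ENDIAN, 4);

            big.writeInt(12);
            little.writeInt(12);

            // Big-endian:    0 0 0 12  (most significant byte first)
            // Little-endian: 12 0 0 0  (least significant byte first)
            for (int i = 0; i < 4; i++) {
                System.out.print(big.getByte(i) + " ");
            }
            System.out.println();
            for (int i = 0; i < 4; i++) {
                System.out.print(little.getByte(i) + " ");
            }
        }
    }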

DynamicChannelBuffer

DynamicChannelBuffer is a very convenient buffer. It is called dynamic because its length grows with the length of the content, so you can use it just like an ArrayList without caring about its capacity. The core of the automatic expansion is the ensureWritableBytes method, and the algorithm is simple: check the capacity before writing, and if it is insufficient, create a new buffer with double the capacity, just like ArrayList expansion. Here is a snippet (to keep the code easy to follow, I removed some boundary checks and kept only the main logic):

    public void writeByte(int value) {
        ensureWritableBytes(1);
        super.writeByte(value);
    }

    public void ensureWritableBytes(int minWritableBytes) {
        if (minWritableBytes <= writableBytes()) {
            return;
        }

        int newCapacity = capacity();
        int minNewCapacity = writerIndex() + minWritableBytes;
        while (newCapacity < minNewCapacity) {
            newCapacity <<= 1;
        }

        ChannelBuffer newBuffer = factory().getBuffer(order(), newCapacity);
        newBuffer.writeBytes(buffer, 0, writerIndex());
        buffer = newBuffer;
    }
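
In practice you simply create it via ChannelBuffers.dynamicBuffer and write without worrying about capacity; a minimal sketch:

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class DynamicExample {
        public static void main(String[] args) {
            // Initial estimated capacity is 4; the buffer grows as needed.
            ChannelBuffer buf = ChannelBuffers.dynamicBuffer(4);

            for (int i = 0; i < 100; i++) {
                buf.writeByte(i); // capacity is doubled automatically when exhausted
            }

            System.out.println(buf.readableBytes()); // 100
            System.out.println(buf.capacity());      // grown past the initial 4 (e.g. 128)
        }
    }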

CompositeChannelBuffer

CompositeChannelBuffer is a combination of several ChannelBuffers that can be read and written as a whole. The trick here is that CompositeChannelBuffer does not allocate new memory and copy the contents of all the ChannelBuffers into it; instead, it directly keeps references to all the ChannelBuffers and performs reads and writes inside the sub-buffers, thereby achieving "zero-copy-capable" behaviour. Here is a shortened version of the code:

    public class CompositeChannelBuffer {

        // components holds all the internal ChannelBuffers
        private ChannelBuffer[] components;
        // indices records the start position of each component within the whole CompositeChannelBuffer
        private int[] indices;
        // caches the componentId of the last read/write
        private int lastAccessedComponentId;

        public byte getByte(int index) {
            // map the position to the corresponding sub-buffer via the offsets recorded in indices
            int componentId = componentId(index);
            return components[componentId].getByte(index - indices[componentId]);
        }

        public void setByte(int index, int value) {
            int componentId = componentId(index);
            components[componentId].setByte(index - indices[componentId], value);
        }
    }

The algorithm for finding the componentId is not shown here; it is not difficult to implement yourself. It is worth mentioning that, because a ChannelBuffer is typically read and written sequentially, a sequential search (rather than a binary search) is used, and lastAccessedComponentId caches the last result. A rough sketch is given below.
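
For the curious, here is one way such a cached sequential lookup could be written; this is my own illustration following the field names above (it assumes indices[i] holds the start offset of component i), not Netty's actual code:

    // Sketch only: maps an absolute index to the id of the component containing it,
    // searching sequentially outward from the cached lastAccessedComponentId.
    private int componentId(int index) {
        int id = lastAccessedComponentId;

        if (index >= indices[id]) {
            // search forward from the cached component
            while (id < components.length - 1 && index >= indices[id + 1]) {
                id++;
            }
        } else {
            // search backward from the cached component
            while (id > 0 && index < indices[id]) {
                id--;
            }
        }

        lastAccessedComponentId = id;
        return id;
    }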

ByteBufferBackedChannelBuffer

I said earlier that ChannelBuffer is implemented from scratch; in fact that is only half true. ByteBufferBackedChannelBuffer is the class that wraps an NIO ByteBuffer, and it is used to implement buffers backed by off-heap memory (via NIO's DirectByteBuffer). Of course, it can also hold other ByteBuffer implementations. There is not much to say about the code itself.
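
A minimal sketch of creating such NIO-backed buffers through the ChannelBuffers entry point:

    import java.nio.ByteBuffer;

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class DirectExample {
        public static void main(String[] args) {
            // Allocates a direct (off-heap) NIO buffer and exposes it as a ChannelBuffer.
            ChannelBuffer direct = ChannelBuffers.directBuffer(64);
            System.out.println(direct.isDirect()); // true

            // An existing ByteBuffer can also be wrapped without copying.
            ChannelBuffer wrapped = ChannelBuffers.wrappedBuffer(ByteBuffer.allocateDirect(64));
            System.out.println(wrapped.isDirect()); // true
        }
    }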

WrappedChannelBuffer

The WrappedChannelBuffer series are classes that wrap an existing ChannelBuffer to provide a particular feature. The implementations are fairly simple, so instead of pasting code, here is a list of their functions.

  • SlicedChannelBuffer
    Entry: ChannelBuffer.slice(), ChannelBuffer.slice(int, int)
    Function: a view over part of a ChannelBuffer.
  • TruncatedChannelBuffer
    Entry: ChannelBuffer.slice(), ChannelBuffer.slice(int, int)
    Function: a part of a ChannelBuffer; can be understood as a SlicedChannelBuffer whose start position is 0.
  • DuplicatedChannelBuffer
    Entry: ChannelBuffer.duplicate()
    Function: uses the same storage as the original ChannelBuffer, but has its own indices.
  • ReadOnlyChannelBuffer
    Entry: ChannelBuffers.unmodifiableBuffer(ChannelBuffer)
    Function: read-only, you know.
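
A short usage sketch of these wrappers (the values are arbitrary):

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;

    public class WrapperExample {
        public static void main(String[] args) {
            ChannelBuffer buf = ChannelBuffers.wrappedBuffer(new byte[] {1, 2, 3, 4, 5});

            // A view over bytes 1..3; shares storage with buf.
            ChannelBuffer slice = buf.slice(1, 3);

            // Same storage as buf, but with its own readerIndex/writerIndex.
            ChannelBuffer duplicate = buf.duplicate();

            // Throws UnsupportedOperationException on any write attempt.
            ChannelBuffer readOnly = ChannelBuffers.unmodifiableBuffer(buf);

            System.out.println(slice.readableBytes() + " "
                    + duplicate.readableBytes() + " " + readOnly.readableBytes());
        }
    }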

As you can see, in terms of implementation, the buffer-related code in Netty 3.7 is relatively simple; there are not many places that burn brain cells.

Netty 4.0 and later are different. In 4.0, ChannelBuffer was renamed ByteBuf and became a separate buffer project, and in order to optimize performance, mechanisms such as a buffer pool were added, making it considerably more complex (though essentially not that different). Performance optimization is a very complicated matter; when studying the source code it is advisable to set those parts aside, unless you have a particular interest in the algorithms. For example, to optimize its maps, Netty 4.0 backported the roughly 6000-line ConcurrentHashMapV8 from Java 8; how does that make you feel...

Resources:

      • TCP/IP protocol: http://zh.wikipedia.org/zh-cn/TCP/IP%E5%8D%8F%E8%AE%AE
      • Data buffer: http://en.wikipedia.org/wiki/Data_buffer
      • Endianness: http://en.wikipedia.org/wiki/Endianness

"Turn" netty that thing. (ii) buffer in Netty
