An Analysis of Netty's Implementation Principles

Translated from http://www.importnew.com/15656.html

Netty is an efficient Java NIO framework from JBoss; for an introduction to using it, see my other article, "Getting Started with Netty". This article analyzes how Netty is implemented. Because my time was limited, I have not studied the source code in great detail, so please point out (and forgive) any errors or inaccuracies in what follows. For Netty users, the project ships with several typical examples along with detailed API docs and a guide; some of the content and illustrations in this article also come from Netty's own documentation.

1. Overall structure

Let us start with a diagram of Netty's overall structure. The analysis below focuses on the core features shown in that figure; optional advanced features such as container integration and security support are not covered here.

2. Network model

Netty has a typical reactor-pattern structure. For a detailed treatment of the reactor pattern, see POSA2; I will not explain the concept here. Doug Lea (yes, the much-admired Doug Lea) gives an excellent explanation of building the reactor pattern with Java NIO in "Scalable IO in Java". Below are the classic illustrations of typical reactor implementations taken from his slides:

1) The simplest model: a single reactor running in a single thread. The reactor thread does everything: it multiplexes sockets, accepts new connections, and dispatches requests to the handler chain. This model suits scenarios where the business logic in the handler chain completes quickly. However, a single thread cannot take full advantage of multicore machines, so this model is rarely used in practice.

2) Compared with the previous model, this one runs the handler chain on multiple threads (a thread pool). It is a common model for back-end programs.

3) The third model splits the reactor into two parts. The main reactor listens on the server socket and accepts new connections, then hands each established socket off to a sub-reactor. The sub-reactor multiplexes the connected sockets and reads and writes the network data; business processing is handed to a worker thread pool. Typically, the number of sub-reactors equals the number of CPUs.

Having covered the three forms of the reactor model, which one does Netty use? Netty actually uses a variant of the reactor model: the third model with the worker thread pool removed. This is also the default mode of Netty's NIO transport. In the implementation, Netty's Boss class acts as the main reactor and the NioWorker class as the sub-reactor (the default number of NioWorker instances is Runtime.getRuntime().availableProcessors()). When a request arrives, the NioWorker reads the received data into a ChannelBuffer and then triggers the ChannelHandler chain in the ChannelPipeline.
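To make the sub-reactor's job concrete, here is a minimal single-threaded reactor sketch using only java.nio (an illustrative toy, not Netty code; the class and method names are my own): one Selector multiplexes the accepting server socket and all connected sockets, echoing back whatever it reads.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// A toy single-threaded reactor: one Selector multiplexes the accepting
// socket and all connected sockets, echoing back whatever it reads.
public class MiniReactor implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;

    public MiniReactor(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() {
        return server.socket().getLocalPort();
    }

    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select(200);
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel ch = server.accept(); // new connection
                        if (ch != null) {
                            ch.configureBlocking(false);
                            ch.register(selector, SelectionKey.OP_READ);
                        }
                    } else if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        int n = ch.read(buf);
                        if (n < 0) { ch.close(); continue; }
                        buf.flip();
                        while (buf.hasRemaining()) {
                            ch.write(buf); // echo the data back
                        }
                    }
                }
            }
        } catch (IOException ignored) {
            // toy example: shut down silently on I/O errors
        }
    }
}
```

In Netty's third-model variant, the accept role and the read/write role are split between Boss and NioWorker threads, but each NioWorker loop looks conceptually like the select loop above.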

Netty is event-driven, and the flow of execution can be controlled through the ChannelHandler chain. Because the ChannelHandler chain executes synchronously inside the sub-reactor, a long-running business handler severely limits the number of concurrent connections the server can support. This model suits applications like memcached, but it is not appropriate for systems that access databases or make blocking calls to other modules. Netty's extensibility is very good, though: if you need the ChannelHandler chain to run on a thread pool, you can simply add Netty's built-in ChannelHandler implementation, ExecutionHandler, to the ChannelPipeline; for the user this is only one more line of code. For the thread pool model that ExecutionHandler requires, Netty provides two options:

1) MemoryAwareThreadPoolExecutor limits the total amount of pending work in the executor (subsequent submissions block once the limit is exceeded), and can also limit the pending work of a single channel.

2) OrderedMemoryAwareThreadPoolExecutor, a subclass of MemoryAwareThreadPoolExecutor, additionally guarantees that the events of the same channel are processed in order, preventing the out-of-order processing that asynchronous execution can otherwise cause; it does not, however, guarantee that all events of a channel are executed by the same thread (which is usually unnecessary).

In general, OrderedMemoryAwareThreadPoolExecutor is a very good choice; of course, you can also roll your own if needed.
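The backpressure idea behind MemoryAwareThreadPoolExecutor can be sketched with nothing but the JDK: a semaphore caps the number of pending tasks, so submitters block once the cap is reached. (BoundedExecutor and its task-count limit are my own illustrative names, not Netty code; Netty's real executors bound the pending work by memory size, not task count.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch of bounded-pending-work submission: acquire a slot before queuing
// a task, release it when the task finishes. Submitters block at the cap.
public class BoundedExecutor {
    private final ExecutorService pool;
    private final Semaphore slots;

    public BoundedExecutor(int threads, int maxPending) {
        this.pool = Executors.newFixedThreadPool(threads);
        this.slots = new Semaphore(maxPending);
    }

    public void execute(final Runnable task) throws InterruptedException {
        slots.acquire(); // blocks when maxPending tasks are already in flight
        try {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        task.run();
                    } finally {
                        slots.release();
                    }
                }
            });
        } catch (RuntimeException e) {
            slots.release(); // a rejected submission must not leak a slot
            throw e;
        }
    }

    public boolean shutdownAndWait(long seconds) throws InterruptedException {
        pool.shutdown();
        return pool.awaitTermination(seconds, TimeUnit.SECONDS);
    }
}
```

The blocking acquire is what protects the server from out-of-memory when producers outrun the business threads, which is exactly the failure mode the paragraph above warns about.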

3. Buffer

The structure of the interfaces and classes in the org.jboss.netty.buffer package is shown below:

The core interfaces of this package are ChannelBuffer and ChannelBufferFactory, briefly described below.

Netty uses ChannelBuffer to store and manipulate the network data it reads and writes. Besides methods similar to ByteBuffer's, ChannelBuffer provides a number of practical convenience methods; see its API documentation for details. ChannelBuffer has several implementation classes; the main ones are:

1) HeapChannelBuffer: this is the default ChannelBuffer Netty uses when reading network data. Here "heap" means the Java heap: reading data from a SocketChannel goes through a ByteBuffer, and a heap ByteBuffer is backed by a byte array, so HeapChannelBuffer also wraps a byte array internally, which makes the conversion between ByteBuffer and ChannelBuffer zero-copy. Depending on the network byte order, HeapChannelBuffer is divided into BigEndianHeapChannelBuffer and LittleEndianHeapChannelBuffer; by default, BigEndianHeapChannelBuffer is used. HeapChannelBuffer is a fixed-size buffer, and to avoid allocating a buffer of an inappropriate size, Netty refers to the size of the previous request when allocating one.
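The big-endian default matches network byte order. A quick check with java.nio.ByteBuffer (JDK code, not Netty's) shows the difference between the two orders:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

// Writing the same int under both byte orders: big-endian puts the most
// significant byte first (network byte order), little-endian the reverse.
public class EndianDemo {
    public static byte[] toBytes(int value, ByteOrder order) {
        ByteBuffer buf = ByteBuffer.allocate(4).order(order);
        buf.putInt(value);
        return buf.array();
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(toBytes(0x01020304, ByteOrder.BIG_ENDIAN)));
        System.out.println(Arrays.toString(toBytes(0x01020304, ByteOrder.LITTLE_ENDIAN)));
    }
}
```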

2) DynamicChannelBuffer: compared with HeapChannelBuffer, DynamicChannelBuffer adapts its size dynamically. It is typically used for write operations in a decode handler, where the size of the data is not known in advance.

3) ByteBufferBackedChannelBuffer: this is the direct-buffer implementation; it directly wraps a ByteBuffer direct buffer.

There are two allocation strategies for the buffers used to read and write network data: 1) the usual, simple approach is to allocate a fixed-size buffer up front; the drawback is that a fixed limit is unreasonable for some applications, and if the limit is large, memory is wasted. 2) To address that drawback, a dynamic buffer that grows on demand can be used. A dynamic buffer is to a fixed buffer what an ArrayList is to an array.
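The ArrayList analogy can be made concrete in a few lines (an illustrative sketch, not DynamicChannelBuffer's actual implementation): the buffer starts small and doubles its backing array whenever a write would not fit.

```java
import java.util.Arrays;

// Minimal grow-on-demand buffer: like an ArrayList over a byte array,
// it doubles its capacity whenever a write would not fit.
public class DynamicBuffer {
    private byte[] data = new byte[16];
    private int writerIndex = 0;

    public void writeBytes(byte[] src) {
        ensureWritable(src.length);
        System.arraycopy(src, 0, data, writerIndex, src.length);
        writerIndex += src.length;
    }

    private void ensureWritable(int needed) {
        if (writerIndex + needed <= data.length) {
            return;
        }
        int newCapacity = data.length;
        while (newCapacity < writerIndex + needed) {
            newCapacity <<= 1; // double, ArrayList-style growth
        }
        data = Arrays.copyOf(data, newCapacity);
    }

    public int readableBytes() {
        return writerIndex;
    }

    public int capacity() {
        return data.length;
    }
}
```

Doubling keeps the amortized cost of a write constant, which is why both ArrayList and dynamic buffers grow this way.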

There are also two common storage policies for buffers (at least, these are the only two I know of): 1) in a multithreaded (thread pool) model, each thread maintains its own read and write buffers, the buffers are cleared before (or after) each new request is processed, and all reading and writing for a request must be done within that thread; 2) the buffer is bound to the socket and is independent of any thread. The purpose of both approaches is to reuse buffers.

Netty's buffer handling strategy is as follows. When reading request data, Netty first reads the data into a newly created fixed-size HeapChannelBuffer. When that HeapChannelBuffer is full or there is no more data to read, the handler chain is invoked to process the data; this usually triggers the user's own decode handler first. Because a handler object is bound to its ChannelSocket, the decode handler can keep a ChannelBuffer member to accumulate data: if the decoder finds that the packet is incomplete, processing stops there, and when the next read event fires, decoding continues from the previously saved data. In this design, the buffer the decode handler binds to the ChannelSocket is usually a dynamic, reusable buffer (DynamicChannelBuffer), while the NioWorker reads the ChannelSocket's data into a temporarily allocated fixed-size HeapChannelBuffer; copying from one to the other involves a byte copy.
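The accumulate-then-parse loop described above can be sketched with the JDK alone (FrameAccumulator is a hypothetical name; in Netty this bookkeeping is what FrameDecoder does for you): fragments are appended to a growing buffer, every complete "4-byte length + payload" frame is emitted, and an incomplete tail is kept for the next read event.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Accumulates arbitrary read fragments and emits every complete
// "4-byte big-endian length + payload" frame; an incomplete tail is
// kept in the buffer until the next feed() call.
public class FrameAccumulator {
    private ByteBuffer acc = ByteBuffer.allocate(64); // kept in write mode
    private final List<byte[]> frames = new ArrayList<byte[]>();

    public void feed(byte[] fragment) {
        if (acc.remaining() < fragment.length) {
            grow(fragment.length);
        }
        acc.put(fragment);
        drain();
    }

    private void grow(int extra) {
        ByteBuffer bigger = ByteBuffer.allocate(
                Math.max(acc.capacity() * 2, acc.position() + extra));
        acc.flip();
        bigger.put(acc);
        acc = bigger;
    }

    private void drain() {
        acc.flip(); // switch to read mode
        while (acc.remaining() >= 4) {
            acc.mark();
            int length = acc.getInt();
            if (acc.remaining() < length) {
                acc.reset(); // frame incomplete: rewind and wait for more data
                break;
            }
            byte[] frame = new byte[length];
            acc.get(frame);
            frames.add(frame);
        }
        acc.compact(); // keep the unread tail, back to write mode
    }

    public List<byte[]> frames() {
        return frames;
    }
}
```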

To create ChannelBuffers, Netty internally uses the ChannelBufferFactory interface, whose concrete implementations are DirectChannelBufferFactory and HeapChannelBufferFactory. Developers who need to create a ChannelBuffer should use the factory methods in the utility class ChannelBuffers.

4. Channel

The interfaces and classes related to Channel are shown in the following diagram:

As the structure diagram shows, the main capabilities a Channel provides are:

1) The current state of the channel, such as open or closed.
2) The channel's configuration, available through ChannelConfig.
3) The I/O operations the channel supports, such as read, write, bind, and connect.
4) Access to the ChannelPipeline that handles the channel, which the caller can use to perform the I/O operations related to a request.

As for Channel implementations, Netty's NioServerSocketChannel and NioSocketChannel wrap the functionality of java.nio's ServerSocketChannel and SocketChannel, respectively.

5. ChannelEvent

As mentioned earlier, Netty is event-driven and uses ChannelEvent to determine the direction of event flow. A ChannelEvent is handed to the ChannelPipeline attached to its Channel, and the ChannelPipeline invokes ChannelHandlers to do the actual processing. The interfaces and classes related to ChannelEvent are shown below:

For users, the object handled in a ChannelHandler implementation class is a MessageEvent, which inherits from ChannelEvent; calling its getMessage() method yields the ChannelBuffer that was read, or the object it has been converted into.

6. ChannelPipeline

For event handling, Netty uses a typical interceptor pattern: the ChannelPipeline controls the flow of events and handles them by invoking the sequence of ChannelHandlers registered on it. The interfaces and classes related to ChannelPipeline are shown below:

There are two kinds of event flow: upstream events and downstream events. A ChannelHandler can be registered in the ChannelPipeline as a ChannelUpstreamHandler, a ChannelDownstreamHandler, or both, but as an event passes through the ChannelPipeline, only the handlers matching its direction are invoked. In this filter chain, a ChannelUpstreamHandler or ChannelDownstreamHandler can either terminate processing or pass the event on by calling ChannelHandlerContext.sendUpstream(ChannelEvent) or ChannelHandlerContext.sendDownstream(ChannelEvent). The following figure illustrates event flow processing:

As the figure shows, upstream events are processed by the upstream handlers one by one from bottom to top, while downstream events are processed by the downstream handlers from top to bottom; "top" and "bottom" here refer to the order in which the handlers were added to the ChannelPipeline. Put simply, upstream events are the processing of requests coming in from outside, and downstream events are the processing of outgoing requests.
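The two directions can be illustrated with a toy interceptor chain (MiniPipeline and its interfaces are invented for illustration; Netty's real API passes ChannelEvents through a ChannelHandlerContext rather than returning values): upstream handlers fire in the order they were added, downstream handlers in reverse order.

```java
import java.util.ArrayList;
import java.util.List;

// Toy interceptor chain: upstream handlers fire in add order (bottom-up),
// downstream handlers in reverse add order (top-down).
public class MiniPipeline {
    public interface Upstream { String handleUpstream(String msg); }
    public interface Downstream { String handleDownstream(String msg); }

    private final List<Object> handlers = new ArrayList<Object>();

    public void addLast(Object handler) {
        handlers.add(handler);
    }

    public String sendUpstream(String msg) {
        for (Object h : handlers) {
            if (h instanceof Upstream) {
                msg = ((Upstream) h).handleUpstream(msg);
            }
        }
        return msg;
    }

    public String sendDownstream(String msg) {
        for (int i = handlers.size() - 1; i >= 0; i--) {
            Object h = handlers.get(i);
            if (h instanceof Downstream) {
                msg = ((Downstream) h).handleDownstream(msg);
            }
        }
        return msg;
    }
}
```

Handlers of the wrong direction are simply skipped, which mirrors how Netty only invokes the ChannelHandlers that match an event's flow direction.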

A server typically processes a request by decoding it, running the business logic, and encoding the response, so the ChannelPipeline it builds looks something like the following snippet:

```java
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new MyProtocolDecoder());
pipeline.addLast("encoder", new MyProtocolEncoder());
pipeline.addLast("handler", new MyBusinessLogicHandler());
```

Here MyProtocolDecoder is a ChannelUpstreamHandler, MyProtocolEncoder is a ChannelDownstreamHandler, and MyBusinessLogicHandler can be either a ChannelUpstreamHandler or a ChannelDownstreamHandler, depending on whether it is a server-side or client-side program and on the application's needs.

One more point: Netty decouples abstraction from implementation nicely. For example, the org.jboss.netty.channel.socket package defines the interfaces related to socket handling, while packages such as org.jboss.netty.channel.socket.nio and org.jboss.netty.channel.socket.oio provide the transport-specific implementations.

7. Codec framework

For encoding and decoding the request protocol, you can of course manipulate the byte data in a ChannelBuffer directly according to the protocol format. On the other hand, Netty also provides some very useful codec helpers, briefly introduced here.

1) FrameDecoder: FrameDecoder internally maintains a DynamicChannelBuffer member to store received data. It works like an abstract template: the overall decoding flow is already written, and subclasses only need to implement the decode function. FrameDecoder has two direct implementations: (1) DelimiterBasedFrameDecoder, a decoder based on a delimiter (such as \r\n), which can be specified in the constructor; (2) LengthFieldBasedFrameDecoder, a decoder based on a length field. If the protocol format is something like "content length" + content, or "fixed header" + "content length" + dynamic content, you can use this decoder; its API doc explains it in detail.
2) ReplayingDecoder: a variant subclass of FrameDecoder that, relative to FrameDecoder, offers non-blocking-style decoding. In other words, with FrameDecoder you must account for the possibility that the data read so far is incomplete, while with ReplayingDecoder you can write the decode logic as if all the data had already arrived.
3) ObjectEncoder and ObjectDecoder: encode and decode serialized Java objects.
4) HttpRequestEncoder and HttpRequestDecoder: HTTP protocol handling.

Here are two examples, one using FrameDecoder and one using ReplayingDecoder:

```java
public class IntegerHeaderFrameDecoder extends FrameDecoder {

    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buf) throws Exception {
        if (buf.readableBytes() < 4) {
            return null;
        }
        buf.markReaderIndex();
        int length = buf.readInt();
        if (buf.readableBytes() < length) {
            buf.resetReaderIndex();
            return null;
        }
        return buf.readBytes(length);
    }
}
```

The equivalent decoding logic written with ReplayingDecoder, shown below, is considerably simpler.

```java
public class IntegerHeaderFrameDecoder2 extends ReplayingDecoder<VoidEnum> {

    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buf, VoidEnum state) throws Exception {
        return buf.readBytes(buf.readInt());
    }
}
```

In terms of implementation, when a ChannelBuffer read is invoked inside the decode method of a ReplayingDecoder subclass and the read fails for lack of data, ReplayingDecoder catches the error the buffer throws, regains control, and waits for the next read event to continue decoding.
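That replay mechanism can be sketched over a plain ByteBuffer (the names here are invented; Netty uses a special ChannelBuffer and a cached signal error internally): reads that run short of data throw a sentinel Error, the framework catches it, rewinds the buffer, and retries the whole decode when more data arrives.

```java
import java.nio.ByteBuffer;

// Sketch of ReplayingDecoder's trick: buffer reads that run short of data
// throw a sentinel Error; the caller catches it, rewinds, and retries later.
public class ReplayingReader {
    static final Error REPLAY = new Error("replay");

    private final ByteBuffer buf;

    ReplayingReader(ByteBuffer buf) {
        this.buf = buf;
    }

    int readInt() {
        if (buf.remaining() < 4) {
            throw REPLAY; // not enough data yet
        }
        return buf.getInt();
    }

    byte[] readBytes(int n) {
        if (buf.remaining() < n) {
            throw REPLAY;
        }
        byte[] out = new byte[n];
        buf.get(out);
        return out;
    }

    // One decode attempt over a length-prefixed frame; null means "not ready",
    // and the buffer position is rewound so the attempt can be replayed.
    static byte[] tryDecode(ByteBuffer buf) {
        buf.mark();
        try {
            ReplayingReader r = new ReplayingReader(buf);
            return r.readBytes(r.readInt());
        } catch (Error e) {
            if (e != REPLAY) {
                throw e;
            }
            buf.reset();
            return null;
        }
    }
}
```

Because the sentinel aborts the decode body at the first short read, the subclass's decode logic never has to check readableBytes() itself, which is exactly what makes the IntegerHeaderFrameDecoder2 example above so short.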

