Implementation Principles of Netty (Java NIO)


This article analyzes how Netty is implemented. My time was limited, so I have not studied the source code exhaustively; if anything below is incorrect or imprecise, corrections and understanding are appreciated. For Netty users, Netty ships with several typical examples, detailed API docs, and a guide. Some of the content and illustrations in this article also come from Netty's documentation, for which I am grateful.

1. Overall Structure

Let's start with a nice diagram of Netty's overall structure. The sections below analyze some of the core features shown in that diagram; advanced optional features such as container integration and security support are not covered in this article.

2. Network Model

Netty is a typical Reactor-model implementation. For the Reactor pattern itself, refer to POSA2; no conceptual explanation is given here. For building the Reactor pattern on top of Java NIO, Doug Lea (that remarkable guy) gave an excellent exposition in his classic presentation "Scalable IO in Java"; the diagrams here are taken from that slide deck.
Typical implementations of the Reactor pattern:

1. The simplest single-threaded Reactor model. One reactor thread does everything: it demultiplexes the sockets, accepts new connections, and dispatches requests into the handler chain. This model is suitable when every business-processing component in the handler chain finishes quickly. However, a single-threaded model cannot make full use of multi-core hardware, so it is rarely used in practice.

2. Compared with the previous model, this one runs the handler chain on multiple threads (a thread pool); it is also a common model for backend programs.

3. Compared with the second model, the third splits the reactor into two parts: the main reactor listens on the server socket and accepts new connections, then hands each established socket to a sub-reactor. The sub-reactor is responsible for demultiplexing the connected sockets and for the network reads and writes, while business processing is handed off to the worker thread pool. The number of sub-reactors is usually set to roughly the number of CPUs.

Having covered these three forms of the Reactor model, where does Netty fit in? Netty is in fact yet another variant of the Reactor model: the third form with the worker thread pool removed. This is also Netty's default NIO mode. In the implementation, Netty's Boss class acts as the main reactor and the NioWorker class acts as the sub-reactor (the default number of NioWorker instances is Runtime.getRuntime().availableProcessors()). When handling a new request, the NioWorker reads the received data into a ChannelBuffer and then fires the ChannelHandler chain in the ChannelPipeline.
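
As a rough sketch of how these pieces are wired together on the server side with Netty 3's bootstrap API (MyBusinessLogicHandler is a hypothetical application handler, not part of Netty):

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class NioServerSketch {
    public static void main(String[] args) {
        // The boss executor backs the main reactor (accepting connections);
        // the worker executor backs the NioWorker sub-reactors (I/O and pipeline execution).
        ServerBootstrap bootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),    // boss threads
                        Executors.newCachedThreadPool()));  // worker threads

        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() {
                // MyBusinessLogicHandler is a placeholder for the application's handler.
                return Channels.pipeline(new MyBusinessLogicHandler());
            }
        });

        bootstrap.bind(new InetSocketAddress(8080));
    }
}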

Netty is event-driven, and the execution flow is controlled through the ChannelHandler chain. Because the ChannelHandler chain is executed synchronously inside the sub-reactor, a business handler that takes a long time severely limits the number of concurrent connections that can be supported. This model suits applications such as memcache, but not systems that need to access a database or block while interacting with other modules.

Netty's extensibility is very good here. To run ChannelHandlers on a thread pool, you only need to add the built-in ChannelHandler implementation, ExecutionHandler, to the ChannelPipeline; it takes a single line of code. For the thread pool behind the ExecutionHandler, Netty offers two options: 1) MemoryAwareThreadPoolExecutor can cap the total amount of work pending in the executor (once the cap is exceeded, subsequent tasks are blocked) and can also cap the amount of pending work per channel; 2) OrderedMemoryAwareThreadPoolExecutor is a subclass of MemoryAwareThreadPoolExecutor that additionally preserves the order of the events processed for the same channel. It constrains the order in which events are handled in this asynchronous mode, but it does not guarantee that all events of one channel are executed on a single thread (which is usually unnecessary). OrderedMemoryAwareThreadPoolExecutor is generally a good choice; of course, you can also roll your own if needed.
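
A minimal sketch of wiring an ExecutionHandler into the pipeline (the pool sizes and handler names below are illustrative assumptions; ExecutionHandler and the executors live in org.jboss.netty.handler.execution):

ExecutionHandler executionHandler = new ExecutionHandler(
        // 16 pool threads, at most 1 MiB of queued events per channel, 16 MiB in total;
        // events belonging to the same channel keep their order.
        new OrderedMemoryAwareThreadPoolExecutor(16, 1 << 20, 16 << 20));

ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new MyProtocolDecoder());       // hypothetical decoder
pipeline.addLast("executor", executionHandler);             // events are handed off to the pool here
pipeline.addLast("handler", new MyBusinessLogicHandler());  // runs on the pool threads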

3. Buffer

The interfaces and classes of the org.jboss.netty.buffer package are structured as follows:

The core interfaces of this package are ChannelBuffer and ChannelBufferFactory; a brief introduction follows.

Netty uses ChannelBuffer to store and manipulate the data read from and written to the network. Besides offering methods similar to ByteBuffer's, ChannelBuffer provides a number of practical convenience methods; see the API documentation for details. ChannelBuffer has several implementation classes; the main ones are:

1) HeapChannelBuffer: this is the default ChannelBuffer Netty uses when reading network data; "heap" here refers to the Java heap. Data read from a SocketChannel must pass through a ByteBuffer, and the ByteBuffer is actually backed by a byte array, so a HeapChannelBuffer wraps a byte array directly, which makes the conversion between ByteBuffer and HeapChannelBuffer zero-copy. Depending on the network byte order, HeapChannelBuffer comes as BigEndianHeapChannelBuffer or LittleEndianHeapChannelBuffer, with BigEndianHeapChannelBuffer being the default. Netty reads network data into a fixed-size HeapChannelBuffer; to avoid allocating a badly sized buffer, Netty takes the size required by the previous request into account when allocating a new one.

2) DynamicChannelBuffer: in contrast to HeapChannelBuffer, a DynamicChannelBuffer can grow dynamically. It is typically used for writing data in a decode handler when the size of the data is not known in advance.

3) ByteBufferBackedChannelBuffer: this is the direct buffer; it is a direct wrapper around a ByteBuffer direct buffer.

For the buffers used to read and write network data there are two allocation strategies: 1) for simplicity, allocate a fixed-size buffer directly; the drawback is that for some applications this size limit is unreasonable, and if the limit is large, memory is wasted; 2) to address the drawback of a fixed size, introduce a dynamically growing buffer, which relates to a fixed buffer roughly the way a List relates to an array.

There are also two common buffer reuse strategies (at least the two I am aware of): 1) in a multi-threaded (thread pool) model, each thread maintains its own read/write buffer and clears it before (or after) handling each request; the request's reads and writes must then be completed on that thread; 2) bind the buffer to the socket rather than to a thread. Both approaches aim at buffer reuse.

Netty's buffer handling strategy is as follows. When reading request data, Netty first reads the data into a newly created HeapChannelBuffer of fixed size; when that buffer is full or there is no more data to read, the handlers are invoked, which usually triggers the user-defined decode handler first. Because a handler object is bound to its ChannelSocket, the decode handler can keep a ChannelBuffer as a member: when parsing finds that a packet is still incomplete, processing stops, and the leftover data is parsed the next time data arrives. In this process, the buffer held by the decode handler and bound to the ChannelSocket is usually a dynamic, reusable buffer (DynamicChannelBuffer), whereas the buffer the NioWorker uses to read from the ChannelSocket is a temporarily allocated, fixed-size HeapChannelBuffer. The transfer between the two involves a byte copy.

For creating ChannelBuffers, Netty internally uses the ChannelBufferFactory interface, whose concrete implementations are DirectChannelBufferFactory and HeapChannelBufferFactory. Developers who need to create a ChannelBuffer themselves can use the factory methods of the helper class ChannelBuffers.
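
A small sketch of the ChannelBuffers factory methods (the capacities and payloads are arbitrary; ChannelBuffers is in org.jboss.netty.buffer and CharsetUtil in org.jboss.netty.util):

ChannelBuffer fixed = ChannelBuffers.buffer(256);            // fixed-capacity heap buffer
fixed.writeInt(42);

ChannelBuffer dynamic = ChannelBuffers.dynamicBuffer();      // grows as data is written
dynamic.writeBytes("hello".getBytes(CharsetUtil.UTF_8));

ChannelBuffer wrapped = ChannelBuffers.wrappedBuffer(new byte[] { 1, 2, 3 });  // zero-copy wrap of an existing array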

4. Channel

The Channel-related interfaces and classes are structured as follows:

As the structure diagram shows, the main things a Channel provides are:

1) status information about the channel, for example whether it is open or connected;
2) the channel's configuration, obtainable through ChannelConfig;
3) the I/O operations the channel supports, such as read, write, bind, and connect;
4) the ChannelPipeline that processes this channel, through which all I/O operations and request-related events on the channel are handled.

In terms of implementation, for TCP sockets Netty's NioServerSocketChannel and NioSocketChannel wrap the functionality of Java NIO's ServerSocketChannel and SocketChannel respectively.
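
As a rough illustration of these Channel operations (the payload and the close-on-completion behavior are assumptions made for the example; the channel would typically come from a connect or accept event):

void sayGoodbye(Channel channel) {
    if (channel.isConnected()) {                          // status information
        ChannelFuture future = channel.write(             // asynchronous I/O operation
                ChannelBuffers.copiedBuffer("bye\n", CharsetUtil.UTF_8));
        future.addListener(ChannelFutureListener.CLOSE);  // close the channel once the write completes
    }
}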

5. channelevent

As mentioned above, Netty is event-driven and uses ChannelEvent to determine the direction of the event flow. A ChannelEvent belongs to a Channel and is handled by that channel's ChannelPipeline, which calls the ChannelHandlers for the actual processing. The ChannelEvent-related interfaces and class diagram are as follows:

From the user's point of view, inside a ChannelHandler implementation you mostly deal with MessageEvent, which extends ChannelEvent, calling its getMessage() method to obtain the ChannelBuffer that was read, or the object it has been converted into.
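
A minimal sketch of a handler that consumes the MessageEvent (a trivial echo handler written purely for illustration):

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class EchoHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // With no decoder in front of this handler, getMessage() returns the raw ChannelBuffer.
        ChannelBuffer buf = (ChannelBuffer) e.getMessage();
        e.getChannel().write(buf);  // write the received bytes straight back
    }
}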

6. channelpipeline

In Netty's event handling, the event flow is controlled by the ChannelPipeline, which handles events by invoking the series of ChannelHandlers registered on it; this is a typical interceptor pattern. The ChannelPipeline-related interfaces and class diagram are as follows:

Events flow in two directions: upstream events and downstream events. A ChannelHandler registered in a ChannelPipeline is either a ChannelUpstreamHandler or a ChannelDownstreamHandler, and as an event travels through the pipeline only the handlers matching its direction are invoked. Within this filter chain, a ChannelUpstreamHandler or ChannelDownstreamHandler can either terminate the flow or pass the event on by calling ChannelHandlerContext.sendUpstream(ChannelEvent) or ChannelHandlerContext.sendDownstream(ChannelEvent). The following figure shows how the event streams are processed:

As the figure shows, upstream events are processed by the upstream handlers one by one from bottom to top, while downstream events are processed by the downstream handlers one by one from top to bottom; "top" and "bottom" here refer to the order in which the handlers were added to the ChannelPipeline. Put simply, an upstream event is the processing of a request coming in from the outside, while a downstream event is the processing of a request or response being sent out.

Handling a request on the server side usually means decoding the request, running the business logic, and encoding the response, so the ChannelPipeline is typically constructed like the following code snippet:

ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new MyProtocolDecoder());
pipeline.addLast("encoder", new MyProtocolEncoder());
pipeline.addLast("handler", new MyBusinessLogicHandler());

Here MyProtocolDecoder is a ChannelUpstreamHandler and MyProtocolEncoder is a ChannelDownstreamHandler; MyBusinessLogicHandler can be a ChannelUpstreamHandler, a ChannelDownstreamHandler, or both, depending on whether it is a server-side or client-side program and on the needs of the application.
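
For illustration, a downstream encoder such as MyProtocolEncoder could be sketched on top of Netty's OneToOneEncoder helper; the length-prefixed wire format used here is purely an assumption:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.oneone.OneToOneEncoder;

public class MyProtocolEncoder extends OneToOneEncoder {
    @Override
    protected Object encode(ChannelHandlerContext ctx, Channel channel, Object msg) {
        ChannelBuffer body = (ChannelBuffer) msg;
        // Prepend a 4-byte length header to the outgoing payload.
        ChannelBuffer frame = ChannelBuffers.dynamicBuffer();
        frame.writeInt(body.readableBytes());
        frame.writeBytes(body);
        return frame;
    }
}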

In addition, Netty decouples abstraction from implementation. For example, the org.jboss.netty.channel.socket package defines the interfaces for socket handling, while packages such as org.jboss.netty.channel.socket.nio and org.jboss.netty.channel.socket.oio contain the concrete transport implementations.

7. codec framework

For encoding and decoding the request protocol, you can of course operate on the raw bytes in a ChannelBuffer according to the protocol format. Beyond that, Netty provides a few practical codec helpers, briefly introduced here.

1) FrameDecoder: FrameDecoder internally maintains a DynamicChannelBuffer to accumulate the received data. It works like an abstract template: the overall decoding workflow is written once, and subclasses only need to implement the decode method. FrameDecoder has two direct implementation classes (see the pipeline sketch after this list): (1) DelimiterBasedFrameDecoder, a delimiter-based decoder (for example \r\n); the delimiters can be specified in the constructor; (2) LengthFieldBasedFrameDecoder, a decoder based on a length field, usable for formats such as "content length" + content or "fixed header" + "content length" + dynamic content; its usage is explained in detail in the API docs.
2) ReplayingDecoder: a variant subclass of FrameDecoder. Compared with FrameDecoder it offers non-blocking-style decoding: with FrameDecoder you must account for the possibility that the data read so far is incomplete, whereas with ReplayingDecoder you can write the decode logic as if all the data had already arrived.
3) ObjectEncoder and ObjectDecoder: encode and decode serialized Java objects.
4) HttpRequestEncoder and HttpRequestDecoder: HTTP protocol handling.
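
A small sketch of plugging these ready-made frame decoders into a pipeline (the frame size limit and handler names are assumptions; the classes live in org.jboss.netty.handler.codec.frame and org.jboss.netty.handler.codec.string):

ChannelPipeline pipeline = Channels.pipeline();
// Split the byte stream into frames at line delimiters (\r\n or \n), at most 8192 bytes per frame.
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
// Convert each frame into a String for the handlers further up the pipeline.
pipeline.addLast("stringDecoder", new StringDecoder());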

Here are two examples of implementing a custom decoder, first with FrameDecoder and then with ReplayingDecoder:

public class IntegerHeaderFrameDecoder extends FrameDecoder {
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buf) throws Exception {
        // Wait until the 4-byte length header has arrived.
        if (buf.readableBytes() < 4) {
            return null;
        }
        buf.markReaderIndex();
        int length = buf.readInt();
        // Not enough data for the body yet: rewind and wait for more.
        if (buf.readableBytes() < length) {
            buf.resetReaderIndex();
            return null;
        }
        // A complete frame is available; pass it to the next handler.
        return buf.readBytes(length);
    }
}

The equivalent decoder written with ReplayingDecoder looks like the following, which is much simpler:

public class IntegerHeaderFrameDecoder2 extends ReplayingDecoder<VoidEnum> {
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buf, VoidEnum state) throws Exception {
        // Written as if all bytes were already available; ReplayingDecoder replays if they are not.
        return buf.readBytes(buf.readInt());
    }
}

As for the implementation: when the decode method of a ReplayingDecoder subclass reads from the ChannelBuffer and the read fails because not enough bytes are available, ReplayingDecoder catches the error that is thrown and takes back control; when further data arrives later, decode is invoked again and the decoding continues.

8. Summary

Although this article is coming to an end, it by no means fully explains Netty's implementation. When I planned it, I read the Netty code and jotted things down as I went, but only intermittently, and my interest had largely faded by the end. I still love doing source-code analysis, but my time is limited, and if the results of such an analysis cannot be organized into something coherent, it loses much of its value and appeal. Judging from the code I did read, Netty's code is very elegant and its layering is very clear. However, this interface-oriented, heavily abstracted style makes the code harder to trace: while following a call path you constantly run into interfaces and abstract classes, and you can only go back and forth between the factory classes and the API docs to match interfaces with their implementation classes. Like almost every good Java open-source project, Netty uses a series of excellent design patterns, and one could probably write a separate article analyzing it from the design-pattern angle, although I have no such plan at the moment; now that this article is finished, I am not particularly keen to dig into the Netty code again.
