Netty learning experience


Over the past few days I have been working on server-side Java code that needs an NIO framework. The leading options at the moment are Mina and Netty; after comparing them, I chose Netty as our development framework.

The most important resource for learning Netty is the official documentation, which I think is true of most frameworks. Reading it in English can be tiring, so while working through it, it helps to combine it with whatever Chinese material you can find.

 

Netty principles:

Netty is event-driven and can control the execution flow through the ChannelHandler chain.

ChannelHandler

The ChannelHandler chain is executed synchronously in the sub-reactor thread. Therefore, if a business handler takes a long time to run, it will severely limit the number of concurrent connections that can be supported.

Netty is highly extensible. To run a ChannelHandler in a thread pool, you only need to add the built-in ExecutionHandler to the ChannelPipeline; from the user's point of view it is a single line of code. For the thread pool behind ExecutionHandler, Netty offers two options:

1) MemoryAwareThreadPoolExecutor limits the total amount of pending task data in the executor (tasks beyond the limit are blocked) and can also limit the amount of pending data for a single Channel.

2) OrderedMemoryAwareThreadPoolExecutor, a subclass of MemoryAwareThreadPoolExecutor, additionally preserves the order in which events of the same Channel are processed, which mainly matters in an asynchronous processing model. It does not guarantee that all events of a Channel are executed in a single thread, which is usually unnecessary anyway.

In general, OrderedMemoryAwareThreadPoolExecutor is a good choice; of course, you can also roll your own if needed.
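As a minimal sketch of the "one line of code" point above (the placeholder business handler and the pool sizes are assumptions for illustration), the ExecutionHandler is simply inserted into the pipeline in front of the business handler:

```java
import java.util.concurrent.Executor;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class PipelineSetup {
    // Placeholder business handler; a real server would do its work here.
    static class BusinessHandler extends SimpleChannelUpstreamHandler { }

    public static ChannelPipeline buildPipeline() {
        // 16 threads; at most 1 MiB of queued task data per Channel and
        // 16 MiB in total (submissions beyond the limits block).
        Executor executor =
                new OrderedMemoryAwareThreadPoolExecutor(16, 1 << 20, 16 << 20);

        ChannelPipeline pipeline = Channels.pipeline();
        // Handlers added after the ExecutionHandler run in the thread pool
        // instead of the I/O (sub-reactor) thread.
        pipeline.addLast("executor", new ExecutionHandler(executor));
        pipeline.addLast("business", new BusinessHandler());
        return pipeline;
    }
}
```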

ChannelPipeline p = Channels.pipeline();

/*
 * Using the bootstrap's default ChannelPipeline means the same
 * DLPServerHandler instance is shared by multiple Channels. This is only
 * appropriate when DLPServerHandler keeps no per-connection state in its
 * member variables, and it improves performance. (Not verified.)
 */
ServerBootstrap serverBootstrap;
ChannelPipeline pipeline = serverBootstrap.getPipeline();


A typical server program processes a request in three steps: decode the request data, run the business logic, and encode the response. From the framework's perspective, these could be exposed as three fixed interfaces that control and schedule the processing. A more general approach gives no step special treatment and instead makes each step one link in a filter chain, and that is what Netty does: it implements the filter-chain pattern (ChannelPipeline) for request processing, and each filter implements the ChannelHandler interface. Netty further divides the events flowing through the chain into two types:

1) downstream event: the corresponding ChannelHandler sub-interface is ChannelDownstreamHandler. A downstream event travels through the ChannelDownstreamHandlers of the ChannelPipeline from front to back and corresponds to sending data to the outside. Downstream events include: "write", "bind", "unbind", "connect", "disconnect", and "close".

2) upstream event: the corresponding ChannelHandler sub-interface is ChannelUpstreamHandler. An upstream event flows through the chain in the opposite direction to a downstream event and corresponds to receiving and processing external requests. Upstream events include: "messageReceived", "exceptionCaught", "channelOpen", "channelClosed", "channelBound", "channelUnbound", "channelConnected", "writeComplete", "channelDisconnected", and "channelInterestChanged".
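As a small illustration of the upstream side (the class name and echo behavior are hypothetical), SimpleChannelUpstreamHandler exposes one callback per upstream event, so a handler only overrides the ones it cares about:

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Hypothetical handler: logs the connect event and echoes messages back.
public class EchoUpstreamHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        System.out.println("connected: " + e.getChannel());
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Writing here fires a downstream "write" event back along the pipeline.
        e.getChannel().write(e.getMessage());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getChannel().close();
    }
}
```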

Netty provides an annotation, @ChannelPipelineCoverage, which documents whether the annotated ChannelHandler can be added to more than one ChannelPipeline. Its value is either "all" or "one": "all" means the ChannelHandler is stateless and can be shared by multiple ChannelPipelines; "one" means the ChannelHandler serves a single ChannelPipeline. Note that ChannelPipelineCoverage is only documentation; nothing actually enforces it. Whether a ChannelHandler is "all" or "one" depends on its logic. For example, a request-decoding handler must be "one": the decoded data may be incomplete, so the handler has to hold partial state until the next read event continues the parse, and sharing it across Channels would mix their data. A business-logic handler, by contrast, is usually "all".
(1) In the upstream event flow, the object returned by decode() is wrapped in a new MessageEvent, so what MessageEvent.getMessage() returns depends on where the handler sits in the pipeline; it is not always a ChannelBuffer.
(2) The e.getChannel().write(msg) call in MessageServerHandler triggers a downstream event (DownstreamMessageEvent): MessageEncoder is invoked and the encoded data is written back to the client.
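A MessageEncoder of the kind mentioned in (2) might look like the following sketch; the String-to-UTF-8 "protocol" is purely an assumption for illustration. OneToOneEncoder sits on the downstream path and converts each outgoing message:

```java
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.oneone.OneToOneEncoder;
import org.jboss.netty.util.CharsetUtil;

// Hypothetical encoder: turns a String reply into bytes on the downstream path.
public class MessageEncoder extends OneToOneEncoder {
    @Override
    protected Object encode(ChannelHandlerContext ctx, Channel channel, Object msg)
            throws Exception {
        if (!(msg instanceof String)) {
            return msg; // pass other message types through unchanged
        }
        return ChannelBuffers.copiedBuffer((String) msg, CharsetUtil.UTF_8);
    }
}
```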

After creating your ChannelHandlers, you need to register them with a ChannelPipeline; a ChannelPipeline corresponds to a Channel (whether that means a global singleton or one instance per Channel is up to the implementation). Pipelines can be produced by implementing the factory interface ChannelPipelineFactory.
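A minimal sketch of such a factory, wired into a ServerBootstrap (the single placeholder handler is an assumption):

```java
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class FactorySetup {
    public static ChannelPipelineFactory factory() {
        return new ChannelPipelineFactory() {
            public ChannelPipeline getPipeline() throws Exception {
                // Called for each accepted connection, so every Channel gets
                // its own handler instances ("one" coverage is safe here).
                ChannelPipeline pipeline = Channels.pipeline();
                pipeline.addLast("handler", new SimpleChannelUpstreamHandler());
                return pipeline;
            }
        };
    }

    public static void configure(ServerBootstrap bootstrap) {
        bootstrap.setPipelineFactory(factory());
    }
}
```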

Core package: buffer

http://static.netty.io/3.5/api/org/jboss/netty/buffer/package-summary.html


The core interfaces of this package are ChannelBuffer and ChannelBufferFactory. The following is a brief introduction.
Netty uses ChannelBuffer to store and operate read/write network data. In addition to methods similar to ByteBuffer, ChannelBuffer also provides some practical methods. For details, refer to its API documentation.
There are multiple implementation classes of ChannelBuffer. Here are the main ones:
1) HeapChannelBuffer: the default ChannelBuffer Netty uses when reading network data. "Heap" here refers to the Java heap: data read from a SocketChannel must pass through a ByteBuffer, and a heap ByteBuffer is backed by a byte array, so HeapChannelBuffer wraps that same byte array and the conversion between ByteBuffer and ChannelBuffer is zero-copy. Depending on network byte order, HeapChannelBuffer is split into BigEndianHeapChannelBuffer and LittleEndianHeapChannelBuffer; big-endian is the default. HeapChannelBuffer is fixed-size, so to avoid allocating an unsuitable size, Netty uses the size required by the previous read as a hint when allocating the next buffer.
2) DynamicChannelBuffer: unlike HeapChannelBuffer, DynamicChannelBuffer can grow dynamically. It is typically used for write operations in a DecodeHandler, where the total size of the incoming data is not known in advance.
3) ByteBufferBackedChannelBuffer: a direct buffer; it simply wraps a direct ByteBuffer.
For buffers that hold network read/write data there are two allocation policies: 1) the simple approach is to allocate a fixed-size buffer; the drawback is that no single size limit suits every application, and a generous limit wastes memory. 2) to address that drawback, dynamic buffers were introduced; a dynamic buffer is to a fixed buffer roughly what ArrayList is to a plain array.
There are also two common buffer-reuse policies (at least, the two I know of): 1) in a multi-threaded (thread pool) model, each thread maintains its own read/write buffer, which is cleared before each new request is processed (or after processing finishes), and all reads and writes for a request must happen on that thread; 2) the buffer is bound to the socket rather than to any thread. Both approaches aim to reuse buffers.
Netty's buffer policy works as follows. When reading request data, Netty first reads into a newly created fixed-size HeapChannelBuffer. When that buffer is full, or there is no more data to read, the handlers are invoked to process the data, usually starting with a user-defined DecodeHandler. Because the handler object is bound to the ChannelSocket, the DecodeHandler can keep a ChannelBuffer member: when parsing finds a packet incomplete, processing stops, and when the next read event fires, parsing continues from the accumulated data. In this scheme, the buffer held by the DecodeHandler bound to the ChannelSocket is usually a dynamic, reusable buffer (DynamicChannelBuffer), while the buffer NioWorker uses to read from the ChannelSocket is a temporarily allocated, fixed-size HeapChannelBuffer. Moving data from one to the other involves one byte copy.
To create ChannelBuffers, Netty internally uses the ChannelBufferFactory interface, with DirectChannelBufferFactory and HeapChannelBufferFactory as the concrete implementations. Application developers can simply use the factory methods in the helper class ChannelBuffers.
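A few of those ChannelBuffers factory methods in action (the buffer sizes and contents are arbitrary):

```java
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.util.CharsetUtil;

public class BufferDemo {
    public static String demo() {
        // Fixed-size heap buffer (big-endian by default).
        ChannelBuffer fixed = ChannelBuffers.buffer(8);
        fixed.writeInt(42);

        // Dynamic buffer that grows as data is written into it.
        ChannelBuffer dynamic = ChannelBuffers.dynamicBuffer();
        dynamic.writeBytes(ChannelBuffers.copiedBuffer("hello", CharsetUtil.UTF_8));

        // readerIndex advances as data is consumed.
        int value = fixed.readInt();
        return value + ":" + dynamic.toString(CharsetUtil.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "42:hello"
    }
}
```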
Channel
http://static.netty.io/3.5/api/org/jboss/netty/channel/package-summary.html


As the class diagram shows, the main responsibilities of a Channel are:
1) exposing the current Channel's status, such as whether it is open or closed;
2) providing the Channel's configuration, obtainable through ChannelConfig;
3) supporting the Channel's IO operations, such as read, write, bind, and connect;
4) giving access to the ChannelPipeline that processes the Channel, through which request-related IO operations can be issued.
In terms of implementation, Netty's NioServerSocketChannel and NioSocketChannel wrap the ServerSocketChannel and SocketChannel of java.nio respectively.

Netty is event-driven: the type of a ChannelEvent determines the direction of the event flow. A ChannelEvent is handled by the Channel's ChannelPipeline, and the ChannelPipeline calls the registered ChannelHandlers to do the actual processing.
For the user, a ChannelHandler implementation usually deals with MessageEvent, a sub-interface of ChannelEvent, calling its getMessage() method to obtain the ChannelBuffer that was read, or the object it has since been converted into.
For event handling, Netty uses the ChannelPipeline to control the event flow and a series of ChannelHandlers registered on it to process the events; this is a typical example of the intercepting filter pattern.

In the filter chain of the event flow, a ChannelUpstreamHandler or ChannelDownstreamHandler can either terminate the flow, or pass the event on by calling ChannelHandlerContext.sendUpstream(ChannelEvent) or ChannelHandlerContext.sendDownstream(ChannelEvent).
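A handler terminates the flow simply by not forwarding the event. A sketch (the "drop empty messages" rule is a hypothetical filtering condition):

```java
import org.jboss.netty.channel.ChannelEvent;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelUpstreamHandler;
import org.jboss.netty.channel.MessageEvent;

// Hypothetical filter: drops empty-string messages, forwards everything else.
public class DropEmptyHandler implements ChannelUpstreamHandler {
    public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e)
            throws Exception {
        if (e instanceof MessageEvent
                && "".equals(((MessageEvent) e).getMessage())) {
            return; // terminate: the event never reaches later handlers
        }
        ctx.sendUpstream(e); // pass the event to the next upstream handler
    }
}
```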


Codec framework
To encode and decode a request protocol, you can always operate directly on the raw bytes in a ChannelBuffer according to the protocol format. On the other hand, Netty also ships several practical codec helpers, briefly introduced here.
1) FrameDecoder: FrameDecoder internally maintains a DynamicChannelBuffer member to accumulate received data. It works as an abstract template that implements the whole decoding skeleton; a subclass only needs to implement the decode() method. FrameDecoder has two direct implementation classes: (1) DelimiterBasedFrameDecoder, a decoder based on a delimiter (such as \r\n), which you can specify in the constructor; (2) LengthFieldBasedFrameDecoder, a decoder based on a length field, suitable when the protocol format is "content length" + content or "fixed header" + "content length" + dynamic content. Its usage is explained in detail in the API docs.
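A sketch of wiring up the two built-in frame decoders (the 8192-byte maximum frame length and the 4-byte length field are arbitrary assumptions):

```java
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

public class FrameDecoderSetup {
    // Frames terminated by \r\n or \n, at most 8192 bytes each.
    public static ChannelPipeline lineBased() {
        ChannelPipeline p = Channels.pipeline();
        p.addLast("framer",
                new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        return p;
    }

    // Frames of the form: 4-byte length field at offset 0, then the content;
    // the final argument strips the length field from the emitted frame.
    public static ChannelPipeline lengthBased() {
        ChannelPipeline p = Channels.pipeline();
        p.addLast("framer",
                new LengthFieldBasedFrameDecoder(8192, 0, 4, 0, 4));
        return p;
    }
}
```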
2) ReplayingDecoder: a subclass of FrameDecoder that lets you write a non-blocking decoder as if it were blocking. That is, with FrameDecoder you must account for the possibility that the data read so far is incomplete, whereas with ReplayingDecoder you can write decode() as if all the data had already arrived.
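A sketch of that difference (the int-prefixed frame format is an assumption): written with ReplayingDecoder, the decoder calls readInt() and readBytes() with no "enough data yet?" checks, because the framework replays the decode() call when more data arrives:

```java
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.replay.ReplayingDecoder;
import org.jboss.netty.handler.codec.replay.VoidEnum;

// Decodes frames of the form: 4-byte length, then that many bytes of payload.
public class IntPrefixedDecoder extends ReplayingDecoder<VoidEnum> {
    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
            ChannelBuffer buffer, VoidEnum state) throws Exception {
        int length = buffer.readInt();   // replays if fewer than 4 bytes so far
        return buffer.readBytes(length); // replays if the payload is incomplete
    }
}
```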
3) ObjectEncoder and ObjectDecoder: a codec pair for serializable Java objects.
