Analysis of Netty principle

1. Netty Introduction

Netty is a high-performance, asynchronous, event-driven NIO framework built on the APIs provided by Java NIO. It supports TCP, UDP, and file transfer. As an asynchronous NIO framework, all of Netty's IO operations are asynchronous and non-blocking: through the Future-listener mechanism, the user can conveniently obtain the result of an IO operation, either actively or by notification. As the most popular NIO framework, Netty is widely used in Internet services, big data and distributed computing, the game industry, and the communication industry, and a number of well-known open source components are also built on top of the Netty NIO framework.
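
As an illustration of the Future-listener mechanism mentioned above, here is a minimal sketch (assuming Netty 4.x; the channel and message are supplied by the caller and are not part of the original text):

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.ChannelFutureListener;

    // Minimal sketch: writeAndFlush() returns immediately with a ChannelFuture,
    // and the listener is notified asynchronously when the IO operation completes.
    public final class FutureListenerSketch {
        public static void writeAsync(Channel channel, Object msg) {
            ChannelFuture future = channel.writeAndFlush(msg); // non-blocking call
            future.addListener((ChannelFutureListener) f -> {
                if (f.isSuccess()) {
                    System.out.println("write completed");
                } else {
                    f.cause().printStackTrace();
                }
            });
        }
    }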

2. Netty Threading Model

In Java NIO, the Selector provides the foundation for the Reactor pattern, and Netty combines the Selector with the Reactor pattern to design an efficient threading model. Let us first look at the Reactor pattern:

2.1 Reactor Pattern

Wikipedia explains the Reactor pattern as follows: "The reactor design pattern is an event handling pattern for handling service requests delivered concurrently to a service handler by one or more inputs. The service handler then demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers." In other words, the Reactor pattern is event-driven: there are one or more concurrent input sources, one service handler, and multiple request handlers; the service handler synchronously demultiplexes the incoming requests and dispatches them to the appropriate request handlers. The structure can be illustrated as follows:

Structurally, this is similar to the producer-consumer model, in which one or more producers put events into a queue and one or more consumers actively poll events from that queue. The Reactor pattern, however, has no queue to buffer events: whenever an event arrives at the service handler, the service handler actively dispatches it to the corresponding request handler according to the event type.
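
A minimal single-threaded reactor loop in plain Java NIO may make the pattern concrete; this sketch is illustrative only (the port number is arbitrary and the read handler is reduced to a stub):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // The Selector demultiplexes incoming events; the loop (the service handler)
    // dispatches each event synchronously to the matching request handler.
    public final class ReactorSketch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.configureBlocking(false);
            server.bind(new InetSocketAddress(8080));
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select();                              // wait for events
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {                   // new connection -> accept handler
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {              // data ready -> read handler
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(1024);
                        if (client.read(buffer) < 0) {
                            client.close();                     // peer closed the connection
                        }
                        // a real request handler would process the buffer here
                    }
                }
            }
        }
    }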

2.2 Implementation of the Reactor Pattern

Regarding how to build the Reactor pattern with Java NIO, Doug Lea gives an excellent explanation of the Reactor implementations in his "Scalable IO in Java" slides; the diagrams below are taken from that presentation.

1. The first implementation model is as follows:

This is the simplest Reactor single-threaded model. Because the Reactor pattern uses asynchronous non-blocking IO, no IO operation blocks, so in theory one thread can handle all IO operations independently. The reactor thread is then a jack-of-all-trades: it is responsible for demultiplexing sockets, accepting new connections, and dispatching requests to the processing chain.
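
In Netty terms, the single-threaded model roughly corresponds to using one single-threaded EventLoopGroup for both accepting connections and performing IO. The following is a hypothetical Netty 4.x configuration sketch; the port and the LoggingHandler are placeholders:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.logging.LoggingHandler;

    // One reactor thread accepts connections and performs all IO.
    public final class SingleThreadReactorServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup group = new NioEventLoopGroup(1);    // a single reactor thread
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(group)                                  // same thread for accept and IO
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         ch.pipeline().addLast(new LoggingHandler()); // placeholder handler
                     }
                 });
                b.bind(8080).sync().channel().closeFuture().sync();  // port is arbitrary
            } finally {
                group.shutdownGracefully();
            }
        }
    }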

For some small-capacity scenarios, the single-threaded model can be used. However, it is not suitable for applications with high load and high concurrency, mainly for the following reasons:

    1. When a single NIO thread handles hundreds or thousands of connections at the same time, it cannot keep up: even if the NIO thread's CPU load reaches 100%, it cannot process all the messages.
    2. When the NIO thread is overloaded, its processing slows down, causing a large number of client connections to time out; clients tend to retransmit after a timeout, which further aggravates the load on the NIO thread.
    3. Low reliability: if the single thread unexpectedly enters an infinite loop or dies, the entire communication system becomes unavailable.

To solve these problems, the Reactor multithreaded model was introduced.

2. Reactor Multithreaded Model:

Compared with the previous model, this model uses multithreading (a thread pool) in the processing chain.
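
A hedged sketch of how this model can be expressed with Netty 4.x follows: one acceptor thread, plus a pool of NIO threads that carries the processing chain (thread counts, port, and handler are illustrative):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.logging.LoggingHandler;

    // One thread accepts connections; a pool of NIO threads handles read/write
    // and runs the handlers in the processing chain.
    public final class MultiThreadReactorServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup acceptorGroup = new NioEventLoopGroup(1); // single acceptor thread
            EventLoopGroup ioGroup = new NioEventLoopGroup();        // defaults to 2 * CPU cores
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(acceptorGroup, ioGroup)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         ch.pipeline().addLast(new LoggingHandler()); // placeholder handler
                     }
                 });
                b.bind(8080).sync().channel().closeFuture().sync();  // port is arbitrary
            } finally {
                ioGroup.shutdownGracefully();
                acceptorGroup.shutdownGracefully();
            }
        }
    }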

In most scenarios, this model meets the performance requirements. However, in some special scenarios, for example when the server must perform security authentication on the clients' handshake messages, a single acceptor thread may run into performance problems. To solve these problems, the third Reactor threading model was produced.

3. Reactor Master-Slave Model

Compared with the second model, this model divides the Reactor into two parts. The main reactor is responsible for listening on the server socket and accepting new connections, and assigns the established socket channels to the sub-reactors. The sub-reactors are responsible for demultiplexing the connected sockets and reading and writing network data; business processing is handed off to a worker thread pool. Typically, the number of sub-reactors equals the number of CPUs.
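
The master-slave division can be approximated in Netty 4.x as follows; this is only a sketch, with a DefaultEventExecutorGroup standing in for the business worker thread pool and all sizes chosen arbitrarily:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.logging.LoggingHandler;
    import io.netty.util.concurrent.DefaultEventExecutorGroup;
    import io.netty.util.concurrent.EventExecutorGroup;

    // The boss group plays the main reactor, the worker group plays the sub-reactors,
    // and the executor group carries the business processing off the IO threads.
    public final class MasterSlaveReactorServer {
        public static void main(String[] args) throws InterruptedException {
            EventLoopGroup bossGroup = new NioEventLoopGroup(1);                  // main reactor
            EventLoopGroup workerGroup = new NioEventLoopGroup();                 // sub-reactors
            EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(16); // business pool
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(bossGroup, workerGroup)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new ChannelInitializer<SocketChannel>() {
                     @Override
                     protected void initChannel(SocketChannel ch) {
                         // handlers registered with businessGroup run outside the IO thread
                         ch.pipeline().addLast(businessGroup, new LoggingHandler());
                     }
                 });
                b.bind(8080).sync().channel().closeFuture().sync();               // port is arbitrary
            } finally {
                businessGroup.shutdownGracefully();
                workerGroup.shutdownGracefully();
                bossGroup.shutdownGracefully();
            }
        }
    }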

2.3 Netty Model

Section 2.2 described three Reactor models. Which one does Netty use? In fact, Netty's threading model is a variant of the Reactor master-slave model: the third model with the business thread pool removed. This is also the default mode of Netty NIO. The participants of the Reactor pattern in Netty are the following components:

    1. Selector
    2. EventLoopGroup/EventLoop
    3. ChannelPipeline

Selector is the SelectableChannel multiplexer provided by NIO; it plays the role of the demultiplexer and is not covered further here. The following describes the other two components and their roles in Netty's Reactor model.

3. EventLoopGroup/EventLoop

When a system is running, frequent thread context switches cause additional performance loss. When a business flow is executed concurrently by multiple threads, business developers must also stay constantly alert to thread safety: which data can be modified concurrently, and how should it be protected? This not only reduces development efficiency but also brings additional performance loss.

To solve the above problems, Netty adopts a serialized design: from reading the message, through encoding, and on to the execution of subsequent handlers, everything is always handled by the same IO thread, the EventLoop. This means the whole flow never switches thread context, and the data never faces the risk of concurrent modification. This also explains why Netty's threading model removes the thread pool from the Reactor master-slave model.

EventLoopGroup is an abstraction over a group of EventLoops. EventLoopGroup provides a next() interface that selects one EventLoop from the group according to a certain rule to handle a task. What you need to know about EventLoopGroup in Netty server programming is that two EventLoopGroups usually work together: a boss EventLoopGroup and a worker EventLoopGroup. Typically one service port, i.e. one ServerSocketChannel, corresponds to one Selector and one EventLoop thread, which means the thread count of the boss EventLoopGroup can be set to 1. The boss EventLoop is responsible for accepting client connections and handing the resulting SocketChannel to the worker EventLoopGroup for IO processing.
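
The next() interface mentioned above can be illustrated with a small, self-contained sketch (the group size is arbitrary; the default selection rule is round-robin-like):

    import io.netty.channel.EventLoop;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;

    // next() hands out one EventLoop from the group; tasks submitted to that
    // EventLoop all execute on its single thread.
    public final class EventLoopNextSketch {
        public static void main(String[] args) {
            EventLoopGroup workerGroup = new NioEventLoopGroup(4); // 4 IO threads (arbitrary)
            EventLoop loop = workerGroup.next();                   // pick one EventLoop
            loop.execute(() ->
                System.out.println("task runs on " + Thread.currentThread().getName()));
            workerGroup.shutdownGracefully();                      // pending tasks still run
        }
    }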

The EventLoop implementation acts as the dispatcher in the Reactor pattern.

4. ChannelPipeline

ChannelPipeline plays the role of the request handler in the Reactor pattern.

The default implementation of ChannelPipeline is DefaultChannelPipeline. DefaultChannelPipeline itself maintains a tail and a head ChannelHandler that are invisible to the user; they sit at the two ends of the linked-list queue, with tail at the upper (application) end and head toward the network layer. Netty defines two important ChannelHandler interfaces: ChannelInboundHandler and ChannelOutboundHandler. Inbound can be understood as network data flowing from the outside into the system, while outbound can be understood as data flowing from the system to the outside. A user-defined ChannelHandler can implement one or both of these interfaces as needed and is put into the pipeline's linked-list queue; ChannelPipeline then finds the corresponding handlers according to the type of the IO event. The linked-list queue is a variant of the chain of responsibility pattern: an event is handled, from one end of the list to the other, by every handler that matches the event.

ChannelInboundHandler processes the packets sent from the client to the server and is typically used for half-packet/sticky-packet handling, decoding, reading data, and business processing. ChannelOutboundHandler processes the messages sent from the server to the client and is typically used for encoding and sending messages to the client.
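
The following sketch shows how hypothetical inbound and outbound handlers are assembled into a pipeline; only the addLast/fire mechanics come from Netty, while the handler names and logic are illustrative:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.ChannelPromise;
    import io.netty.channel.socket.SocketChannel;

    // Inbound events travel from head toward tail; outbound operations travel from tail toward head.
    public final class PipelineSketch extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            ch.pipeline()
              .addLast("inboundDecode", new ChannelInboundHandlerAdapter() {
                  @Override
                  public void channelRead(ChannelHandlerContext ctx, Object msg) {
                      // ... decode / business processing ...
                      ctx.fireChannelRead(msg);   // hand the event to the next inbound handler
                  }
              })
              .addLast("outboundEncode", new ChannelOutboundHandlerAdapter() {
                  @Override
                  public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
                      // ... encode ...
                      ctx.write(msg, promise);    // pass the write on toward the network (head)
                  }
              });
        }
    }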

The following figure describes the ChannelPipeline execution process:

More information about the pipeline can be found in: A brief introduction to the pipeline model (Pipeline)

5. Buffer

The buffer extension provided by Netty has many advantages over NIO's buffer. As a very important part of the data-access path, let's look at the features of Netty's buffer.

1. ByteBuf read and write pointers

    • In ByteBuffer, reads and writes share a single position pointer, whereas in ByteBuf reads and writes use two pointers, readerIndex and writerIndex. At first glance ByteBuffer seems to achieve the function of two pointers with only one, saving a variable; however, flip() must be called whenever the ByteBuffer switches between reading and writing, the contents of the buffer must be read before the next write, and then clear() must be called. Calling flip() before every read and clear() before every write adds tedious steps to development, and the rule that content cannot be written before the existing content has been read makes it very inflexible. In contrast, ByteBuf relies only on the readerIndex pointer when reading and only on the writerIndex pointer when writing; no method needs to be called before each read or write, and there is no such restriction.
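
The contrast between the two pointer schemes can be seen in a short sketch (buffer sizes and values are arbitrary):

    import java.nio.ByteBuffer;

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    // ByteBuffer shares one position pointer between reads and writes and therefore
    // needs flip()/clear(); ByteBuf keeps readerIndex and writerIndex separate.
    public final class BufferPointerSketch {
        public static void main(String[] args) {
            ByteBuffer nioBuf = ByteBuffer.allocate(16);
            nioBuf.put((byte) 1);
            nioBuf.flip();                        // switch from writing to reading
            byte fromNio = nioBuf.get();
            nioBuf.clear();                       // switch back before writing again

            ByteBuf nettyBuf = Unpooled.buffer(16);
            nettyBuf.writeByte(1);                // advances writerIndex only
            byte fromNetty = nettyBuf.readByte(); // advances readerIndex only
            nettyBuf.writeByte(2);                // keep writing, no mode switch needed
            System.out.println(fromNio + " " + fromNetty);
        }
    }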

2. Zero copy

    • Netty receives and sends ByteBuffers using DIRECT BUFFERS, i.e. off-heap direct memory, for socket reads and writes, so no second copy of the byte buffer is needed. If traditional heap buffers were used for socket reads and writes, the JVM would copy the heap buffer into direct memory before writing it to the socket. Compared with off-heap direct memory, the message goes through one extra memory copy of the buffer during sending.
    • Netty provides a composite buffer object that can aggregate multiple ByteBuf objects, allowing the user to operate on the combined buffer as conveniently as on a single buffer, avoiding the traditional approach of merging several small buffers into one large buffer through a memory copy (see the sketch after this list).
    • Netty's file transfer uses the transferTo method, which can send the data in a file buffer directly to the target Channel, avoiding the memory copies caused by the traditional write-in-a-loop approach.
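
A sketch of the composite-buffer idea referenced above (assuming Netty 4.1; the string contents are arbitrary):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.CompositeByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.util.CharsetUtil;

    // Two buffers are aggregated logically; no bytes are copied into a larger buffer.
    public final class CompositeBufferSketch {
        public static void main(String[] args) {
            ByteBuf header = Unpooled.wrappedBuffer("HEADER".getBytes(CharsetUtil.US_ASCII));
            ByteBuf body = Unpooled.wrappedBuffer("BODY".getBytes(CharsetUtil.US_ASCII));

            CompositeByteBuf message = Unpooled.compositeBuffer();
            message.addComponents(true, header, body);    // index bookkeeping only, no copy

            // The composite can be read like an ordinary ByteBuf.
            System.out.println(message.toString(CharsetUtil.US_ASCII));
            message.release();                            // also releases the components
        }
    }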

3. Reference counting and pooling technology

    • In Netty, every allocated buffer may be a valuable resource. To gain more control over memory allocation and reclamation, Netty implements its own memory management based on reference counting. Netty's use of buffers is based on direct memory (DirectBuffer), which greatly improves the efficiency of IO operations. However, apart from its high IO efficiency, DirectBuffer has a natural disadvantage compared with HeapBuffer: allocating a DirectBuffer is less efficient than allocating a HeapBuffer. Netty therefore combines reference counting with pooling (PooledBuffer): when a buffer's reference count reaches 0, Netty reclaims the buffer into the pool, and the next time a buffer is requested it can be reused without a new allocation.
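
Reference counting and pooling can be demonstrated with a short sketch (the buffer size is arbitrary):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.PooledByteBufAllocator;

    // The pooled allocator hands out a direct buffer; when the reference count
    // drops to 0 the buffer goes back to the pool instead of being freed.
    public final class RefCountSketch {
        public static void main(String[] args) {
            ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256);
            System.out.println(buf.refCnt());   // 1 after allocation
            buf.retain();                       // a second owner, refCnt becomes 2
            buf.release();                      // back to 1
            buf.release();                      // reaches 0, buffer is returned to the pool
            System.out.println(buf.refCnt());   // 0, the buffer must not be used any more
        }
    }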

Summary

Netty is essentially an implementation of the Reactor pattern, with the Selector acting as the multiplexer, the EventLoop as the dispatcher, and the ChannelPipeline as the event handler. Unlike the general Reactor pattern, however, Netty uses a serialized design, and the chain of responsibility pattern is used in the pipeline.

The buffer in Netty has also been optimized relative to the buffer in NIO, which greatly improves performance.
