The Reactor Threading Model and Its Application in Netty

Source: Internet
Author: User

Reprint: http://blog.csdn.net/u010853261/article/details/55805216

When we speak of Netty's threading model, the first thing that comes to mind is the classic Reactor threading model. Let's examine the three classic Reactor threading models:

One thing to understand up front: the Reactor threading model is based on synchronous non-blocking IO. Asynchronous non-blocking IO corresponds to the Proactor model.

This article mainly covers:
(1) The Reactor single-threaded model

(2) The Reactor multithreaded model

(3) The master-slave Reactor multithreaded model

(4) Netty's threading model

1. Reactor Single-Threaded Model

In the Reactor single-threaded model, all IO operations are performed on the same NIO thread; that is, IO processing is single-threaded. The NIO thread's responsibilities are:
(1) As a NIO server, accept TCP connections from clients;

(2) As a NIO client, initiate TCP connections to a server;

(3) Read request or response messages from the communication peer;

(4) Send request or response messages to the communication peer.

The reactor single-threaded model diagram looks like this:

Reactor mode uses synchronous non-blocking IO (NIO): no IO operation blocks, so in theory a single thread can handle all IO independently (the selector polls to find which IO operations are ready). Architecturally, one NIO thread can indeed fulfill all these responsibilities: for example, an Acceptor receives a client's TCP connection request, and once the link is established, the corresponding ByteBuffer is dispatched to the designated handler for processing.
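The loop described above can be sketched with plain JDK NIO. This is a minimal, hypothetical illustration (class and method names are mine, not Netty's): a single thread runs the selector loop, acts as the Acceptor, and handles reads by echoing data back.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

// Minimal single-threaded Reactor sketch: one thread does accept + read + write.
public class SingleThreadReactor implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;

    public SingleThreadReactor(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() throws IOException {
        return ((InetSocketAddress) server.getLocalAddress()).getPort();
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select(200);                       // poll for ready IO events
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {               // Acceptor role
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {          // dispatch to handler (here: echo)
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(1024);
                        int n = ch.read(buf);
                        if (n > 0) { buf.flip(); ch.write(buf); }
                        else if (n < 0) { ch.close(); }     // peer closed the link
                    }
                }
                selector.selectedKeys().clear();
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that accepting, reading, and writing all happen on the one thread; this is exactly the property that becomes the bottleneck under load, as described next.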

The single-threaded model works for some small-capacity scenarios, but it is unsuitable for high-load, high-concurrency scenarios, mainly because:
(1) One NIO thread handling tens of thousands of links cannot keep up, even with the CPU at 100% load;

(2) When the NIO thread is overloaded, processing slows down, causing many client connections to time out and retransmit requests; the backlog of pending requests grows, and the thread becomes a performance bottleneck;

(3) Reliability is low: with only one NIO thread, if that thread hangs or enters an infinite loop, the whole system becomes unusable, which is unacceptable.

To address these problems, the Reactor multithreaded model was proposed:

2. Reactor Multithreaded Model

The biggest difference between the Reactor multithreaded model and the single-threaded model is that IO processing is no longer done by one thread but by a pool of NIO threads. The principle is as follows:

The features of the Reactor multithreaded model are as follows:
(1) A dedicated NIO thread, the acceptor thread, listens on the server socket and receives TCP connection requests from clients.

(2) Network IO operations (read and write) are handled by a dedicated NIO thread pool, which can be implemented with a standard JDK thread pool consisting of a task queue and N available threads; these threads are responsible for reading, decoding, encoding, and sending.

(3) One NIO thread can handle N links concurrently, but each link is bound to exactly one NIO thread.
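The division of labor above can be sketched by extending the earlier single-threaded loop with a JDK `ExecutorService` as the IO worker pool (again a hypothetical illustration, not Netty's code): the selector thread only accepts connections and detects readable channels, while pool threads do the read/write work.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Reactor multithreaded sketch: one acceptor/selector thread + a worker pool for IO.
public class MultiThreadReactor implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;
    private final ExecutorService ioPool = Executors.newFixedThreadPool(4); // NIO worker pool

    public MultiThreadReactor(int port) throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() throws IOException {
        return ((InetSocketAddress) server.getLocalAddress()).getPort();
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select(200);
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {              // acceptor thread's job
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        key.interestOps(0);                // pause events while a worker handles this link
                        SocketChannel ch = (SocketChannel) key.channel();
                        ioPool.submit(() -> handle(key, ch)); // hand read/write to the pool
                    }
                }
                selector.selectedKeys().clear();
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private void handle(SelectionKey key, SocketChannel ch) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            int n = ch.read(buf);
            if (n < 0) {
                ch.close();                                // peer closed the link
            } else {
                if (n > 0) { buf.flip(); ch.write(buf); }  // echo back from the worker thread
                key.interestOps(SelectionKey.OP_READ);     // resume read interest
                selector.wakeup();
            }
        } catch (IOException e) {
            try { ch.close(); } catch (IOException ignored) { }
        }
    }
}
```

Pausing the key's interest ops while a worker owns the channel is one simple way to keep each link handled by one thread at a time, matching feature (3) above.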

The Reactor multithreaded model satisfies most scenarios, but a few remain problematic: a single NIO thread still handles all client connection requests, and if connection setup includes authentication (security checks), that thread becomes a performance bottleneck at the scale of millions of clients, because authentication itself consumes CPU. To solve this, a third threading model was devised: the master-slave Reactor threading model.

3. Master-Slave Reactor Multithreaded Model

The main feature of the master-slave Reactor threading model is that the server no longer uses a single NIO thread to receive client connections, but an independent NIO thread pool. After the acceptor receives and processes a client's TCP connection request (possibly including access authentication), it registers the newly created SocketChannel with one IO thread of the IO thread pool (the sub-reactor), which then performs the codec and read/write work. The acceptor thread pool is responsible only for client connection setup and authentication; once the link is established, it is registered with the back-end sub-reactor's IO thread pool. The threading model diagram is as follows:

The master-slave Reactor model solves the problem of a single server listener thread being unable to handle all client connections effectively, and it is the threading model Netty recommends.

4. Threading Model for Netty

Netty's threading model is configured through the parameters of the startup class; with different startup parameters, Netty supports the Reactor single-threaded model, the multithreaded model, and the master-slave Reactor multithreaded model.

The server starts with two NioEventLoopGroup instances, one "boss" and one "worker". They are in fact two independent Reactor thread pools: one receives client TCP connections, and the other handles IO read/write operations, as well as executing system tasks and scheduled tasks.
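The boss/worker setup described above looks roughly like the following sketch (it assumes Netty 4.x on the classpath; the echo handler is purely illustrative):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class NettyBossWorkerServer {
    public static Channel start(int port) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts client connections
        EventLoopGroup worker = new NioEventLoopGroup();  // handles IO read/write
        ServerBootstrap b = new ServerBootstrap();
        b.group(boss, worker)
         .channel(NioServerSocketChannel.class)
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel ch) {
                 ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                     @Override
                     public void channelRead(ChannelHandlerContext ctx, Object msg) {
                         ctx.writeAndFlush(msg);          // echo back on the worker thread
                     }
                 });
             }
         });
        return b.bind(port).sync().channel();
    }
}
```

Passing the same single-threaded group as both arguments to `group(...)` approximates the single-threaded model; one boss thread plus a worker pool gives the multithreaded model; a boss pool plus a worker pool gives the master-slave model.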

The boss thread pool has the following responsibilities:
(1) Receive client connections and initialize Channel parameters
(2) Notify the ChannelPipeline of link state change events

The worker thread pool's responsibilities are:
(1) Asynchronously read datagrams from the communication peer and fire read events into the ChannelPipeline
(2) Asynchronously send messages to the communication peer by calling the ChannelPipeline's message sending interface
(3) Execute system tasks;
(4) Execute scheduled tasks;

By configuring the number of threads in the boss and worker pools and whether the two pools are shared, Netty's threading model can be switched among the single-threaded, multithreaded, and master-slave models.

To improve performance, Netty avoids locks in many places, for example by serializing operations inside each IO thread to sidestep the performance problems caused by multithreaded contention. On the surface this serialization design seems to under-utilize the CPU, but by adjusting the NIO thread pool's thread parameters you can start multiple serialized threads that run in parallel; this locally lock-free serial design performs better.

After Netty's NioEventLoop reads a message, it calls ChannelPipeline.fireChannelRead(Object msg) directly. As long as the user does not explicitly switch threads, the NioEventLoop keeps invoking the user's handlers with no thread switch in between; this serialized design avoids the lock contention of multithreaded operation and is optimal from a performance standpoint.
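The serialization idea can be illustrated with a plain JDK analogy (an assumption of mine, not Netty internals): when all work for a channel runs on one single-threaded executor, tasks execute strictly in submission order and the shared state they touch needs no synchronization.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Lock-free serialization sketch: a single-threaded "event loop" executes
// all tasks for one channel serially, so no locks are needed on its state.
public class SerialEventLoopDemo {
    public static List<Integer> run() throws InterruptedException {
        ExecutorService loop = Executors.newSingleThreadExecutor();
        List<Integer> order = new ArrayList<>(); // safe: touched only by the loop thread
        for (int i = 0; i < 100; i++) {
            int n = i;
            loop.submit(() -> order.add(n));     // tasks run one at a time, in order
        }
        loop.shutdown();
        loop.awaitTermination(5, TimeUnit.SECONDS);
        return order;
    }
}
```

Scaling then comes from running many such independent serial loops in parallel, one per group of channels, rather than from sharing one queue among many locked workers.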

5. Netty Threading Model Setup Best Practices

(1) Create two NioEventLoopGroup instances to isolate the NIO acceptor thread from the NIO IO threads.

(2) Try not to start user threads inside a ChannelHandler (except to dispatch decoded POJO messages to a back-end business thread pool).

(3) Do decoding in the handler invoked by the NIO thread, not in a user thread.

(4) If the IO operations are simple and involve no complex business logic computation and no potentially blocking disk, database, or network operations, complete the business logic in the handler on the NIO thread without switching to a user thread.

(5) If the IO business operations are complex, do not complete them on the NIO thread: blocking there can make the NIO thread appear dead and severely degrade performance. Instead, encapsulate the POJO as a task and dispatch it to the business thread pool; letting the business threads do the processing frees the NIO thread as quickly as possible to handle other IO operations.
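Practice (5) can be sketched with a plain JDK thread pool (class and method names here are hypothetical, not a Netty API): the method called on the IO thread wraps the decoded POJO in a task, submits it to the business pool, and returns immediately.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of offloading business work from the IO thread to a business pool.
public class BusinessDispatch {
    private final ExecutorService businessPool = Executors.newFixedThreadPool(4);

    // Called on the IO thread after decoding; returns immediately so the
    // IO thread is never blocked by slow business logic.
    public Future<String> channelRead(String pojo) {
        return businessPool.submit(() -> {
            // potentially slow work (DB call, computation) runs off the IO thread
            return "processed:" + pojo + " on " + Thread.currentThread().getName();
        });
    }

    public void shutdown() {
        businessPool.shutdown();
    }
}
```

In real Netty code the same effect can be had by submitting to an executor from within `channelRead`, or by adding the handler with a separate `EventExecutorGroup`; the key point is that the slow path never runs on the NIO thread.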

