Netty Server Threading Model Overview

Source: Internet
Author: User

Everything starts with the ServerBootstrap.
ServerBootstrap is responsible for initializing the Netty server and starting to listen for socket connections on a port.

ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),   // boss thread pool
                Executors.newCachedThreadPool())); // worker thread pool
bootstrap.setPipelineFactory(new HttpChannelPipelineFactory());
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.bind(new InetSocketAddress(httpPort)); // bind the port and start listening

ServerBootstrap is instantiated with a ServerSocketChannelFactory. There are two implementations to choose from: NioServerSocketChannelFactory and OioServerSocketChannelFactory. The former uses NIO; the latter uses ordinary blocking I/O. Both require two thread pool instances as constructor parameters: one for the boss threads and one for the worker threads.

ServerBootstrap.bind(int) is responsible for binding the port; once this method executes, the ServerBootstrap can accept socket connections on the specified port. A ServerBootstrap can bind multiple ports.

Boss threads and worker threads
Each port a ServerBootstrap listens on corresponds to one boss thread; the mapping is one-to-one. For example, if you need Netty to listen on ports 80 and 443, there will be two boss threads, each handling socket requests from one of the two ports. After a boss thread accepts a socket connection, a channel is created (an open socket corresponds to an open channel), and the channel is handed to the ServerSocketChannelFactory specified when the ServerBootstrap was initialized; the boss thread then goes back to handling further socket requests.
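The one-boss-thread-per-port idea can be sketched with plain JDK sockets. This is a toy illustration of the division of labor, not Netty's actual classes; all names here are made up for the sketch:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class BossPerPortSketch {
    // One shared worker pool, like Netty's worker thread pool.
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // One boss thread per listening port: it does nothing but accept
    // connections, hands each one to the worker pool, and keeps accepting.
    public Thread startBoss(ServerSocket serverSocket, Consumer<Socket> handler) {
        Thread boss = new Thread(() -> {
            while (!serverSocket.isClosed()) {
                try {
                    Socket socket = serverSocket.accept();          // boss only accepts
                    workers.submit(() -> handler.accept(socket));   // worker does the I/O
                } catch (IOException e) {
                    break; // server socket closed
                }
            }
        }, "boss-" + serverSocket.getLocalPort());
        boss.setDaemon(true);
        boss.start();
        return boss;
    }
}
```

Listening on two ports simply means creating two ServerSockets and starting two boss threads that share the same worker pool.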

The ServerSocketChannelFactory finds a worker thread from the worker thread pool to continue processing the request.
With OioServerSocketChannelFactory, all messages on a channel, from the moment it opens until the channel (socket) closes, are handled by that one particular worker thread. In other words, an open socket is bound to a specific worker thread; as long as the socket is not closed, that worker thread handles messages only for that socket and cannot serve any other socket.

With NioServerSocketChannelFactory, each worker thread can serve different sockets (channels); there is no longer a one-to-one correspondence between worker threads and channels.
Clearly, NioServerSocketChannelFactory needs only a small number of active worker threads to handle many channels well, whereas OioServerSocketChannelFactory needs as many worker threads as there are open channels.

Threads are a resource, so when a Netty server needs to handle long-lived connections it is best to choose NioServerSocketChannelFactory, which avoids creating a large number of worker threads. When Netty is used as an HTTP server it is also best to choose NioServerSocketChannelFactory, because modern browsers use HTTP keep-alive (which lets different HTTP requests from the same browser share one channel), which is also a form of long-lived connection.

Lifecycle of worker threads
When a channel has a message arriving, or a message needs to be written to the socket, a worker thread is taken out of the thread pool. In the worker thread, the message is processed by the configured ChannelPipeline. A ChannelPipeline is an ordered chain of filters, divided into two kinds: UpstreamHandlers and DownstreamHandlers. This article focuses on Netty's threading model, so the contents of the pipeline are described only briefly.


A message sent by a client is first processed by a series of UpstreamHandlers, and the resulting data is fed into the application's business-logic handler, which is typically implemented by extending SimpleChannelUpstreamHandler.

public class MyBusinessHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        // business logic starts here
    }
}


With NIO, when the messageReceived() method finishes executing (assuming no exception was thrown), the worker thread is done and is returned to the thread pool. The business-logic handler passes the returned data through a specified sequence of DownstreamHandlers; if necessary, the processed data is written to the channel and sent to the client via the bound socket. This write path is likewise carried out by a worker thread taken from the worker thread pool.

With OIO, the whole process, from start to finish, is handled by the one designated worker thread.

Reducing the processing time of worker threads
Worker threads are a resource managed and provisioned internally by Netty, so it is best to let a worker thread finish as quickly as possible and return it to the pool for reuse. Most of a worker thread's time is spent in the various handlers of the ChannelPipeline, and among these the most expensive is usually the handler in charge of the application's business logic, typically the last UpstreamHandler. By handing this part of the processing over to another thread, we can effectively shorten the worker thread's cycle time. There are generally two approaches:

Have the messageReceived() method open a new thread to handle the business logic


public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    ...
    new Thread(...).start();
}

A new thread is opened in messageReceived() to continue processing the business logic, and the worker thread finishes as soon as messageReceived() returns. A more elegant approach is to construct a separate thread pool and submit the business-logic processing task to it.
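The thread-pool variant can be sketched with java.util.concurrent alone. The class and message types below are simplified stand-ins invented for illustration, not Netty's API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BusinessExecutor {
    // A dedicated pool for business logic; the I/O (worker) thread only
    // submits the task here and returns immediately to the pool.
    private static final ExecutorService BUSINESS_POOL = Executors.newFixedThreadPool(8);

    // Would be called from messageReceived(): hand the slow work off at once.
    public static Future<String> handle(String message) {
        return BUSINESS_POOL.submit(() -> {
            // ... time-consuming business logic runs here, off the worker thread ...
            return "processed:" + message;
        });
    }
}
```

Compared with spawning a raw Thread per message, the pool bounds the number of business threads and reuses them.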

Use the ExecutionHandler provided by the Netty framework
Basic usage:


public class DatabaseGatewayPipelineFactory implements ChannelPipelineFactory {

    private final ExecutionHandler executionHandler;

    public DatabaseGatewayPipelineFactory(ExecutionHandler executionHandler) {
        this.executionHandler = executionHandler;
    }

    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
                new DatabaseGatewayProtocolEncoder(),
                new DatabaseGatewayProtocolDecoder(),
                executionHandler,                // multiple pipelines must share the same ExecutionHandler
                new DatabaseQueryingHandler());  // business-logic handler, I/O intensive
    }
}

Place the shared ExecutionHandler instance before the business-logic handler; note that the ExecutionHandler must be shared among the different pipelines. Its role is to automatically take a thread from the pool that the ExecutionHandler itself manages and use it to run the business-logic handler behind it. The worker thread finishes at the ExecutionHandler and is returned to the ChannelFactory's worker thread pool.

It is constructed via ExecutionHandler(Executor executor); this Executor is clearly the thread pool that the ExecutionHandler manages internally. Netty provides two additional thread pool implementations for this purpose: MemoryAwareThreadPoolExecutor and OrderedMemoryAwareThreadPoolExecutor, both in the org.jboss.netty.handler.execution package.
MemoryAwareThreadPoolExecutor ensures the JVM does not run into out-of-memory errors because of an excessive task backlog. OrderedMemoryAwareThreadPoolExecutor is a subclass of it which, in addition to preventing memory overflow, also guarantees the processing order of events within a channel. See the API documentation for a more detailed description.

Netty Server Start-up steps:
Code:


// Construct a server-side bootstrap instance, passing a ChannelFactory
// implementation to the constructor; the two thread pools are the boss
// and worker pools, respectively.
ServerBootstrap serverBootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool()));

// Register the user's ChannelPipelineFactory
serverBootstrap.setPipelineFactory(this.pipelineFactory);

// Call bind and wait for clients to connect
serverBootstrap.bind(socketAddress);


Netty offers two models, NIO and BIO; our primary concern is the NIO model.
NIO processing flow:
1. Netty uses a boss thread to handle client connections and create the Channel.
2. A worker thread is taken from the worker pool (the number of worker threads defaults to twice the number of CPU cores) and given the channel instance the boss created (the channel instance holds the underlying Java network object).
3. The worker thread reads the data in (into a ChannelBuffer).
4. It then triggers the corresponding event and passes it to the ChannelPipeline for business processing (the ChannelPipeline contains a chain of user-defined ChannelHandlers).

One thing to note: the entire ChannelHandler chain executes serially on the worker thread. If the business logic (such as a DB operation) is time-consuming, the worker thread will be occupied for a long time, ultimately hurting the server's overall concurrency. So in general we make the ChannelHandler call chain asynchronous through an ExecutionHandler backed by a thread pool, allowing the worker thread to be freed when it reaches the ExecutionHandler.
To solve this problem, add the following code:

ExecutionHandler executionHandler =
        new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

public ChannelPipeline getPipeline() {
    return Channels.pipeline(
            new DatabaseGatewayProtocolEncoder(),
            new DatabaseGatewayProtocolDecoder(),
            executionHandler,  // must be shared
            new DatabaseQueryingHandler());
}


Netty provides two optional thread pool models for the ExecutionHandler:
1) MemoryAwareThreadPoolExecutor
Controls memory usage within the thread pool: it caps the total size of tasks pending in the executor (when the cap is reached, subsequent submissions block) and caps the backlog of any single channel, preventing out-of-memory errors.
2) OrderedMemoryAwareThreadPoolExecutor
A subclass of 1). In addition to the functionality of MemoryAwareThreadPoolExecutor, it also guarantees the ordering of the event stream within a single channel, mainly to control the out-of-order errors that can occur when events are processed asynchronously. It does not, however, guarantee that events of the same channel are all executed on one thread, and there is no need to guarantee that.
Let's look at the diagram in the OrderedMemoryAwareThreadPoolExecutor Javadoc:

Thread X: --- Channel A (Event A1) --.   .-- Channel B (Event B2) --- Channel B (Event B3) --->
                                      \ /
                                       X
                                      / \
Thread Y: --- Channel B (Event B1) --'   '-- Channel A (Event A2) --- Channel A (Event A3) --->

Events of the same channel are executed serially, but consecutive events of one channel may be dispatched to different threads in the pool; events of different channels are processed concurrently and do not affect each other.
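This "serial per channel, concurrent across channels" contract can be modeled with plain JDK executors. The class below is only a toy illustration of the guarantee, not Netty's implementation:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PerChannelSerializer {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    // Tail of the task chain per channel: each new task runs only after the
    // previous task for the same channel has completed, possibly on a
    // different pool thread -- exactly the ordered-executor contract.
    private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    public synchronized CompletableFuture<Void> submit(String channelId, Runnable task) {
        CompletableFuture<Void> next = tails
                .getOrDefault(channelId, CompletableFuture.completedFuture(null))
                .thenRunAsync(task, pool);
        tails.put(channelId, next);
        return next;
    }
}
```

Tasks submitted under different channel ids are not chained together, so they may run concurrently.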

Now look at the diagram in the MemoryAwareThreadPoolExecutor Javadoc:

Thread X: --- Channel A (Event 2) --- Channel A (Event 1) --------------------------->
Thread Y: --- Channel A (Event 3) --- Channel B (Event 2) --- Channel B (Event 3) --->
Thread Z: --- Channel B (Event 1) --- Channel B (Event 4) --- Channel A (Event 4) --->

Here the processing order of events on the same channel is not guaranteed: one thread may be processing Channel A (Event 3) while another thread is processing Channel A (Event 2). If the business does not require events to be handled in order, I think it is better to use MemoryAwareThreadPoolExecutor whenever possible.

Netty uses the standard SEDA (Staged Event-Driven Architecture)
The core idea of SEDA is to divide request processing into stages; stages that consume different resources use different numbers of threads, and stages communicate asynchronously through an event-driven model. Furthermore, the number of threads in each stage can be configured dynamically, allowing graceful degradation or denial of service under overload.
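The staged idea can be sketched as two stages connected by a bounded queue, each owning its own independently sizable thread pool. Stage names and the "decode = trim" step are purely illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TwoStageSketch {
    // Each stage owns a pool whose size can be tuned independently.
    private final ExecutorService decodeStage = Executors.newFixedThreadPool(2);
    private final ExecutorService businessStage = Executors.newFixedThreadPool(4);
    // The bounded queue between stages is what enables back-pressure
    // (degraded service or rejection) under overload.
    private final BlockingQueue<String> decoded = new ArrayBlockingQueue<>(100);

    public Future<String> process(String raw) {
        // Stage 1: "decode" the raw input, then enqueue the result
        // (offer drops the event if the queue is full -- overload handling).
        decodeStage.submit(() -> decoded.offer(raw.trim()));
        // Stage 2: dequeue a decoded event and run the business logic.
        return businessStage.submit(() -> "handled:" + decoded.take());
    }
}
```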

The event types Netty defines represent the various stages of a network interaction; as each stage occurs, the corresponding event is triggered and handed to the ChannelPipeline for processing. Event handling begins with calls to static methods in the Channels class.

Static event-flow methods in Channels:
1. fireChannelOpen
2. fireChannelBound
3. fireChannelConnected
4. fireMessageReceived
5. fireWriteCompleteLater
6. fireWriteComplete
7. fireChannelInterestChangedLater
8. fireChannelDisconnectedLater
9. fireChannelDisconnected
10. fireChannelUnboundLater
11. fireChannelUnbound
12. fireChannelClosedLater
13. fireChannelClosed
14. fireExceptionCaughtLater
15. fireExceptionCaught
16. fireChildChannelStateChanged

Netty divides network events into two kinds:
1. Upstream: events fed back from the network layer to Netty, such as messageReceived, channelConnected.
2. Downstream: events initiated by the framework or application itself, such as bind, write, connect, etc.

Netty's ChannelHandlers fall into 3 categories:
1. Handling only upstream events: implement the ChannelUpstreamHandler interface.
2. Handling only downstream events: implement the ChannelDownstreamHandler interface.
3. Handling both upstream and downstream events: implement both the ChannelUpstreamHandler and ChannelDownstreamHandler interfaces.
The ChannelPipeline maintains an ordered list of all ChannelHandlers; when an upstream or downstream network event occurs, it calls the ChannelHandlers matching that event type. Each ChannelHandler can itself control whether the event flows on to the next ChannelHandler in the chain (via ctx.sendUpstream(e) or ctx.sendDownstream(e)). The benefit is, for example, that a business-data decoder need not pass illegal data on to the next ChannelHandler.
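The flow-control point (a handler deciding whether the event continues down the chain) can be modeled with a minimal toy pipeline. The interfaces below are simplified stand-ins invented for illustration, not Netty's real API:

```java
import java.util.ArrayList;
import java.util.List;

public class ToyPipeline {
    // Simplified stand-in for an upstream handler: return the (possibly
    // transformed) message to continue down the chain, or null to stop it.
    public interface Handler {
        String handle(String msg);
    }

    private final List<Handler> handlers = new ArrayList<>();

    public ToyPipeline add(Handler h) {
        handlers.add(h);
        return this;
    }

    // Like the pipeline's upstream dispatch: a handler returning non-null
    // plays the role of ctx.sendUpstream(e); returning null short-circuits,
    // so illegal data never reaches the business handler.
    public String fire(String msg) {
        for (Handler h : handlers) {
            msg = h.handle(msg);
            if (msg == null) return null;
        }
        return msg;
    }
}
```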


More content http://san-yun.iteye.com/category/158002

Various performance problems http://san-yun.iteye.com/category/235285
