- Technical point Description
There is a lot of multithreading-related code in Netty (the Netty framework itself is built around an asynchronous processing mechanism); this article is a detailed explanation of the functionality of the execution package only.
The following is the directory structure for the entire package:
The call relationships in the package are as follows:
- Implementation scenarios
- Reference Source Package
The following is an analysis of the source code in this package (note that the last four classes discussed are the most important classes in the package).
- ChannelEventRunnableFilter
This interface defines a single abstract method:
boolean filter(ChannelEventRunnable event);
It returns true if the incoming event should be handled by the executor.
- ChannelUpstreamEventRunnableFilter
This class implements the ChannelEventRunnableFilter interface and checks whether an incoming ChannelEventRunnable instance is a ChannelUpstreamEventRunnable.
- ChannelDownstreamEventRunnableFilter
This class implements the ChannelEventRunnableFilter interface and checks whether an incoming ChannelEventRunnable instance is a ChannelDownstreamEventRunnable.
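For illustration, such a filter boils down to a single instanceof check. The following is a minimal sketch of the downstream variant; the class name DownstreamOnlyFilter is made up for this example and is not the Netty class itself:
import org.jboss.netty.handler.execution.ChannelDownstreamEventRunnable;
import org.jboss.netty.handler.execution.ChannelEventRunnable;
import org.jboss.netty.handler.execution.ChannelEventRunnableFilter;

// Minimal sketch of a filter that accepts only downstream event runnables
// (illustrative; not the Netty implementation itself).
public class DownstreamOnlyFilter implements ChannelEventRunnableFilter {
    public boolean filter(ChannelEventRunnable event) {
        // true means "let the current executor handle this task"
        return event instanceof ChannelDownstreamEventRunnable;
    }
}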
- ChannelEventRunnable
This abstract class implements two interfaces: Runnable and EstimatableObjectWrapper.
The class contains a run() method:
public final void run() {
    try {
        PARENT.set(executor);
        doRun();
    } finally {
        PARENT.remove();
    }
}

protected abstract void doRun();
Here PARENT is a ThreadLocal<Executor> variable, used only internally; it records which executor the current worker thread is running tasks for.
doRun() is an abstract method that must be implemented differently depending on the direction of the event (typically re-publishing upstream data with sendUpstream or downstream data with sendDownstream).
The classes that implement doRun() are ChannelUpstreamEventRunnable and ChannelDownstreamEventRunnable.
- ChannelUpstreamEventRunnable
Inherits from the ChannelEventRunnable abstract class and overrides the doRun() method.
- ChannelDownstreamEventRunnable
Inherits from the ChannelEventRunnable abstract class and overrides the doRun() method.
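Conceptually, the two overrides simply forward the stored event back into the pipeline in the corresponding direction. The following is a simplified, self-contained sketch of that pattern; it is not the Netty source, and the field names ctx and e are assumptions:
import org.jboss.netty.channel.ChannelEvent;
import org.jboss.netty.channel.ChannelHandlerContext;

// Simplified sketch of the doRun() pattern (not the Netty source).
abstract class EventRunnableSketch implements Runnable {
    protected final ChannelHandlerContext ctx;
    protected final ChannelEvent e;

    EventRunnableSketch(ChannelHandlerContext ctx, ChannelEvent e) {
        this.ctx = ctx;
        this.e = e;
    }

    public void run() {
        doRun();
    }

    protected abstract void doRun();
}

// Upstream variant: hand the stored event back to the pipeline in the upstream direction.
class UpstreamRunnableSketch extends EventRunnableSketch {
    UpstreamRunnableSketch(ChannelHandlerContext ctx, ChannelEvent e) {
        super(ctx, e);
    }

    @Override
    protected void doRun() {
        ctx.sendUpstream(e);
    }
}
// The downstream variant is symmetric and calls ctx.sendDownstream(e) instead.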
- ChainedExecutor
This is a special executor that can be used to chain multiple executors and a ChannelEventRunnableFilter together.
The most important method in this class is its constructor:
public ChainedExecutor(ChannelEventRunnableFilter filter, Executor cur, Executor next) {
    ...
}
This creates a ChainedExecutor from the given ChannelEventRunnableFilter. When a task is submitted, if the filter accepts it (filter returns true), the current executor cur executes it; otherwise the next executor executes it.
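A sketch of this dispatch logic, written as a standalone Executor for illustration (it is not the Netty source):
import java.util.concurrent.Executor;
import org.jboss.netty.handler.execution.ChannelEventRunnable;
import org.jboss.netty.handler.execution.ChannelEventRunnableFilter;

// Sketch of the chained dispatch described above: the filter decides whether the
// current executor or the next one in the chain runs the submitted task.
public class ChainedExecutorSketch implements Executor {
    private final ChannelEventRunnableFilter filter;
    private final Executor cur;
    private final Executor next;

    public ChainedExecutorSketch(ChannelEventRunnableFilter filter, Executor cur, Executor next) {
        this.filter = filter;
        this.cur = cur;
        this.next = next;
    }

    public void execute(Runnable command) {
        // tasks submitted through the execution handler are ChannelEventRunnables
        if (filter.filter((ChannelEventRunnable) command)) {
            cur.execute(command);   // the filter accepted the task: run it here
        } else {
            next.execute(command);  // otherwise pass it down the chain
        }
    }
}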
- ExecutionHandler
The function of this class is to hand a ChannelEvent off to an Executor so that it is processed by a separate thread pool.
It is used when a custom ChannelHandler performs long-running or blocking operations, or when a handler accesses non-CPU-bound resources such as a database. In such cases, if no ExecutionHandler is added to the pipeline and the processing takes a long time, the current I/O thread may be prevented from performing I/O operations, which can cause unexpected problems.
In most cases, ExecutionHandler is used together with OrderedMemoryAwareThreadPoolExecutor, because this combination guarantees the order in which events are executed and effectively prevents OutOfMemoryError under heavy load.
Here is the key code for the call:
Set up the ExecutionHandler:
static ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));
Pipeline factory settings when starting the service:
tcpServerBootstrap.setPipelineFactory(new TcpServerPipelineFactory(executionHandler));
Settings in the pipeline factory (note that handler names must be unique, and the frame decoder/prepender must come before the string codec):
public class TcpServerPipelineFactory implements ChannelPipelineFactory {

    private final ExecutionHandler executionHandler;

    public TcpServerPipelineFactory(ExecutionHandler executionHandler) {
        this.executionHandler = executionHandler;
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4));
        pipeline.addLast("frameEncoder", new LengthFieldPrepender(4, false));
        pipeline.addLast("stringDecoder", new StringDecoder(CharsetUtil.UTF_8));
        pipeline.addLast("stringEncoder", new StringEncoder(CharsetUtil.UTF_8));
        pipeline.addLast("trafficShaping", new ChannelTrafficShapingHandler(new HashedWheelTimer()));
        pipeline.addLast("executionHandler", this.executionHandler);
        pipeline.addLast("handler", new TcpServerHandler());
        return pipeline;
    }
}
When the service is shut down, note the following:
public static void tcpServerShutdown() {
    tcpServerBootstrap.releaseExternalResources();
    executionHandler.releaseExternalResources();
    LOGGER.info("TCP service is off");
}
Source code analysis shows that the two key methods of this class are as follows:
public void handleUpstream(
        ChannelHandlerContext context, ChannelEvent e) throws Exception {
    if (handleUpstream) {
        executor.execute(new ChannelUpstreamEventRunnable(context, e, executor));
    } else {
        context.sendUpstream(e);
    }
}

public void handleDownstream(
        ChannelHandlerContext ctx, ChannelEvent e) throws Exception {
    // check whether the read was suspended
    if (!handleReadSuspend(ctx, e)) {
        if (handleDownstream) {
            executor.execute(new ChannelDownstreamEventRunnable(ctx, e, executor));
        } else {
            ctx.sendDownstream(e);
        }
    }
}
When events are handed to an executor, they are processed by multiple threads. Which executor to use, which executors are commonly used, and the differences between them are described separately below.
- MemoryAwareThreadPoolExecutor
It inherits from ThreadPoolExecutor and has the following characteristics:
When there are too many tasks in the queue, MemoryAwareThreadPoolExecutor blocks task submission; at this point the per-channel and per-executor memory limits take effect.
When a task (a Runnable) is submitted, MemoryAwareThreadPoolExecutor calls ObjectSizeEstimator.estimateSize(Object) to get the estimated size of the task in bytes, which it uses to track the total memory cost of the tasks that have not yet been executed.
When the total size of the unexecuted tasks exceeds the per-channel or per-executor threshold, any further execute(Runnable) call blocks until the total size of the pending tasks drops back below the threshold.
Override task size estimation strategy:
Although the default implementation does its best to estimate the size of objects of unknown types, estimates can still be wrong, so it is a good idea to replace DefaultObjectSizeEstimator with your own ObjectSizeEstimator implementation.
Application Scenarios:
1. Using MemoryAwareThreadPoolExecutor independently of ExecutionHandler
2. Submitting tasks whose type is not ChannelEventRunnable, or whose MessageEvent payload is not a ChannelBuffer
The following is a concrete example of overriding the ObjectSizeEstimator implementation (this code is untested):
public class MyRunnable implements Runnable {

    final byte[] data; // package-private so that the estimator below can read it

    public MyRunnable(byte[] data) {
        this.data = data;
    }

    public void run() {
        // process 'data' ...
    }
}

public class MyObjectSizeEstimator extends DefaultObjectSizeEstimator {
    @Override
    public int estimateSize(Object o) {
        if (o instanceof MyRunnable) {
            return ((MyRunnable) o).data.length + 8;
        }
        return super.estimateSize(o);
    }
}

ThreadPoolExecutor pool = new MemoryAwareThreadPoolExecutor(
        16, 65536, 1048576, 30, TimeUnit.SECONDS,
        new MyObjectSizeEstimator(),
        Executors.defaultThreadFactory());
pool.execute(new MyRunnable(data));
Note: this executor does not maintain the order of ChannelEvents coming from the same channel (they may run concurrently on different threads); the solution to this problem is explained later in this article.
The following is a source-level look at the commonly used methods of this class.
The constructor with the most parameters:
MemoryAwareThreadPoolExecutor(int corePoolSize, long maxChannelMemorySize,
        long maxTotalMemorySize, long keepAliveTime, TimeUnit unit,
        ObjectSizeEstimator objectSizeEstimator, ThreadFactory threadFactory) {
    ...
}
Parameters:
corePoolSize - the maximum number of active threads
maxChannelMemorySize - the maximum total size of the queued events per channel; 0 disables the limit
maxTotalMemorySize - the maximum total size of the queued events for the executor as a whole; 0 disables the limit
keepAliveTime - how long an inactive thread stays alive before it is shut down; the default is 30
unit - the time unit of keepAliveTime; the default is TimeUnit.SECONDS
threadFactory - the thread factory of the pool; the default is Executors.defaultThreadFactory()
objectSizeEstimator - the object size estimator of the pool; the default is new DefaultObjectSizeEstimator()
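To make this parameter list concrete, here is a sketch of one call to the full constructor; the numeric values are only illustrative, not recommendations:
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor;
import org.jboss.netty.util.DefaultObjectSizeEstimator;

final class ExecutorConfigSketch {
    // Sketch only: each argument corresponds to a parameter described above.
    static MemoryAwareThreadPoolExecutor newExecutor() {
        return new MemoryAwareThreadPoolExecutor(
                16,                                // corePoolSize
                1048576,                           // maxChannelMemorySize in bytes (0 disables the limit)
                1048576,                           // maxTotalMemorySize in bytes (0 disables the limit)
                30, TimeUnit.SECONDS,              // keepAliveTime and its unit
                new DefaultObjectSizeEstimator(),  // objectSizeEstimator
                Executors.defaultThreadFactory()); // threadFactory
    }
}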
- OrderedMemoryAwareThreadPoolExecutor
This class extends MemoryAwareThreadPoolExecutor and removes its limitation that ChannelEvents from the same channel are not executed sequentially: with this executor they are executed in order. Almost all of its other functionality is inherited from the parent class.
ChannelEvent execution order
When MemoryAwareThreadPoolExecutor is used, events may be processed in an order like this:
          -------------------------------------> Timeline ------------------------------------->
Thread X: --- Channel A (Event A2) --- Channel A (Event A1) --------------------------->
Thread Y: --- Channel A (Event A3) --- Channel B (Event B2) --- Channel B (Event B3) --->
Thread Z: --- Channel B (Event B1) --- Channel B (Event B4) --- Channel A (Event A4) --->
After switching to OrderedMemoryAwareThreadPoolExecutor, the order in which events are processed becomes:
          -------------------------------------> Timeline ------------------------------------->
Thread X: --- Channel A (Event A1) --.   .-- Channel B (Event B2) --- Channel B (Event B3) --->
                                      \ /
                                       X
                                      / \
Thread Y: --- Channel B (Event B1) --'   '-- Channel A (Event A2) --- Channel A (Event A3) --->
However, this also has a disadvantage: while the first ChannelEvent of a channel has not finished, the next ChannelEvent of that channel cannot start executing.
There is another point to note: the fact that channel A was handled by thread X this time does not guarantee that its next ChannelEvent will run on thread X again.
Maintaining per-channel event order with a different key
In OrderedMemoryAwareThreadPoolExecutor, the executor uses the channel as the key to maintain per-channel execution order; however, we can change the key so that ordering is maintained per some other value, for example the IP address of the remote peer.
The concrete implementation is as follows:
public class RemoteAddressBasedOMATPE extends OrderedMemoryAwareThreadPoolExecutor {

    // ... constructors ...

    @Override
    protected ConcurrentMap<Object, Executor> newChildExecutorMap() {
        // The default implementation returns a special ConcurrentMap that
        // uses identity comparison only (see IdentityHashMap).
        // Because SocketAddress does not work with identity comparison,
        // we need to employ a more generic implementation.
        return new ConcurrentHashMap<Object, Executor>();
    }

    protected Object getChildExecutorKey(ChannelEvent e) {
        // Use the IP of the remote peer as a key.
        return ((InetSocketAddress) e.getChannel().getRemoteAddress()).getAddress();
    }

    // Make this public so that it can be called from anywhere.
    public boolean removeChildExecutor(Object key) {
        return super.removeChildExecutor(key);
    }
}
Note: be careful about memory leaks in the executor map of such a subclass. Make sure to call removeChildExecutor(Object) when the life cycle of a key ends (for example, when all connections from a given IP have been closed). Keep in mind that the key may appear again after removeChildExecutor(Object) has been called (for example, a new connection may come in from that IP). If you are not sure whether this can happen, you can periodically remove stale or unused keys from the subclass's executor map, for example as follows:
RemoteAddressBasedOMATPE executor = ...;

// on every 3 seconds:
for (Iterator<Object> i = executor.getChildExecutorKeySet().iterator(); i.hasNext();) {
    InetAddress ip = (InetAddress) i.next();
    if (/* there is no active connection from 'ip' now */ &&
        /* there has been no incoming connection from 'ip' for the last few minutes */) {
        i.remove();
    }
}
If the expected maximum number of keys is small and bounded, you can use a "weak" map instead of managing the key life cycle yourself (for example, a ConcurrentWeakHashMap or a synchronized WeakHashMap).
Note: entries of a "weak" map are removed automatically by the garbage collector once their keys are no longer referenced anywhere else, so no explicit cleanup is needed.
Source analysis of how this class executes ChannelEvents from the same channel sequentially:
The class defines a global variable childExecutors of type ConcurrentMap<Object, Executor>, whose keys are the channels:
protected Object getChildExecutorKey(ChannelEvent e) {
    return e.getChannel();
}
Its values are the child executors.
Each execute() call looks up the child executor for the event's channel in this map and hands the task to it, so the events of a given channel are executed in order.
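To make this mechanism concrete, here is a generic, self-contained sketch of the per-key serialization idea (it is not the Netty source): each key maps to a small serial executor that submits at most one of its tasks to the parent pool at a time, so tasks sharing a key always run in order.
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executor;

// Generic sketch of per-key ordering: one serial child executor per key,
// all children sharing a single parent thread pool.
final class PerKeyOrderingSketch {
    private final Executor parent;
    private final ConcurrentMap<Object, SerialExecutor> children =
            new ConcurrentHashMap<Object, SerialExecutor>();

    PerKeyOrderingSketch(Executor parent) {
        this.parent = parent;
    }

    void execute(Object key, Runnable task) {
        SerialExecutor child = children.get(key);
        if (child == null) {
            SerialExecutor created = new SerialExecutor();
            SerialExecutor existing = children.putIfAbsent(key, created);
            child = existing != null ? existing : created;
        }
        child.execute(task);
    }

    // Classic "serial executor" pattern: at most one task per key is submitted to
    // the parent pool at a time; the next one is scheduled when the previous finishes.
    private final class SerialExecutor implements Executor {
        private final Queue<Runnable> tasks = new ArrayDeque<Runnable>();
        private Runnable active;

        public synchronized void execute(final Runnable r) {
            tasks.add(new Runnable() {
                public void run() {
                    try {
                        r.run();
                    } finally {
                        scheduleNext();
                    }
                }
            });
            if (active == null) {
                scheduleNext();
            }
        }

        private synchronized void scheduleNext() {
            active = tasks.poll();
            if (active != null) {
                parent.execute(active);
            }
        }
    }
}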
- OrderedDownstreamThreadPoolExecutor
The discussion so far has focused on upstream data (data sent from the client to the server and the order in which it is processed); this class handles downstream data, so that outgoing data is also sent in order. It extends OrderedMemoryAwareThreadPoolExecutor and offers almost the same functionality as that class.
- Demo implementation
- SocketServer
static ExecutionHandler executionUpHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576), false, true);
static ExecutionHandler executionDownHandler = new ExecutionHandler(
        new OrderedDownstreamThreadPoolExecutor(16), true, false);

/** TCP way */
static ChannelFactory tcpChannelFactory = new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),
        Executors.newCachedThreadPool());
/** TCP way */
static ServerBootstrap tcpServerBootstrap = new ServerBootstrap(tcpChannelFactory);

public static void tcpServerStartup() {
    tcpServerBootstrap.setPipelineFactory(
            new TcpServerPipelineFactory(executionUpHandler, executionDownHandler));
    tcpServerBootstrap.setOption("child.tcpNoDelay", true);
    tcpServerBootstrap.setOption("child.keepAlive", true);
    tcpServerBootstrap.setOption("reuseAddress", true);
    LOGGER.info("SERVER_NAME: " + Constants.SERVER_NAME);
    LOGGER.info("TCPSERVER_PORT: " + Constants.TCPSERVER_PORT);
    tcpServerBootstrap.bind(
            new InetSocketAddress(Constants.SERVER_NAME, Constants.TCPSERVER_PORT));
    LOGGER.info("TCP service started ...");
}

public static void tcpServerShutdown() {
    tcpServerBootstrap.releaseExternalResources();
    executionUpHandler.releaseExternalResources();
    executionDownHandler.releaseExternalResources();
    LOGGER.info("TCP service is off");
}
- TcpServerPipelineFactory
ChannelPipeline pipeline = Channels.pipeline(
        new LengthFieldBasedFrameDecoder(Integer.MAX_VALUE, 0, 4, 0, 4),
        new LengthFieldPrepender(4, false),
        this.executionUpHandler,
        this.executionDownHandler,
        new TcpServerHandler());
return pipeline;
- ClientPipelineFactory
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("decoder", new LengthFieldBasedFrameDecoder(
        Integer.MAX_VALUE, 0, 4, 0, 4));
pipeline.addLast("encoder", new LengthFieldPrepender(4, false));
pipeline.addLast("handler", new ClientHandler());
return pipeline;