Netty Threading Model
Netty's threading model is based on the Reactor pattern, and it has evolved through several variants to suit different scenarios.
Single-thread mode
In this mode a single thread both accepts service requests and performs the IO operations. Because the IO is non-blocking (for example through IO multiplexing), single-threaded mode can still handle scenarios where the request volume is small.
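As an illustration (not code from the framework described later), here is a minimal sketch of single-threaded mode with Netty's API: a single one-thread EventLoopGroup does both the accepting and the IO.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class SingleThreadReactorServer {
    public static void main(String[] args) throws InterruptedException {
        // A single thread accepts connections and performs all the IO.
        EventLoopGroup singleThreadGroup = new NioEventLoopGroup(1);
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(singleThreadGroup)                  // same group for accept and IO
                     .channel(NioServerSocketChannel.class)
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // codec and business handlers would be added here
                         }
                     });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            singleThreadGroup.shutdownGracefully();
        }
    }
}
```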
Single-receive multi-worker Threading Mode
As the request volume grows, a single thread handling all IO operations can no longer meet the required performance targets, so the concept of a worker thread pool is introduced. One thread is dedicated to accepting service requests; each accepted request is then handed off to the worker pool, where a thread from the pool executes the user request.
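In Netty terms this maps to a one-thread boss group plus a worker pool; a minimal sketch of the group configuration (the thread counts are illustrative assumptions):

```java
// One thread accepts connections...
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
// ...and a pool of worker threads performs the IO for the accepted connections.
EventLoopGroup workerGroup = new NioEventLoopGroup(8);

ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
         .channel(NioServerSocketChannel.class);
```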
Multi-receive multi-worker Threading Mode
When the request volume grows further still, a single accepting thread cannot handle all client connections, so the accepting side is also expanded into a thread pool in which multiple threads are responsible for accepting connections from clients.
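The only difference from the previous mode is that the accepting side also becomes a pool; a minimal sketch (thread counts again illustrative; with no argument, NioEventLoopGroup typically defaults to twice the number of available processors):

```java
// Several threads accept connections...
EventLoopGroup bossGroup = new NioEventLoopGroup(2);
// ...and a separate pool of worker threads performs the IO.
EventLoopGroup workerGroup = new NioEventLoopGroup(16);

ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
         .channel(NioServerSocketChannel.class);
```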
RPC Business Thread
The modes above are Netty's own threading models, optimization strategies that evolved as request volume grew. An RPC request is mostly the application's business logic, which can be compute-intensive or IO-intensive; most applications involve database access, Redis, or calls to other network services. If a business request contains such time-consuming IO operations, it is recommended to hand the task off to a separate thread pool, otherwise Netty's own IO thread may be blocked.
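To make the risk concrete, here is a sketch of the anti-pattern; the handler and the blocking call are assumptions for illustration, not code from the framework. Any slow call made directly inside channelRead0 runs on the Netty IO thread and stalls every other channel bound to that event loop:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

// Anti-pattern sketch: time-consuming business work executed on the Netty IO thread.
public class BlockingBusinessHandler extends SimpleChannelInboundHandler<Object> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object request) {
        // Hypothetical blocking call (database, Redis, another service). While it runs,
        // the event loop thread cannot read or write for any other connection it owns.
        Object response = slowBusinessCall(request);
        ctx.writeAndFlush(response);
    }

    private Object slowBusinessCall(Object request) {
        // placeholder for a time-consuming operation
        return request;
    }
}
```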
Division of work between the accepting thread and the worker threads
- The accepting thread is mainly responsible for establishing the connection and then delegating the request to a worker thread
- The worker thread is responsible for encoding/decoding and for the read IO operations (a generic pipeline sketch follows this list)
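As a sketch of what the worker side typically runs for each connection, here is a generic initializer using stock Netty codecs as stand-ins; the framework's actual rpcServerInitializer (referenced in the bind code below) will differ:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

// Every handler added here runs on the worker (IO) threads.
public class ExampleServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new LengthFieldBasedFrameDecoder(65536, 0, 4, 0, 4)) // split the byte stream into frames
          .addLast(new StringDecoder())   // stand-in for the RPC request decoder
          .addLast(new StringEncoder());  // stand-in for the RPC response encoder
        // a business handler such as the RpcServerInvoker shown later would be added last
    }
}
```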
Solution implementation
The RPC framework I am currently implementing uses the multi-receive multi-worker threading mode. On the server side the port is bound like this:
```java
public void bind(ServiceConfig serviceConfig) {
    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(this.rpcServerInitializer)
                .childOption(ChannelOption.SO_KEEPALIVE, true);
        try {
            ChannelFuture channelFuture = bootstrap.bind(serviceConfig.getHost(), serviceConfig.getPort()).sync();
            //...
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            throw new RpcException(e);
        }
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
```
bossGroup is the group of threads used to accept service requests.
workerGroup is the group of threads responsible for the actual IO operations.
Adding a business thread pool only requires delegating the handler's work one step further, to that pool. An interface is defined here so the thread pool implementation can be extended:
Defining the thread pool interface
```java
public interface RpcThreadPool {
    Executor getExecutor(int threadSize, int queues);
}
```
Implementing a fixed-size thread pool
The implementation references Dubbo's thread pool:
```java
@Qualifier("fixedRpcThreadPool")
@Component
public class FixedRpcThreadPool implements RpcThreadPool {

    private Executor executor;

    @Override
    public Executor getExecutor(int threadSize, int queues) {
        if (null == executor) {
            synchronized (this) {
                if (null == executor) {
                    executor = new ThreadPoolExecutor(threadSize, threadSize, 0L, TimeUnit.MILLISECONDS,
                            queues == 0 ? new SynchronousQueue<Runnable>()
                                    : (queues < 0 ? new LinkedBlockingQueue<Runnable>()
                                            : new LinkedBlockingQueue<Runnable>(queues)),
                            new RejectedExecutionHandler() {
                                @Override
                                public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                                    //...
                                }
                            });
                }
            }
        }
        return executor;
    }
}
```
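A note on the queues parameter, following the Dubbo convention reproduced above: 0 means a SynchronousQueue (direct hand-off), a negative value means an unbounded LinkedBlockingQueue, and a positive value means a bounded queue of that size. A hypothetical direct usage (in the framework the pool is obtained through Spring injection instead):

```java
// Illustrative only: a fixed pool of 4 threads with a bounded queue of 100 waiting tasks.
RpcThreadPool threadPool = new FixedRpcThreadPool();
Executor executor = threadPool.getExecutor(4, 100);
executor.execute(() ->
        System.out.println("business task on " + Thread.currentThread().getName()));
```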
A small aside:
I remember a friend once suddenly asked me what corePoolSize in the Java thread pool means. I went blank for a moment: I rarely write multi-threaded code myself, and what came to mind were the parameters of the database connection pools I use more often, but I simply could not recall corePoolSize. Later I went through the thread pool parameters carefully, and I am taking this opportunity to look at them again so I do not go blank next time.
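For reference, the constructor in question and what each parameter controls (a generic JDK example, not code from this framework):

```java
// java.util.concurrent.ThreadPoolExecutor and its parameters.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4,                                      // corePoolSize: threads kept even when idle
        8,                                      // maximumPoolSize: upper bound once the queue is full
        60L, TimeUnit.SECONDS,                  // keepAliveTime for threads above corePoolSize
        new LinkedBlockingQueue<Runnable>(100), // workQueue: tasks wait here before extra threads are created
        new ThreadPoolExecutor.AbortPolicy());  // policy for rejected tasks
```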
Thread Pool Factory
When there are multiple thread pool implementations, the desired one is selected dynamically by its thread pool name.
```java
@Component
public class RpcThreadPoolFactory {

    @Autowired
    private Map<String, RpcThreadPool> rpcThreadPoolMap;

    public RpcThreadPool getThreadPool(String threadPoolName) {
        return this.rpcThreadPoolMap.get(threadPoolName);
    }
}
```
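A hypothetical caller, assuming the Spring bean name of the fixed pool is fixedRpcThreadPool (the keys of an @Autowired map are the bean names):

```java
// Illustrative only: select a pool implementation by name at runtime.
@Autowired
private RpcThreadPoolFactory rpcThreadPoolFactory;

public Executor businessExecutor() {
    RpcThreadPool pool = rpcThreadPoolFactory.getThreadPool("fixedRpcThreadPool"); // assumed bean name
    return pool.getExecutor(16, 1000); // thread count and queue size are illustrative
}
```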
Modifying the channelRead0 method of the ChannelHandler
Wrap the method body into a task and hand it to the thread pool to execute.
```java
@Override
protected void channelRead0(ChannelHandlerContext channelHandlerContext, RpcRequest rpcRequest) {
    this.executor.execute(new Runnable() {
        @Override
        public void run() {
            RpcInvoker rpcInvoker = RpcServerInvoker.this.buildInvokerChain(RpcServerInvoker.this);
            RpcResponse response = (RpcResponse) rpcInvoker.invoke(RpcServerInvoker.this.buildRpcInvocation(rpcRequest));
            channelHandlerContext.writeAndFlush(response);
        }
    });
}
```
Problem
There is currently no load testing, so there is no clear data comparison yet.
Source Address
https://github.com/jiangmin168168/jim-framework