Simple RPC framework: the business thread pool and the RPC framework thread pool
Netty thread model
Netty's thread model is based on the Reactor pattern, and several variants have evolved for different application scenarios.
Single thread mode
That is, a single thread both accepts service requests and performs the IO operations. Because it relies on I/O multiplexing, this single-thread mode is adequate for scenarios where the request volume is small.
Single-receive multi-working thread mode
As the number of requests grows, one thread handling all IO operations can no longer meet the performance targets, which is where the worker thread pool comes in. A single thread still accepts service requests, but after accepting a request it delegates the work to the worker thread pool, which supplies a thread to execute it.
Multi-receive multi-working thread mode
When the request volume grows further, a single accepting thread can no longer handle connections from all clients, so the accepting side is also extended to a thread pool: multiple threads accept client connections concurrently.
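In Netty terms, these three modes differ mainly in how the EventLoopGroup instances are sized. The following sketch is illustrative only and is not code from the framework:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class ReactorModes {

    public static void main(String[] args) {
        // 1. Single-thread mode: the same single-threaded group accepts
        //    connections and performs all IO.
        EventLoopGroup single = new NioEventLoopGroup(1);
        ServerBootstrap a = new ServerBootstrap()
                .group(single, single)
                .channel(NioServerSocketChannel.class);

        // 2. Single-receive, multi-working: one acceptor thread plus a pool
        //    of IO threads (NioEventLoopGroup() defaults to 2 * CPU cores).
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        ServerBootstrap b = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class);

        // 3. Multi-receive, multi-working: several acceptor threads (mainly
        //    useful when binding more than one port) plus the IO thread pool.
        EventLoopGroup bosses = new NioEventLoopGroup(2);
        ServerBootstrap c = new ServerBootstrap()
                .group(bosses, workers)
                .channel(NioServerSocketChannel.class);
    }
}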
RPC service thread
All of the above are Netty's own thread models, which evolved as request volumes increased. RPC requests, however, mostly execute the application's business logic, and that work may be compute-intensive or IO-intensive; most applications, for example, involve database access, Redis, or calls to other network services. If a business request contains such time-consuming IO, it is better to run it on an independent thread pool, otherwise Netty's own IO threads may be blocked.
Division of labor between the receiving and working threads
- The accepting thread is mainly responsible for establishing connections and then delegates the requests to the worker threads.
- The worker threads are responsible for encoding, decoding, and the read/write IO operations.
Solution implementation
The RPC framework currently uses the multi-receive, multi-working thread mode. The server binds its port as follows:
public void bind(ServiceConfig serviceConfig) {
    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(this.rpcServerInitializer)
                .childOption(ChannelOption.SO_KEEPALIVE, true);
        try {
            ChannelFuture channelFuture = bootstrap.bind(serviceConfig.getHost(), serviceConfig.getPort()).sync();
            //...
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            throw new RpcException(e);
        }
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}
bossGroup is the thread group that accepts client connections.
workerGroup is the thread group that performs the actual IO operations.
To add a business thread pool, you only need to delegate the handler's work to that pool. First, define an interface so the pool implementation can be swapped or extended:
Define the thread pool interface
public interface RpcThreadPool {
    Executor getExecutor(int threadSize, int queues);
}
Implement a fixed-size thread pool
The implementation is modeled on the Dubbo thread pool.
@Qualifier("fixedRpcThreadPool")@Componentpublic class FixedRpcThreadPool implements RpcThreadPool { private Executor executor; @Override public Executor getExecutor(int threadSize,int queues) { if(null==executor) { synchronized (this) { if(null==executor) { executor= new ThreadPoolExecutor(threadSize, threadSize, 0L, TimeUnit.MILLISECONDS, queues == 0 ? new SynchronousQueue<Runnable>() : (queues < 0 ? new LinkedBlockingQueue<Runnable>() : new LinkedBlockingQueue<Runnable>(queues)), new RejectedExecutionHandler() { @Override public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) { //... } }); } } } return executor; }}
An aside:
A friend once asked me what coreSize means in the Java thread pool, and I blanked; I don't write much multithreaded code day to day, and while I could picture the parameters of the database connection pools I use, coreSize didn't come to mind. I later went back over the thread pool parameters carefully, and this is a good opportunity to review them again so I don't blank next time.
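For reference, these are the standard ThreadPoolExecutor constructor parameters (plain JDK behavior, independent of this framework); the sizes below are arbitrary examples:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolParams {
    public static void main(String[] args) {
        // Tasks are handled in this order: fill the core threads -> queue ->
        // extra threads up to maximumPoolSize -> rejection policy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                     // corePoolSize: threads kept even when idle
                8,                                     // maximumPoolSize: only reached once the queue is full
                60L, TimeUnit.SECONDS,                 // keepAliveTime for threads above the core size
                new LinkedBlockingQueue<>(1000),       // workQueue: bounded buffer for waiting tasks
                new ThreadPoolExecutor.AbortPolicy()); // rejection policy when queue and pool are exhausted
        pool.shutdown();
    }
}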
Thread Pool Factory
When there are multiple thread pool implementations, the factory selects one dynamically by its thread pool name.
@Component
public class RpcThreadPoolFactory {

    @Autowired
    private Map<String, RpcThreadPool> rpcThreadPoolMap;

    public RpcThreadPool getThreadPool(String threadPoolName) {
        return this.rpcThreadPoolMap.get(threadPoolName);
    }
}
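To show how the pieces fit together, here is a hypothetical consumer of the factory; the holder class, bean name, and pool sizes are illustrative assumptions rather than code from the framework:

import java.util.concurrent.Executor;
import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Hypothetical consumer of the factory; names and sizes are assumptions.
@Component
public class BusinessExecutorHolder {

    @Autowired
    private RpcThreadPoolFactory rpcThreadPoolFactory;

    private Executor executor;

    @PostConstruct
    public void init() {
        // Pick the fixed-size implementation by bean name, then build the pool:
        // 200 worker threads backed by a queue of 1000 waiting tasks.
        this.executor = rpcThreadPoolFactory
                .getThreadPool("fixedRpcThreadPool")
                .getExecutor(200, 1000);
    }

    public Executor getExecutor() {
        return executor;
    }
}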
Modify the channelRead0 method of the ChannelHandler
Wrap the method body in a task and hand it over to the thread pool for execution.
@Override
protected void channelRead0(ChannelHandlerContext channelHandlerContext, RpcRequest rpcRequest) {
    this.executor.execute(new Runnable() {
        @Override
        public void run() {
            RpcInvoker rpcInvoker = RpcServerInvoker.this.buildInvokerChain(RpcServerInvoker.this);
            RpcResponse response = (RpcResponse) rpcInvoker.invoke(RpcServerInvoker.this.buildRpcInvocation(rpcRequest));
            channelHandlerContext.writeAndFlush(response);
        }
    });
}
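One caveat with offloading to the business pool: exceptions thrown inside the task no longer reach the pipeline's exceptionCaught, so the task should handle them itself. Below is a minimal sketch of the same channelRead0 body with that added; the setRequestId/setException setters on RpcResponse are assumed for illustration, not taken from the framework:

this.executor.execute(new Runnable() {
    @Override
    public void run() {
        try {
            RpcInvoker rpcInvoker = RpcServerInvoker.this.buildInvokerChain(RpcServerInvoker.this);
            RpcResponse response = (RpcResponse) rpcInvoker.invoke(
                    RpcServerInvoker.this.buildRpcInvocation(rpcRequest));
            channelHandlerContext.writeAndFlush(response);
        } catch (Exception e) {
            // Hypothetical error handling: report the failure back to the caller
            // instead of letting it vanish on the business thread.
            RpcResponse error = new RpcResponse();
            error.setRequestId(rpcRequest.getRequestId());   // assumed setter
            error.setException(e);                           // assumed setter
            channelHandlerContext.writeAndFlush(error);
        }
    }
});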
Problem
No load testing has been done yet, so there is no concrete performance comparison.
Source code
https://github.com/jiangmin168168/jim-framework