1. Netty Introduction
Netty is a high-performance, asynchronous, event-driven NIO framework built on the APIs provided by Java NIO. It supports TCP, UDP, and file transfer, and as an asynchronous NIO framework, all of Netty's IO operations are asynchronous and non-blocking; through the Future-listener mechanism, users can obtain IO results either actively or via notification. As the most popular NIO framework, Netty is widely used in Internet services, big-data distributed computing, the gaming industry, and the communications industry, and a number of well-known open-source components are also built on the Netty NIO framework.
2. Netty Threading Model
In Java NIO, the Selector provides the foundation for the reactor pattern; Netty builds its efficient threading model by combining the Selector with the reactor pattern. Let's first take a look at the reactor pattern:
2.1 The Reactor Pattern
Wikipedia describes the reactor pattern as follows: "The reactor design pattern is an event handling pattern for handling service requests delivered concurrently to a service handler by one or more inputs. The service handler then demultiplexes the incoming requests and dispatches them synchronously to the associated request handlers." In other words, the reactor pattern is event-driven: there are one or more concurrent input sources, one service handler, and multiple request handlers, and the service handler synchronously demultiplexes the incoming requests and dispatches them to the corresponding request handlers. This can be seen in the following illustration:
Structurally this resembles the producer-consumer model, in which one or more producers put events into a queue and one or more consumers proactively poll events from that queue. The reactor pattern, however, has no queue for buffering: whenever an event arrives at the service handler, the service handler proactively dispatches it to the corresponding request handler according to the event type.
2.2 Implementation of the Reactor Pattern
Regarding how to build the reactor model with Java NIO, Doug Lea gives an excellent description in "Scalable IO in Java"; the slides excerpted below illustrate the implementations of the reactor pattern.
1. The first implementation model is as follows:
This is the simplest single-threaded reactor model. Because the reactor pattern uses asynchronous non-blocking IO, no IO operation blocks, so in theory one thread can handle all IO operations independently. The reactor thread is then a jack-of-all-trades: it is responsible for demultiplexing the sockets, accepting new connections, and dispatching requests into the processing chain.
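The single-threaded model above can be sketched in a few lines of plain Java NIO. This is an illustrative sketch only, not Netty's code: the `Handler` interface, the `runOnce` method, and the use of a `Pipe` in place of a network socket are all inventions of this example. One Selector demultiplexes readiness events, and the same thread dispatches each event to the handler attached to its key.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

// A minimal single-threaded reactor: one Selector, one thread that both
// demultiplexes events and runs the handlers.
public class SingleThreadedReactor {
    interface Handler { void handle(SelectionKey key) throws IOException; }

    static String runOnce() throws IOException {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();                       // stands in for a socket
        pipe.source().configureBlocking(false);

        StringBuilder received = new StringBuilder();
        Handler readHandler = key -> {                 // the "request handler"
            ByteBuffer buf = ByteBuffer.allocate(64);
            ((Pipe.SourceChannel) key.channel()).read(buf);
            buf.flip();
            received.append(StandardCharsets.UTF_8.decode(buf));
        };
        pipe.source().register(selector, SelectionKey.OP_READ, readHandler);

        // Simulate a client writing to the connection.
        pipe.sink().write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

        selector.select();                             // demultiplex: wait for events
        for (SelectionKey key : selector.selectedKeys())
            ((Handler) key.attachment()).handle(key);  // dispatch on the same thread
        selector.selectedKeys().clear();

        pipe.sink().close(); pipe.source().close(); selector.close();
        return received.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(runOnce());                 // prints: ping
    }
}
```

In a real server the loop would run forever and the registered channels would be sockets, but the structure, select then dispatch on one thread, is the same.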
A single-threaded model can suffice for some small-capacity scenarios, but it is unsuitable for high-load, high-concurrency applications, mainly for the following reasons:
- When one NIO thread handles hundreds of links simultaneously, the performance cannot keep up; even with the NIO thread's CPU at 100% load, it cannot fully process all messages.
- When the NIO thread is overloaded, processing slows down, which can cause large numbers of client connection timeouts; the timed-out clients tend to retransmit, adding further to the NIO thread's load.
- Low reliability: an accidental infinite loop in the single thread makes the entire communication system unavailable.
To solve these problems, the reactor multithreaded model was introduced.
2. Reactor Multithreaded Model:
Compared with the previous model, this model uses multithreading (a thread pool) in the processing chain.
In most scenarios, this model meets the performance requirements. However, in some special application scenarios, for example when the server must perform security authentication on the client's handshake messages, a single acceptor thread can become a performance bottleneck. To solve this, a third reactor threading model was produced.
3. Reactor Master-Slave Model
Compared to the second model, this model divides the reactor into two parts. The mainReactor is responsible for listening on the server socket and accepting new connections, and it assigns each established socket to a subReactor. The subReactor is responsible for demultiplexing its connected sockets and for reading and writing network data; the business-processing work is handed off to the worker thread pool. In general, the number of subReactors can equal the number of CPUs.
2.3 Netty Model
Section 2.2 described three reactor models, so which one does Netty use? In fact, Netty's threading model is a variant of the reactor model: it is the third model with the thread pool removed, and this is also the default mode of Netty NIO. The participants of the reactor pattern in Netty are mainly the following components:
- Selector
- EventLoopGroup/EventLoop
- ChannelPipeline
Selector is the SelectableChannel multiplexer provided by NIO and plays the role of the demultiplexer; it will not be discussed further here. Below we look at the other two components and their roles in Netty's reactor pattern.
3. EventLoopGroup/EventLoop
While the system is running, frequent thread context switches cause extra performance loss. Moreover, when a business flow is executed concurrently by multiple threads, business developers must also stay vigilant about thread safety: which data may be modified concurrently, and how to protect it. This not only reduces development efficiency but also brings additional performance loss.
To solve these problems, Netty adopts a serial design: from reading and decoding a message through executing the subsequent handlers, the IO thread EventLoop is always responsible, so the entire flow involves no thread context switches and the data faces no risk of concurrent modification. This also explains why the Netty threading model removes the thread pool from the reactor master-slave model.
EventLoopGroup is an abstraction over a set of EventLoops. EventLoopGroup provides a next() interface that picks one EventLoop from the group to handle a task. What you need to know about EventLoopGroup here is that in Netty server programming we generally need two EventLoopGroups, a boss EventLoopGroup and a worker EventLoopGroup, to work together. Typically one service port, that is one ServerSocketChannel, corresponds to one Selector and one EventLoop thread, which means the number of threads in the boss EventLoopGroup is 1. The boss EventLoop is responsible for accepting the clients' connections and handing each SocketChannel to the worker EventLoopGroup for IO processing.
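The division of labor can be illustrated with a toy group of single-threaded executors. This is a conceptual sketch only: `ToyEventLoopGroup` and its methods are made up for this example and are far simpler than Netty's real EventLoopGroup, but they show the next() round-robin idea and why pinning a channel to one loop gives serial, single-threaded execution.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// A toy EventLoopGroup: several single-threaded executors ("event loops"),
// handed out round-robin by next(). All tasks submitted to one loop run
// serially on the same thread, so there are no races on per-channel state.
public class ToyEventLoopGroup {
    private final ExecutorService[] loops;
    private final AtomicInteger idx = new AtomicInteger();

    public ToyEventLoopGroup(int nLoops) {
        loops = new ExecutorService[nLoops];
        for (int i = 0; i < nLoops; i++) loops[i] = Executors.newSingleThreadExecutor();
    }

    // Round-robin selection, in the spirit of EventLoopGroup.next().
    public ExecutorService next() {
        return loops[Math.floorMod(idx.getAndIncrement(), loops.length)];
    }

    public void shutdown() { for (ExecutorService e : loops) e.shutdown(); }

    public static void main(String[] args) throws Exception {
        ToyEventLoopGroup group = new ToyEventLoopGroup(2);
        ExecutorService loop = group.next();           // pin a "channel" to one loop
        Future<String> t1 = loop.submit(() -> Thread.currentThread().getName());
        Future<String> t2 = loop.submit(() -> Thread.currentThread().getName());
        System.out.println(t1.get().equals(t2.get())); // prints: true (same thread)
        group.shutdown();
    }
}
```

In this picture, the boss group would be a one-loop group accepting connections, and each accepted channel would be registered with one loop of the worker group for the rest of its life.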
The EventLoop implementation plays the dispatcher role in the reactor pattern.
4. ChannelPipeline
ChannelPipeline actually plays the request-handler role in the reactor pattern.
The default implementation of ChannelPipeline is DefaultChannelPipeline. DefaultChannelPipeline itself maintains a head and a tail ChannelHandler that are invisible to the user; they sit at the head and tail of the linked-list queue respectively. The tail is on the upper (application) side, and the head faces the network layer. Netty defines two important ChannelHandler interfaces, ChannelInboundHandler and ChannelOutboundHandler. Inbound can be understood as network data flowing from the outside into the system, and outbound as network data flowing from inside the system out to an external system. A user-implemented ChannelHandler can implement one or both of these interfaces as needed and is placed into the pipeline's linked-list queue; ChannelPipeline then finds the corresponding handlers according to the IO event type. The linked-list queue is a variant of the chain-of-responsibility pattern: all handlers associated with an event process it in top-down or bottom-up order.
ChannelInboundHandler handles messages flowing from the client to the server and is generally used for half-packet/sticky-packet handling, decoding, reading data, business processing, and so on. ChannelOutboundHandler handles messages flowing from the server to the client and is generally used for encoding and sending messages to the client.
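The inbound/outbound flow through the handler chain can be sketched as follows. This is a toy model of the idea: `ToyPipeline` and its handler interfaces are inventions of this example, not Netty's API, but they show the two directions through one chain, inbound head to tail and outbound tail to head.

```java
import java.util.ArrayList;
import java.util.List;

// A toy responsibility-chain pipeline. Inbound events flow head -> tail,
// outbound events flow tail -> head; each handler may transform the message
// before passing it along.
public class ToyPipeline {
    interface InboundHandler  { String channelRead(String msg); }
    interface OutboundHandler { String write(String msg); }

    private final List<Object> handlers = new ArrayList<>();
    public ToyPipeline addLast(Object h) { handlers.add(h); return this; }

    // Inbound: head -> tail; only inbound handlers participate.
    public String fireChannelRead(String msg) {
        for (Object h : handlers)
            if (h instanceof InboundHandler) msg = ((InboundHandler) h).channelRead(msg);
        return msg;
    }

    // Outbound: tail -> head; only outbound handlers participate.
    public String write(String msg) {
        for (int i = handlers.size() - 1; i >= 0; i--)
            if (handlers.get(i) instanceof OutboundHandler)
                msg = ((OutboundHandler) handlers.get(i)).write(msg);
        return msg;
    }

    public static void main(String[] args) {
        ToyPipeline p = new ToyPipeline()
            .addLast((InboundHandler)  m -> m + "|decoded")   // e.g. a decoder
            .addLast((OutboundHandler) m -> m + "|encoded")   // e.g. an encoder
            .addLast((InboundHandler)  m -> m + "|business"); // business logic
        System.out.println(p.fireChannelRead("req"));  // prints: req|decoded|business
        System.out.println(p.write("resp"));           // prints: resp|encoded
    }
}
```

Note how the outbound write skips the inbound handlers entirely, which is why a response passes only through the encoder on its way back out.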
The following illustration describes the ChannelPipeline execution process:
For more about pipelines, see: A Brief Discussion of the Pipeline Model (Pipeline).
5. Buffer
Relative to NIO, Netty provides an extended buffer with a number of advantages. Buffers are a very important means of data access, so let's look at the characteristics of buffers in Netty.
1. ByteBuf read and write pointers. In ByteBuffer, reading and writing share a single pointer, position, whereas in ByteBuf the read and write pointers are readerIndex and writerIndex respectively. At first glance ByteBuffer achieves the function of two pointers with just one, saving a variable, but ByteBuffer must call flip() whenever it switches from writing to reading, and before the next write it must read out the buffer's remaining content and call clear(). Calling flip() before every read and clear() before every write adds tedious steps to development, and content that has not been read out cannot survive a write, which is very inflexible. ByteBuf, by contrast, reads using only the readerIndex pointer and writes using only the writerIndex pointer; no method call is needed before each read or write, and there is no requirement to read out all of the content at once.
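The difference between the two pointer designs can be demonstrated with the JDK's real ByteBuffer next to a tiny two-pointer stand-in. `TinyByteBuf` is a made-up illustration for this example, not Netty's ByteBuf, and it omits bounds checks and capacity growth.

```java
import java.nio.ByteBuffer;

// ByteBuffer has one position pointer, so switching from writing to reading
// requires flip(); a two-pointer buffer in the style of ByteBuf can
// interleave reads and writes freely because readerIndex and writerIndex
// are independent.
public class TwoPointerBufferDemo {
    static class TinyByteBuf {                  // illustrative stand-in for ByteBuf
        private final byte[] data = new byte[64]; // fixed capacity, no bounds checks
        private int readerIndex, writerIndex;
        void writeByte(byte b) { data[writerIndex++] = b; }
        byte readByte()        { return data[readerIndex++]; }
        int readableBytes()    { return writerIndex - readerIndex; }
    }

    public static void main(String[] args) {
        // ByteBuffer: one pointer, explicit mode switch with flip()/clear().
        ByteBuffer nioBuf = ByteBuffer.allocate(64);
        nioBuf.put((byte) 1);
        nioBuf.flip();                          // required before reading
        System.out.println(nioBuf.get());       // prints: 1

        // Two-pointer buffer: interleave reads and writes, no flip()/clear().
        TinyByteBuf buf = new TinyByteBuf();
        buf.writeByte((byte) 1);
        System.out.println(buf.readByte());      // prints: 1
        buf.writeByte((byte) 2);                 // write again, no clear() needed
        System.out.println(buf.readByte());      // prints: 2
        System.out.println(buf.readableBytes()); // prints: 0
    }
}
```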
2. Zero copy. Netty receives and sends ByteBuffers using direct buffers, reading and writing the socket with off-heap memory directly, so no second copy of the byte buffer is needed. If a traditional heap buffer were used to read and write the socket, the JVM would copy the heap buffer into direct memory before writing it to the socket; compared with using off-heap direct memory, the message makes one extra memory copy of the buffer during sending. Netty also provides a composite buffer object that can aggregate multiple ByteBuffer objects; the user can operate on the combination as conveniently as on a single buffer, avoiding the traditional approach of merging several small buffers into one large buffer through memory copies. For file transfer, Netty uses the transferTo method, which sends data from the file's channel directly to the target channel, avoiding the memory copies incurred by the traditional write loop.
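The transferTo call mentioned above is part of the JDK (FileChannel.transferTo), so the zero-copy file transfer can be demonstrated directly. Here the target is a temporary file so the example is self-contained; in a real server the target would be a socket channel.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// FileChannel.transferTo lets the kernel move bytes from a file directly to
// the target channel, with no intermediate userspace buffer and no
// read/write loop in application code.
public class TransferToDemo {
    static long transfer(Path src, Path dst) throws IOException {
        try (FileChannel in  = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst, StandardOpenOption.WRITE,
                                                StandardOpenOption.CREATE)) {
            long pos = 0, size = in.size();
            while (pos < size)                  // transferTo may move fewer bytes
                pos += in.transferTo(pos, size - pos, out);
            return pos;                         // total bytes transferred
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".bin");
        Path dst = Files.createTempFile("dst", ".bin");
        Files.write(src, "hello zero copy".getBytes());
        System.out.println(transfer(src, dst));               // bytes moved
        System.out.println(new String(Files.readAllBytes(dst)));
        Files.delete(src); Files.delete(dst);
    }
}
```

The loop around transferTo matters: the call is allowed to transfer fewer bytes than requested, so correct code always checks the returned count.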
3. Reference counting and pooling. In Netty, every buffer that is allocated may be a precious resource, so to gain more control over memory allocation and reclamation, Netty implements its own memory management based on reference counting. Netty's buffers are based on direct memory (DirectBuffer), which greatly improves the efficiency of IO operations. However, compared with a HeapBuffer, the DirectBuffer's high IO efficiency comes with a natural disadvantage: allocating a DirectBuffer is less efficient than allocating a HeapBuffer. Netty therefore combines reference counting with pooling (PooledBuffer): when a buffer's reference count drops to 0, Netty reclaims the buffer into a pool, and a later buffer request can reuse it instead of allocating anew.
Summary
Netty is essentially an implementation of the reactor pattern, with the Selector as the multiplexer, the EventLoop as the dispatcher, and the ChannelPipeline as the request handler. Unlike the general reactor pattern, however, Netty adopts a serialized design and uses the chain-of-responsibility pattern in its pipeline.
Compared with the buffers in NIO, the buffers in Netty have been optimized, which greatly improves performance.