I recently read an internal ATA article on troubleshooting and fixing a Tomcat bug that Mtop triggered in a high-concurrency scenario during a network outage (the fix has since been accepted by Apache). It piqued my curiosity: the original author clearly had a deep understanding of the underlying details, and the write-up was quite involved. My own picture of Tomcat's threading model was fuzzy, yet Tomcat is the server we use most day to day, so I worked through its threading model and wrote up this tutorial.

I. How Tomcat handles incoming requests
Tomcat supports three modes for receiving and processing requests: BIO, NIO, and APR.
BIO mode: blocking I/O, meaning Tomcat uses the traditional Java I/O API (the java.io package and its subpackages). Tomcat 7 and earlier versions run in BIO mode by default. Because each request is handled by a dedicated thread, the thread overhead is high, it cannot cope with highly concurrent workloads, and its performance is the worst of the three modes. When Tomcat starts in BIO mode, the startup log indicates it in the protocol handler name:
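For instance, with a default HTTP connector on port 8080 you would typically see something like the following line (the exact handler name depends on the configured port and protocol):

INFO: Initializing ProtocolHandler ["http-bio-8080"]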
NIO mode: the newer I/O API introduced in Java SE 1.4 and later (the java.nio package and its subpackages). It is a buffer-based Java API that provides non-blocking I/O and offers better concurrency than traditional I/O (BIO). Even before Tomcat 8 it is easy to make Tomcat run in NIO mode: in the conf/server.xml file under the Tomcat installation directory, change the connector configuration from
<connector port= "8080" protocol= "http/1.1" connectiontimeout= "20000" redirectport= "8443"/>
to
<connector port= "8080" protocol= "Org.apache.coyote.http11.Http11NioProtocol" connectiontimeout= "20000" redirectport= "8443"/>
In Tomcat 8 and above, NIO is the default mode and no additional configuration is required.
APR mode: simply put, it addresses asynchronous I/O at the operating-system level, greatly improving the server's processing and response performance. It is also the preferred mode for running Tomcat under highly concurrent workloads.
Enabling this mode is somewhat cumbersome and requires installing a few dependent libraries. Taking Tomcat 8.0.35 on a CentOS 7 minimal environment as an example, the prerequisites are:
APR 1.2+ development headers (libapr1-dev package)
OpenSSL 0.9.7+ development headers (libssl-dev package)
JNI headers from a Java compatible JDK 1.4+
GNU development environment (GCC, make)
II. Tomcat's NioEndpoint
Let's start with a brief review of how a typical NIO server is implemented today, drawing on the InfoQ article on the Netty threading model from the Netty series:
One or more acceptor threads, each with its own selector. The acceptor is only responsible for accepting new connections; once a connection is established, it is registered with one of the worker threads.
Multiple worker threads, sometimes called IO threads, dedicated to IO reads and writes. There are two common implementations. In the first, as in Netty, each worker thread has its own selector and handles the IO read/write events of multiple connections, with each connection belonging to exactly one thread. In the second, dedicated threads are responsible only for monitoring IO events; these threads have their own selectors, and when a read or write event fires they do not perform the IO themselves (as the first approach does), but instead wrap the IO operation in a Runnable and hand it to a worker thread pool for execution. In this case a single connection may be operated on concurrently by multiple threads, which increases concurrency but can also introduce multithreading issues that must be handled carefully. Tomcat's NIO model is the second kind.
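A minimal sketch of this second model (illustrative only; the class and the handleIO placeholder are made up, not Tomcat's code): a poller thread owns a Selector and, for every ready connection, submits a Runnable to a worker pool instead of doing the IO itself.

import java.nio.channels.*;
import java.util.*;
import java.util.concurrent.*;

public class PollerSketch implements Runnable {

    private final Selector selector;
    // worker (IO) thread pool that actually performs the reads and writes
    private final ExecutorService workers = Executors.newFixedThreadPool(200);

    public PollerSketch(Selector selector) {
        this.selector = selector;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                selector.select();                            // wait for IO events on registered connections
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    key.interestOps(0);                       // stop watching while a worker owns the socket
                    SocketChannel channel = (SocketChannel) key.channel();
                    workers.execute(() -> handleIO(channel)); // the IO itself is done by the worker pool
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void handleIO(SocketChannel channel) {
        // hypothetical placeholder: read the request, run the application logic, write the response
    }
}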
This calls for a closer look at Tomcat's NioEndpoint implementation. The diagram below is borrowed from the article mentioned earlier on troubleshooting the Tomcat high-concurrency bug that Mtop triggered during the network outage (the fix has been accepted by Apache).
This diagram outlines NioEndpoint's rough execution flow. What it does not show is the worker thread pool, which continuously executes the IO read/write work in the form of SocketProcessor tasks (each a Runnable). In other words, the Poller here only listens for socket IO events and then wraps the work as a SocketProcessor handed to the worker thread pool for processing. Below we look in detail at the Acceptor, Poller, and SocketProcessor inside NioEndpoint.
The main flow by which they handle client connections is shown in the figure:
The Acceptor and Worker in the figure are thread pools; the Poller is a single thread. Note that, as with the BIO implementation, by default no <Executor> is configured in server.xml and the connector uses its internal worker thread pool; if an <Executor> is configured, that executor is used as the worker thread pool, running on top of java.util.concurrent.ThreadPoolExecutor (see the Java concurrency series article on ThreadPoolExecutor).
Acceptor
This is the thread that accepts sockets. Although this is an NIO connector, it still accepts connections in the traditional blocking accept() fashion, obtaining a SocketChannel object which is then wrapped in a Tomcat class, org.apache.tomcat.util.net.NioChannel. The NioChannel object is in turn wrapped in a PollerEvent object, and the PollerEvent is pushed onto the events queue. This is a typical producer-consumer pattern: the Acceptor and Poller threads communicate through a queue, with the Acceptor producing into the events queue and the Poller consuming from it.
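A minimal sketch of this producer side, assuming a blocking ServerSocketChannel and reducing the NioChannel/PollerEvent wrappers to a plain queue of SocketChannel (illustrative, not Tomcat's source):

import java.nio.channels.*;
import java.util.concurrent.*;

public class AcceptorSketch implements Runnable {

    private final ServerSocketChannel server;
    private final BlockingQueue<SocketChannel> events;   // stands in for the Poller's events queue

    public AcceptorSketch(ServerSocketChannel server, BlockingQueue<SocketChannel> events) {
        this.server = server;
        this.events = events;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                SocketChannel socket = server.accept();   // traditional blocking accept
                socket.configureBlocking(false);          // the Poller will register it with a Selector
                events.offer(socket);                     // produce: hand the new connection to the Poller
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}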
Poller
The Poller thread maintains a Selector object; the NIO implementation is built around this Selector. The connector actually contains more than one Selector: another Selector is used to control timeouts when reading and writing socket data, and it is introduced below under BlockPoller. The Selector maintained by the Poller thread can be thought of as the main Selector. The Poller is the core thread of the NIO implementation. First, as the consumer of the events queue, it takes PollerEvent objects off the queue and registers the channel in each event with the main Selector for the OP_READ event; it then performs a select on the main Selector, iterates over the sockets that have data ready to read, obtains an available worker thread from the worker thread pool, and hands the socket to that worker. The whole flow is a typical NIO implementation.
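Continuing the hypothetical PollerSketch above, the consumer side of the events queue might look roughly like this. Note that registration happens on the Poller thread itself, which avoids contending with select() running on another thread:

// Consumer side of the events queue (would live inside the PollerSketch above):
// take newly accepted channels off the queue and register them for OP_READ
// with the main selector before calling select().
private final BlockingQueue<SocketChannel> events = new LinkedBlockingQueue<>();

private void registerPendingChannels() throws java.io.IOException {
    SocketChannel ch;
    while ((ch = events.poll()) != null) {           // consume what the Acceptor produced
        ch.register(selector, SelectionKey.OP_READ); // watch the connection for readable data
    }
}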
Worker
After a worker thread gets the socket passed in by the Poller, it wraps the socket in a SocketProcessor object. It then obtains an Http11NioProcessor object from the Http11ConnectionHandler, and from the Http11NioProcessor onward the logic goes through the CoyoteAdapter, just as in the BIO implementation. In the worker thread, the HTTP request is read from the socket, parsed into an HttpServletRequest object, and dispatched to the appropriate servlet; once the logic completes, the response is written back to the client through the socket. When reading data from and writing data to the socket, the worker does not register OP_READ or OP_WRITE events with the main Selector, as typical non-blocking NIO would; instead the reads and writes on the socket are performed in a blocking fashion. NIO's Selector mechanism is still used for timeout control, but the Selector involved is not the main Selector maintained by the Poller thread; it is a Selector maintained by the BlockPoller thread, called the auxiliary Selector.
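A simplified stand-in for what such a worker task does (illustrative only, not Tomcat's SocketProcessor; in real Tomcat the blocking reads and writes with timeouts go through the auxiliary BlockPoller selector described above):

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class SocketProcessorSketch implements Runnable {

    private final SocketChannel channel;

    public SocketProcessorSketch(SocketChannel channel) {
        this.channel = channel;
    }

    @Override
    public void run() {
        ByteBuffer buffer = ByteBuffer.allocate(8192);
        try {
            channel.read(buffer);                    // read the HTTP request bytes from the socket
            // ... parse into an HttpServletRequest and dispatch to the servlet ...
            buffer.clear();
            buffer.put("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n".getBytes());
            buffer.flip();
            channel.write(buffer);                   // write the response back to the client
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}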
III. Tomcat 8's concurrency control parameters

The Tomcat version used in this article is Tomcat 8.5. The configuration parameters for Tomcat 8.5 can be found here.
acceptCount
The documentation describes it as follows:
The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.
This parameter actually touches on a much bigger topic, the details of the TCP three-way handshake, which will be discussed in detail later. A simple way to understand it: before a connection is accepted by the ServerSocketChannel, it sits in this queue, and acceptCount is the maximum length of that queue. ServerSocketChannel.accept() continuously takes connection requests off this queue. So if accept() removes connections more slowly than they arrive, the queue backs up, and once it is full, new connections are refused.
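A minimal sketch of the relationship, assuming a plain NIO server socket (hypothetical code, not Tomcat's): acceptCount plays the role of the backlog argument when binding the listening socket.

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BindSketch {
    public static void main(String[] args) throws Exception {
        int acceptCount = 100;                                  // Tomcat's default acceptCount
        ServerSocketChannel server = ServerSocketChannel.open();
        // backlog = acceptCount: connections wait in this OS-level queue
        // until an acceptor thread calls accept() on them
        server.bind(new InetSocketAddress(8080), acceptCount);
    }
}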
acceptorThreadCount
The documentation describes it as follows:
The number of threads to be used to accept connections. Increase this value on a multi CPU machine, although you would never really need more than 2. Also, with a lot of non keep alive connections, you might want to increase this value as well. The default value is 1.
The acceptor thread is only responsible for taking connections that have already been established off the queue described above. At startup, a ServerSocketChannel listens on the connector port (for example 8080), and multiple acceptor threads can concurrently call that ServerSocketChannel's accept() method to obtain new connections. The acceptorThreadCount parameter is simply the number of acceptor threads used.
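Reusing the hypothetical AcceptorSketch from earlier (with its server and events arguments), starting several acceptor threads on the same listening channel would look roughly like this:

// acceptorThreadCount simply controls how many acceptor threads call accept()
// concurrently on the same listening channel (the default is 1)
int acceptorThreadCount = 2;
for (int i = 0; i < acceptorThreadCount; i++) {
    Thread t = new Thread(new AcceptorSketch(server, events), "acceptor-" + i);
    t.setDaemon(true);
    t.start();
}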
maxConnections
The documentation describes it as follows:
The maximum number of connections that the server will accept and process at any given time. When this number has been reached, the server will accept, but not process, one further connection. This additional connection will be blocked until the number of connections being processed falls below maxConnections, at which point the server will start accepting and processing new connections again. Note that once the limit has been reached, the operating system may still accept connections based on the acceptCount setting. The default value varies by connector type. For NIO and NIO2, the default is 10000. For APR/native, the default is 8192.
Note that for APR/native on Windows, the configured value will be reduced to the highest multiple of 1024 that is less than or equal to maxConnections. This is done for performance reasons. If set to a value of -1, the maxConnections feature is disabled and connections are not counted.
This is Tomcat's control over the number of connections, i.e. the cap on the maximum number of simultaneous connections. Once the current connection count exceeds the limit (10000 by default for NIO), the acceptor thread described above is blocked; that is, it stops calling ServerSocketChannel.accept() to take established connections off the queue. However, this does not prevent new connections from being established: establishing a new connection is not controlled by the acceptor, which merely fetches already-established connections from the queue. So when the connection count has exceeded maxConnections, new connections can still be established; they are stored in the acceptCount-sized queue described above, have not yet been picked up by the acceptor, and are in the "established but not yet processed" state. Once the connection count drops below maxConnections, the acceptor thread unblocks and resumes calling ServerSocketChannel.accept() to fetch new connections from the acceptCount-sized queue, whose IO events are then processed.
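A sketch of the idea, using a java.util.concurrent.Semaphore as a stand-in for the latch Tomcat uses internally (Tomcat has its own LimitLatch class for this; the code below is illustrative, not Tomcat's source):

import java.nio.channels.*;
import java.util.concurrent.Semaphore;

public class LimitedAcceptorSketch implements Runnable {

    private final ServerSocketChannel server;
    private final Semaphore connectionLimit = new Semaphore(10000);  // maxConnections default for NIO

    public LimitedAcceptorSketch(ServerSocketChannel server) {
        this.server = server;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                connectionLimit.acquire();               // blocks once maxConnections is reached
                SocketChannel socket = server.accept();  // meanwhile the OS may still queue up to acceptCount connections
                // ... hand the socket to the Poller; release the permit when the connection is closed ...
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}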
maxThreads
The documentation describes it as follows:
The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.
This can simply be understood as the number of worker threads described above; they are the threads that actually handle the IO events, and the default is 200.
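For intuition, here is a rough Java approximation of that default sizing using java.util.concurrent.ThreadPoolExecutor. Tomcat's internal pool is a customized executor, so this is only an approximation, not its actual implementation:

import java.util.concurrent.*;

public class WorkerPoolSketch {
    public static ExecutorService create() {
        // Note: with an unbounded LinkedBlockingQueue a standard ThreadPoolExecutor
        // never grows past corePoolSize (extra tasks are queued instead); Tomcat's
        // customized pool creates threads up to maxThreads before queueing.
        return new ThreadPoolExecutor(
                10,                                  // corePoolSize (Tomcat's default minSpareThreads is 10)
                200,                                 // maximumPoolSize, i.e. maxThreads
                60L, TimeUnit.SECONDS,               // idle threads beyond the core are reclaimed
                new LinkedBlockingQueue<Runnable>());
    }
}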