Reposted from http://zhanjindong.com
Recently I have been working on a production problem. The symptom is:
Tomcat hits a traffic peak every morning, with concurrency exceeding 3000. The end result is that the Tomcat thread pool fills up, and the logs show many requests taking more than 1s.
The server hardware is adequate, the Tomcat version is 7.0.54, and it is configured as follows:
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="..." minSpareThreads="..."/>
<Connector executor="tomcatThreadPool" port="8084" protocol="org.apache.coyote.http11.Http11AprProtocol"
    connectionTimeout="60000" keepAliveTimeout="30000" maxKeepAliveRequests="8000"
    maxHttpHeaderSize="8192" URIEncoding="UTF-8" enableLookups="false" acceptCount="..."
    disableUploadTimeout="true" redirectPort="8443"/>
A thread dump showed that only a few threads were actually in the RUNNABLE state; most threads were in TIMED_WAITING:
So everyone started puzzling over why the thread count climbed to 3000, and why, even after the peak passed, the number of threads never came back down.
Our first thought was:
The back-end application is processing slowly, and the resulting "blocking" pushes the number of front-end threads up.
However, after an optimized version went live, the situation improved, but the thread pool still eventually reached the maximum of 3000.
================================== Split Line =========================================
That is the overall background. Skipping the intermediate investigation, here are the conclusions I have reached so far:
1. First, why are the threads not being released?
Briefly, here is how I verified the Tomcat (7.0.54) thread pool roughly works:
- When Tomcat starts and no requests have arrived yet, the number of threads (throughout this post this refers to the thread pool's threads) is 0;
- Once a request arrives, Tomcat initializes the number of threads set by minSpareThreads;
- Tomcat does not actively shrink the thread pool; only when it determines that no requests are coming in at all does it shrink the pool to the size set by minSpareThreads;
- Earlier versions (Tomcat 6) had a maxSpareThreads parameter, but it was removed in Tomcat 7, so as long as even a single request keeps arriving, Tomcat will not release the extra idle threads.
As for why Tomcat removed the maxSpareThreads parameter, I think it was also for performance reasons: constantly shrinking the thread pool is certainly not cheap, and the benefit of keeping extra threads waiting is that a new request can be handled immediately when it arrives.
- Also, a large number of waiting Tomcat threads do not consume CPU, but they do consume some JVM memory.
Addendum: the highlighted conclusions above are somewhat problematic. Further verification shows that this behavior appears only when keep-alive is used (and both client and server support it); if the client does not use keep-alive, the threads are recycled as the TCP connections are released.
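If you want to watch the pool size yourself without taking a full thread dump, one option is to count the worker threads by name from inside the same JVM. This is only an illustrative sketch; it assumes the namePrefix from the Executor configuration above ("catalina-exec-") and would have to run inside Tomcat, for example from a debug servlet:

```java
public class PoolThreadCounter {
    // Counts live threads whose names start with the Executor's namePrefix
    // ("catalina-exec-" in the configuration shown earlier).
    public static int countPoolThreads() {
        int count = 0;
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.getName().startsWith("catalina-exec-")) {
                count++;
            }
        }
        return count;
    }
}
```

Calling this periodically during and after a load test makes it easy to see whether the pool ever shrinks.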
Keep-alive related parameters in Tomcat:
maxKeepAliveRequests:
The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.
keepAliveTimeout:
The number of milliseconds this Connector will wait for another HTTP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute. Use a value of -1 to indicate no (i.e. infinite) timeout.
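For what it's worth, the keep-alive observation in the addendum above is easy to exercise from the client side. The sketch below is a hypothetical test client (the URL and request count are made up); it turns off keep-alive in the JDK HTTP client through the standard http.keepAlive system property, so every request opens and then closes its own TCP connection:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class NoKeepAliveClient {
    public static void main(String[] args) throws Exception {
        // Disable keep-alive in the JDK client: each request will use
        // its own TCP connection and close it when the response is read.
        System.setProperty("http.keepAlive", "false");

        // Hypothetical endpoint; substitute your own application's URL.
        URL url = new URL("http://localhost:8084/yourapp/ping");
        for (int i = 0; i < 100; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) { /* drain the response body */ }
            }
            conn.disconnect();
        }
    }
}
```

With a client like this, the Tomcat worker threads should be recycled as the connections are released, as described above.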
2. Why is the thread pool full?
This is the heart of what I am still wrestling with: is it related to the application's slow performance or not? My current conclusion is that it is, but the key factor is concurrency.
- The number of threads in the Tomcat thread pool is related to your instantaneous concurrency. For example, with maxThreads set to 1000, if the instantaneous concurrency reaches 1000, then Tomcat will spin up 1000 threads to handle it, no matter how fast your application processes requests.
So what determines how many threads Tomcat ends up with? That also depends on several Tomcat parameters; the official documentation is the most reliable reference:
maxThreads:
The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.
maxConnections:
The maximum number of connections that the server will accept and process at any given time. When this number has been reached, the server will accept, but not process, one further connection. This additional connection will be blocked until the number of connections being processed falls below maxConnections, at which point the server will start accepting and processing new connections again. Note that once the limit has been reached, the operating system may still accept connections based on the acceptCount setting. The default value varies by connector type. For BIO the default is the value of maxThreads unless an Executor is used, in which case the default will be the value of maxThreads from the executor. For NIO the default is 10000. For APR/native, the default is 8192.
......
acceptCount:
The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.
minSpareThreads:
The minimum number of threads always kept running. If not specified, the default of 10 is used.
My simple understanding of these is:
- maxThreads: the maximum number of threads the Tomcat thread pool can grow to;
- maxConnections: the number of requests (connections) Tomcat can process concurrently;
- acceptCount: the maximum length of the queue Tomcat maintains for pending connections;
- minSpareThreads: the size the Tomcat thread pool is initialized to, i.e. the pool will always keep at least this many threads.
It is easy to confuse the two parameters maxThreads and maxConnections:
maxThreads is the number of threads the Tomcat thread pool can grow to, while maxConnections is the number of connections Tomcat can process at the same instant. For example, with maxThreads=1000 and maxConnections=800, suppose the instantaneous concurrency is 1000: the Tomcat thread count will end up at 800, i.e. 800 requests are processed simultaneously, while the remaining 200 try to "queue up". If acceptCount=100, only 100 of them can wait in the queue and the other 100 requests will be rejected.
Note: as described above, Tomcat will use 800 threads to process requests only at that peak instant; once things stabilize, perhaps only a handful of threads are RUNNABLE at any moment, with most in TIMED_WAITING, provided your application responds quickly enough. So the parameter that really determines how many threads Tomcat can reach is maxConnections, together with the concurrency; when the concurrency exceeds maxConnections, requests are queued, and how quickly they are answered depends on your application's performance.
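To make the accounting in that example explicit, here is a tiny sketch of the simplified model described above. It is only my reading of the behaviour in this post, not Tomcat's actual code:

```java
public class ConnectionBudget {
    // Splits `concurrent` simultaneous connections into processed, queued
    // and rejected, following the simplified model described in the text.
    static int[] split(int concurrent, int maxConnections, int acceptCount) {
        int processed = Math.min(concurrent, maxConnections);
        int queued = Math.min(concurrent - processed, acceptCount);
        int rejected = concurrent - processed - queued;
        return new int[] { processed, queued, rejected };
    }

    public static void main(String[] args) {
        // The example from the text: 1000 concurrent connections,
        // maxConnections=800, acceptCount=100.
        int[] r = split(1000, 800, 100);
        System.out.printf("processed=%d queued=%d rejected=%d%n", r[0], r[1], r[2]);
        // prints: processed=800 queued=100 rejected=100
    }
}
```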
The conclusions above are my own verification and summary; if they are wrong, I beg for corrections!
========================== Update (2015-1-20) ===========================
The conclusions above have serious problems, which I hereby correct. I am very sorry if they misled anyone.
The main mistaken conclusions were:
- Tomcat does not actively shrink the thread pool; only when it determines that no requests are coming in at all does it shrink the pool to the size set by minSpareThreads;
- Earlier versions (Tomcat 6) had a maxSpareThreads parameter, but it was removed in Tomcat 7, so as long as even a single request keeps arriving, Tomcat will not release the extra idle threads.
In fact, Tomcat does stop threads that have been idle for a long time. Tomcat also has a parameter called maxIdleTime:
(int) The number of milliseconds before an idle thread shuts down, unless the number of active threads is less than or equal to minSpareThreads. Default value is 60000 (1 minute).
You can see from this description that Tomcat stops threads that have been idle longer than a certain time, and that time is maxIdleTime. So why did I not observe any threads being released in my earlier tests? Is something other than this parameter involved in whether pool threads get freed, and in how many are freed? It turns out it also depends on how many requests Tomcat is currently processing per second, which you can think of as the TPS reported by JMeter or LoadRunner. The following table shows the relationship between the thread count, TPS, and maxIdleTime:
| TPS | maxIdleTime (ms) | Thread Count |
| --- | --- | --- |
| 10 | 60,000 | 600 |
| 5 | 60,000 | 300 |
| 1 | 60,000 | 60 |
And so on. The Thread Count column is of course approximate (give or take a few), but it basically follows this rule:
Thread Count = min(max((TPS * maxIdleTime) / 1000, minSpareThreads), maxThreads)
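Written out as code, the rule above is just a clamp between minSpareThreads and maxThreads. The sketch below only evaluates that formula; the minSpareThreads=10 (the documented default) and maxThreads=3000 values are assumptions used purely for illustration:

```java
public class ThreadCountEstimate {
    // Empirical rule from the table above:
    // threads ≈ min(max(TPS * maxIdleTime / 1000, minSpareThreads), maxThreads)
    static long estimate(double tps, long maxIdleTimeMs,
                         int minSpareThreads, int maxThreads) {
        long busy = Math.round(tps * maxIdleTimeMs / 1000.0);
        return Math.min(Math.max(busy, minSpareThreads), maxThreads);
    }

    public static void main(String[] args) {
        // Reproduce the table: maxIdleTime = 60000 ms.
        System.out.println(estimate(10, 60_000, 10, 3000)); // 600
        System.out.println(estimate(5,  60_000, 10, 3000)); // 300
        System.out.println(estimate(1,  60_000, 10, 3000)); // 60
    }
}
```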
Of course, the thread count never drops below minSpareThreads, which is consistent with the earlier conclusion. Let me now make a bold guess (I will go back and verify it against the source code; or if any reader already knows, please tell me, thanks):
Each time the Tomcat thread pool takes a thread from the head of its queue to handle a request, the thread goes back to the tail of the queue when it finishes, which means two consecutive requests are not handled by the same thread. A thread is freed when it has been idle for longer than maxIdleTime.
Suppose the thread pool spiked to 1000 at the peak, and Tomcat takes about 1s to process each request (JMeter shows a TPS of roughly 1). Then within the default maxIdleTime of 60s only 60 threads get used, so the pool should theoretically shrink back to 60 (assuming minSpareThreads is no greater than 60). Also note that this has nothing to do with whether keep-alive is used (my earlier test conclusion came about because using keep-alive degraded the test program's performance and lowered the TPS considerably).
That is why in my earlier tests, and in our production environment, the thread count only ever went up: even after the peak, our service still sees more than 100 requests per second, and 100 * 60 = 6000 > 3000, so each of the 3000 threads is certain to be reused before it can be recycled.
So now there is another question: why does a normal load of 100 requests per second not cause the thread count to explode? Where is the bottleneck that pushed the thread count up to 3000? The conclusion I reached above is actually not very accurate:
The parameter that really determines how many threads Tomcat can reach is maxConnections, together with the concurrency; when the concurrency exceeds maxConnections, requests are queued, and how quickly they are answered depends on your application's performance.
What was not clear there is the notion of "concurrency". Concurrency always needs a unit of time (typically 1s), and to be precise it should be the number of requests arriving while Tomcat is processing a single request. For example, if Tomcat takes 1s to process one request and 3000 requests arrive within that 1s, then Tomcat's thread count will climb toward 3000; maxConnections is only an upper limit for Tomcat.
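As a rough sanity check of this way of counting, the number of busy threads can be estimated as the arrival rate multiplied by the per-request processing time, capped by maxConnections and maxThreads. The sketch below uses illustrative numbers only: 8192 is the APR connector's documented maxConnections default quoted earlier, and the ~100 ms response time for the "normal" case is an assumption:

```java
public class ConcurrencyEstimate {
    // Rough estimate: threads busy at once ≈ requests per second ×
    // seconds per request, capped by maxConnections and maxThreads.
    static long busyThreads(double requestsPerSecond, double secondsPerRequest,
                            int maxConnections, int maxThreads) {
        long inFlight = Math.round(requestsPerSecond * secondsPerRequest);
        return Math.min(inFlight, Math.min(maxConnections, maxThreads));
    }

    public static void main(String[] args) {
        // 3000 requests arriving while each request takes 1s: heads toward 3000 threads.
        System.out.println(busyThreads(3000, 1.0, 8192, 3000)); // 3000
        // 100 requests/s with ~100 ms responses: only about 10 threads busy at once.
        System.out.println(busyThreads(100, 0.1, 8192, 3000)); // 10
    }
}
```

This matches the question raised above: 100 requests per second with fast responses keeps only a handful of threads busy at any instant, whereas a burst of slow requests can drive the count toward the cap.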
Discussion and corrections are welcome!
Addendum: JMeter makes it easy to control the request rate.
Concurrency and the number of Tomcat threads