Tomcat concurrency and thread count


Source: http://www.cnblogs.com/zhanjindong/p/concurrent-and-tomcat-threads-updated.html


The first half of this article contains serious errors; please see the 2015-1-20 Update section at the end.

I have recently been working on a production issue. The symptom:

Tomcat hits a traffic peak every morning; at the peak, concurrency exceeds 3000, the Tomcat thread pool eventually fills up, and the logs show many requests taking more than 1 s.

The server hardware is solid, the Tomcat version is 7.0.54, and it is configured as follows:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="3000" minSpareThreads="a"/>

    <Connector executor="tomcatThreadPool" port="8084" protocol="org.apache.coyote.http11.Http11AprProtocol"
               connectionTimeout="60000"
               keepAliveTimeout="30000"
               maxKeepAliveRequests="8000"
               maxHttpHeaderSize="8192"
               URIEncoding="UTF-8"
               enableLookups="false"
               acceptCount="1000"
               disableUploadTimeout="true"
               redirectPort="8443"/>

A thread dump showed that very few threads were actually in the RUNNABLE state; most were in TIMED_WAITING.

So we started wondering why the thread count climbed to 3000, and we also found that the count did not drop even after the peak had passed.

The first thing we thought of was:

The backend application is slow, and the resulting "blocking" pushes up the number of front-end threads.

But after releasing an optimized version we found that, although the climb was less steep, the thread pool still eventually hit its maximum of 3000.

================================== Split Line =========================================

The above is the big picture; skipping the investigation in between, let me go straight to the conclusions I have reached so far:

1. First, why are the threads not released?

In short, what I have verified is roughly how the Tomcat (7.0.54) thread pool behaves: when Tomcat starts and no requests have arrived, the thread count (here and below this always means the thread pool) is 0; once requests arrive, Tomcat initializes the number of threads set by minSpareThreads. Tomcat does not actively shrink the thread pool; only when it determines that no requests are coming in at all will it shrink the pool back to the minSpareThreads size. Versions up to Tomcat 6 had a maxSpareThreads parameter, but it was removed in 7, so as long as there is even a single incoming request now and then, Tomcat will not release its surplus idle threads.

As for why Tomcat removed the maxSpareThreads parameter, I suspect it was also done for performance: constantly shrinking the thread pool is certainly not cheap, and the benefit of keeping surplus threads in the waiting state is that a new request can be handled immediately. A large number of Tomcat threads in the waiting state does not consume CPU, though it does consume some JVM memory.

Addendum: the claim above that Tomcat never shrinks the pool is problematic. Further testing shows it only appears to hold when keep-alive is in use (supported by both client and server); if the client does not use keep-alive, the thread is recycled as soon as the TCP connection is released.

Keep-alive related parameters in Tomcat:

maxKeepAliveRequests:

The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.

keepAliveTimeout:

The number of milliseconds this Connector will wait for another HTTP request before closing the connection. The default value is to use the value that has been set for the connectionTimeout attribute. Use a value of -1 to indicate no (i.e. infinite) timeout.
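
To see the client side of this, here is a minimal sketch of a Java client that opts out of keep-alive; the class name, URL and port are made up for illustration:

    // Minimal sketch: a client that opts out of HTTP keep-alive, so each request
    // rides on a fresh TCP connection. URL/port are placeholders.
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class NoKeepAliveClient {
        public static void main(String[] args) throws Exception {
            // Optional global switch: tell the JDK HTTP client not to reuse connections at all.
            System.setProperty("http.keepAlive", "false");

            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://localhost:8084/").openConnection();
            // Ask the server to close the connection after this response.
            conn.setRequestProperty("Connection", "close");
            System.out.println("HTTP status: " + conn.getResponseCode());
            conn.disconnect();
        }
    }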

2. Why does the thread pool fill up?

This is the core of what I am still wrestling with. Is it caused by slow application performance after all? My current conclusion is that it plays a part, but the key is concurrency. The number of threads in Tomcat's thread pool tracks your instantaneous concurrency: for example, with maxThreads set to 1000, if the instantaneous concurrency reaches 1000, Tomcat will end up with 1000 threads handling it, regardless of how fast your application is.

So how much concurrency does it take to drive the Tomcat thread count that high? That depends on several Tomcat parameters, and the official documentation is the most reliable reference:

maxThreads:

The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.

maxConnections:

The maximum number of connections that the server will accept and process at any given time. When this number has been reached, the server will accept, but not process, one further connection. This additional connection will be blocked until the number of connections being processed falls below maxConnections, at which point the server will start accepting and processing new connections again. Note that once the limit has been reached, the operating system may still accept connections based on the acceptCount setting. The default value varies by connector type. For BIO the default is the value of maxThreads unless an Executor is used, in which case the default will be the value of maxThreads from the executor. For NIO the default is 10000. For APR/native, the default is 8192.

......

acceptCount:

The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.

minSpareThreads:

The minimum number of threads always kept running. If not specified, the default of 10 is used.

My simple understanding is:

maxThreads: the maximum number of threads the Tomcat thread pool can grow to;

maxConnections: the number of requests (connections) Tomcat can process concurrently;

acceptCount: the maximum length of the pending-connection queue Tomcat maintains;

minSpareThreads: the initial size of the Tomcat thread pool, or put differently, the minimum number of threads the pool will keep around.

It is easy to confuse the two parameters maxThreads and maxConnections:

maxThreads is the number of threads the Tomcat thread pool can grow to at most, while maxConnections is the number of connections Tomcat will handle concurrently at any instant. For example, with maxThreads=1000 and maxConnections=800, suppose the instantaneous concurrency is 1000: Tomcat will end up with 800 threads, that is, it processes 800 requests, while the remaining 200 go into the accept queue; if acceptCount=100, then 100 of those requests will be refused.
Note: per the above, at that instant only 800 threads are handling requests, but once things settle down only a few of them may actually be in the RUNNABLE state; most are TIMED_WAITING, provided your application is fast enough. So what really determines the maximum possible number of Tomcat threads is the maxConnections parameter together with the concurrency: once concurrency exceeds maxConnections, requests start to queue, and from then on response time depends on your application's performance.
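
To make the division of labour between the three limits concrete, here is a toy sketch, not Tomcat's real implementation: the listen backlog plays the role of acceptCount, a semaphore plays the role of maxConnections, and a bounded thread pool plays the role of maxThreads. The class name, port and numbers are all made up.

    // Toy model of acceptCount / maxConnections / maxThreads; NOT Tomcat's actual code.
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class ToyConnector {
        static final int ACCEPT_COUNT    = 100;  // like acceptCount: OS accept-queue backlog
        static final int MAX_CONNECTIONS = 800;  // like maxConnections: connections in flight at once
        static final int MAX_THREADS     = 1000; // like maxThreads: request-processing threads

        public static void main(String[] args) throws Exception {
            // Connections beyond MAX_CONNECTIONS pile up in the OS backlog (at most ACCEPT_COUNT of them).
            ServerSocket server = new ServerSocket(8084, ACCEPT_COUNT);
            Semaphore connectionLimit = new Semaphore(MAX_CONNECTIONS);
            ExecutorService workers = Executors.newFixedThreadPool(MAX_THREADS);

            while (true) {
                connectionLimit.acquire();      // stop accepting once 800 connections are being processed
                Socket socket = server.accept();
                workers.submit(() -> {
                    try (Socket s = socket) {
                        // ... read the request and write a response ...
                    } catch (Exception ignored) {
                        // ignore for the sake of the sketch
                    } finally {
                        connectionLimit.release();
                    }
                });
            }
        }
    }

With the numbers above this mirrors the example in the text: about 800 connections are processed at once, a further 100 or so wait in the backlog, and the rest are refused.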

The conclusions above come from my own testing and summary; if anything is wrong, corrections are very welcome...

========================== Update (2015-1-20) ===========================

The conclusions above have serious problems, which I correct here; my apologies to anyone who was misled.

The main error is this claim: "Tomcat does not actively shrink the thread pool; only when it determines that no requests are coming in at all will it shrink the pool back to the minSpareThreads size. Versions up to Tomcat 6 had a maxSpareThreads parameter, but it was removed in 7, so as long as there is even a single incoming request now and then, Tomcat will not release its surplus idle threads."

Tomcat does in fact stop threads that have been idle for a long time. Tomcat has a parameter for this called maxIdleTime:

(int) The number of milliseconds before an idle thread is shut down, unless the number of active threads is less than or equal to minSpareThreads. Default value is 60000 (1 minute).

In fact you can already see from this description that Tomcat stops threads that have been idle for longer than maxIdleTime; I just never observed threads being released in my earlier tests. Further testing showed that, governed by this parameter, threads in the pool are indeed released. How many remain is also related to the number of requests Tomcat currently handles per second (think of it as the TPS reported by JMeter or LoadRunner). The relationship between thread count, TPS and maxIdleTime can be seen clearly in the following table:

    TPS    maxIdleTime (ms)    Thread count
    10     60,000              600
    5      60,000              300
    1      60,000              60

The thread count is of course approximate and fluctuates up and down a bit, but it basically follows one rule:

Thread count = min(max(TPS * maxIdleTime / 1000, minSpareThreads), maxThreads)
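
As a quick sanity check of that rule, here is a purely illustrative calculation using the numbers from this article (the minSpareThreads value of 50 below is just a placeholder):

    // Purely illustrative: the steady-state thread-count rule from the text.
    public class ThreadCountEstimate {
        static long estimate(long tps, long maxIdleTimeMs, long minSpareThreads, long maxThreads) {
            long reusedWithinIdleWindow = tps * maxIdleTimeMs / 1000; // threads touched within one idle window
            return Math.min(Math.max(reusedWithinIdleWindow, minSpareThreads), maxThreads);
        }

        public static void main(String[] args) {
            // 100 requests/s with the default 60 s maxIdleTime would keep 6000 threads in use,
            // but the pool is capped at maxThreads=3000 -- which is what we saw in production.
            System.out.println(estimate(100, 60_000, 50, 3000)); // prints 3000
            System.out.println(estimate(1, 60_000, 50, 3000));   // prints 60, matching the table above
        }
    }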

Of course, this thread count will not drop below minSpareThreads, which is consistent with the earlier conclusion. Let me now make a bold guess (I will go back and verify it against the source code; if any reader already knows, please tell me, thanks):

The Tomcat thread pool takes a thread from the head of its queue to handle each request and puts it back at the tail when done, which means two consecutive requests are not handled by the same thread; a thread is released once it has been idle for longer than maxIdleTime.
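
As far as I can tell, Tomcat's executor is built on a thread pool very similar to the JDK's java.util.concurrent.ThreadPoolExecutor, with minSpareThreads playing the role of the core pool size, maxThreads the maximum pool size, and maxIdleTime the keep-alive time. The standalone sketch below shows the same reclaim behaviour using only the JDK class; all numbers are made up and shrunk so it runs in seconds:

    // Standalone sketch: threads above the core size are reclaimed once they have been
    // idle longer than the keep-alive time, analogous to Tomcat's maxIdleTime.
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class IdleReclaimDemo {
        public static void main(String[] args) throws Exception {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    10,                      // plays the role of minSpareThreads
                    200,                     // plays the role of maxThreads
                    5, TimeUnit.SECONDS,     // plays the role of maxIdleTime
                    new SynchronousQueue<Runnable>());

            // Simulate a burst of 100 concurrent "requests", each taking 1 second.
            for (int i = 0; i < 100; i++) {
                pool.execute(() -> {
                    try { Thread.sleep(1_000); } catch (InterruptedException ignored) { }
                });
            }
            System.out.println("threads right after the burst: " + pool.getPoolSize()); // ~100

            Thread.sleep(10_000); // wait longer than the keep-alive time
            System.out.println("threads after going idle:      " + pool.getPoolSize()); // back to ~10

            pool.shutdown();
        }
    }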

Suppose the pool jumps to 1000 threads at the peak. After the peak, if Tomcat handles one request per second (set the rate to 1 in JMeter), then within the default maxIdleTime of 60 s only 60 of the pool's threads get used, so in theory the pool eventually shrinks to 60 (assuming minSpareThreads is not greater than 60). Another note: this has nothing to do with whether keep-alive is used (my earlier conclusion arose because enabling keep-alive degraded the test program's performance and lowered the TPS a lot).

That explains why in my earlier tests, and in our production environment, the thread count only ever went up: even after the peak our traffic is still over 100 requests per second, and 100 * 60 = 6000 > 3000, so every one of the 3000 threads gets reused before it would ever be reclaimed.

So now there is another question: why doesn't a steady 100 requests per second normally make the thread count surge? In other words, what exactly is the bottleneck behind the thread count spiking to 3000? My earlier conclusion on this was not precise enough:

What really determines the maximum possible number of Tomcat threads is the maxConnections parameter together with the concurrency: once concurrency exceeds maxConnections, requests are queued, and response time then depends on your application's performance.

What is fuzzy here is the notion of "concurrency". Concurrency always comes with a time unit (typically 1 s), but what really matters is how many requests arrive while Tomcat is still busy processing a request. For example, if Tomcat takes 1 s to process one request and 3000 requests arrive within that second, then the Tomcat thread count will be pushed toward 3000; maxConnections is merely the upper limit Tomcat imposes on that.
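
Put as a rough back-of-the-envelope rule (my own framing of the paragraph above): in-flight requests ≈ request rate × time per request. With 3000 requests arriving per second and 1 s spent on each, that is roughly 3000 × 1 = 3000 requests in flight at once, and therefore up to 3000 busy threads; maxConnections only caps how far that can go.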

Discussion and corrections are welcome.

Addendum:

Using JMeter makes it easy to control the request rate.
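
For completeness, here is a rough Java alternative for generating a fixed request rate (all names and numbers are illustrative; a request that takes longer than the period will lower the achieved rate):

    // Illustrative fixed-rate load generator: about 100 requests per second against a local Tomcat.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class FixedRateLoad {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);
            // One request every 10 ms, i.e. roughly 100 requests per second.
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL("http://localhost:8084/").openConnection();
                    conn.getResponseCode();
                    conn.disconnect();
                } catch (Exception ignored) {
                    // ignore failures; this is only a load sketch
                }
            }, 0, 10, TimeUnit.MILLISECONDS);
        }
    }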

