Tomcat and JVM tuning parameters and optimizations

Source: Internet
Author: User
Tags: xms, jprofiler

1. Tomcat optimization

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="500" minSpareThreads="20" maxIdleTime="60000"/>
<Connector executor="tomcatThreadPool"
           port="8080" protocol="HTTP/1.1"
           URIEncoding="UTF-8" connectionTimeout="30000"
           enableLookups="false"
           disableUploadTimeout="false"
           connectionUploadTimeout="150000"
           acceptCount="100" keepAliveTimeout="120000"
           maxKeepAliveRequests="1"
           compression="on"
           compressionMinSize="2048"
           compressableMimeType="text/html,text/xml,text/javascript,text/css,text/plain,image/gif,image/jpg,image/png"
           redirectPort="8443"/>

maxThreads: Tomcat uses a thread to handle each request it receives. This sets the maximum number of threads Tomcat can create; the default is 200.
minSpareThreads: The minimum number of idle threads, i.e. the number of threads initialized at Tomcat startup, which are kept waiting even when nobody is using them; the default is 10.
maxSpareThreads: The maximum number of spare threads; once the number of created threads exceeds this value, Tomcat closes socket threads that are no longer needed.
With the parameters configured above, the maximum number of threads is 500 (enough for a typical server); set it according to your actual situation. The more threads, the more memory and CPU are consumed, because the CPU gets worn out by thread context switching and has little capacity left for serving requests. The minimum number of idle threads is 20 and the maximum thread idle time is 60 seconds. The maximum number of connections actually allowed is also constrained by operating-system kernel parameters, so set it according to your own requirements and environment. The thread settings can be configured on the "tomcatThreadPool" Executor or directly on the Connector, but not in both places at once.
URIEncoding: Specifies the URI encoding used by the Tomcat container. Unlike some other web server software, Tomcat does not handle this conveniently by default, so it needs to be specified explicitly.
connectionTimeout: Network connection timeout in milliseconds. Setting it to 0 means never time out, which carries hidden risks. It can usually be set to 30000 milliseconds and adjusted based on actual testing.
enableLookups: Whether to perform a DNS lookup to return the host name of the remote host; either true or false. If set to false, the IP address is returned directly, which improves processing capacity.
disableUploadTimeout: Whether to disable the separate timeout mechanism used while uploading.
connectionUploadTimeout: Upload timeout in milliseconds. File uploads may take more time, so tune this according to your business needs to give servlets enough time to complete execution. It only takes effect when used together with the previous parameter (disableUploadTimeout set to false).
acceptCount: The maximum queue length for incoming connection requests when all available request-processing threads are in use; requests received beyond this limit are not processed. The default is 100.
keepAliveTimeout: The maximum keep-alive time for a persistent connection, in milliseconds, i.e. how long Tomcat keeps the connection open while waiting for the next request. By default it uses the connectionTimeout value; -1 means no timeout.
maxKeepAliveRequests: The maximum number of requests a connection can serve before the server closes it; connections exceeding this number of requests are closed. 1 disables keep-alive, -1 means unlimited, and the default is 100; it is generally set between 100 and 200.
compression: Whether response data is GZIP-compressed. off disables compression, on allows compression (text is compressed), and force compresses in all cases; the default is off. Compressed data effectively reduces page size, generally by about 1/3, saving bandwidth.
compressionMinSize: The minimum response size for compression; when compression is enabled, a response is compressed only if it is larger than this value. The default is 2048.
compressableMimeType: Specifies which MIME types are compressed.
noCompressionUserAgents="gozilla, traviata": Compression is not enabled for these browsers.
If static and dynamic content have been separated, so that static pages, images, and similar files are not handled by Tomcat, then there is no need to configure compression in Tomcat.
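A quick way to confirm that the compression settings above are taking effect is to request a text resource with an Accept-Encoding: gzip header and inspect the Content-Encoding response header. The sketch below is only an illustration: the URL is a placeholder, and the resource must be larger than compressionMinSize and of a compressable MIME type for Tomcat to compress it.

import java.net.HttpURLConnection;
import java.net.URL;

// Minimal check that Tomcat gzips a response when asked to.
// The URL is a placeholder; point it at any sufficiently large text resource on your server.
public class GzipCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/index.html");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip");   // ask for a compressed response
        conn.setRequestProperty("User-Agent", "Mozilla/5.0"); // not in noCompressionUserAgents
        System.out.println("HTTP status:      " + conn.getResponseCode());
        System.out.println("Content-Encoding: " + conn.getHeaderField("Content-Encoding")); // expect "gzip"
        conn.disconnect();
    }
}

If Content-Encoding comes back as gzip, compression is active; if it is null, check compression, compressionMinSize, and compressableMimeType again.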

2. JVM Tuning Parameters

CATALINA_OPTS="
-server
-Xms6000m
-Xmx6000m
-Xss512k
-XX:NewSize=2250m -XX:MaxNewSize=2250m
-XX:PermSize=128m
-XX:MaxPermSize=256m
-XX:+AggressiveOpts
-XX:+UseBiasedLocking
-XX:+DisableExplicitGC
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:MaxTenuringThreshold=31
-XX:+CMSParallelRemarkEnabled
-XX:+UseCMSCompactAtFullCollection
-XX:LargePageSizeInBytes=128m
-XX:+UseFastAccessorMethods
-XX:+UseCMSInitiatingOccupancyOnly
-Duser.timezone=Asia/Shanghai
-Djava.awt.headless=true"

On a 32-bit system the JVM cannot use more than 2 GB of memory, so Tomcat has to be tuned within that limit; on a 64-bit OS there is no 2 GB restriction, for either system memory or the JVM.
JMX remote monitoring can also be configured here; the configuration in this section assumes a 64-bit environment.
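As a minimal sketch of the JMX remote-monitoring options mentioned above (the port number and hostname are placeholders, and authentication and SSL are switched off purely for illustration; they should be enabled in production), something like the following could be appended to CATALINA_OPTS:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=10090
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=192.168.0.100

A JMX client such as JConsole, VisualVM, or JProfiler can then attach to hostname:port and watch heap usage and GC activity remotely.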
These settings are basically intended to achieve the following:
Faster system response times;
Faster JVM garbage collection without affecting system responsiveness;
Maximum JVM memory utilization;
Minimal thread blocking.
Commonly used JVM parameters in detail:
-server: Must be the first parameter; it performs well on multi-CPU machines. The other mode, -client, starts faster but its run-time performance and memory management are less efficient; it is normally used for client applications or for development and debugging, and is the default when running Java programs directly in a 32-bit environment. Server mode starts more slowly, but its run-time performance and memory management are efficient, which suits production environments; it is enabled by default in a 64-bit-capable JDK, in which case this parameter can be omitted.
-Xms: The initial Java heap size. Set -Xms and -Xmx to the same value to avoid the JVM repeatedly re-requesting memory, which causes performance fluctuations. The default value is 1/64 of physical memory; by default (adjustable via the MinHeapFreeRatio parameter), when free heap memory drops below 40% the JVM grows the heap, up to the -Xmx limit.
-Xmx: The maximum Java heap size. When an application needs more memory than the heap maximum, the virtual machine reports an out-of-memory error and the application service crashes, so the heap maximum is generally recommended to be 80% of the maximum available memory. To find out the largest value your JVM can use, test with java -Xmx512m -version and gradually increase the 512: if the command runs normally, the specified memory size is usable; otherwise an error message is printed. The default value is 1/4 of physical memory; by default (adjustable via the MaxHeapFreeRatio parameter), when free heap memory exceeds 70% the JVM shrinks the heap, down to the -Xms limit.
-Xss: The stack size of each Java thread. Since JDK 5.0 each thread's stack is 1 MB by default; before that it was 256 KB. Adjust it according to how much stack memory the application's threads need: with the same physical memory, reducing this value allows more threads, although the operating system still limits the number of threads per process, so they cannot be created without bound; an empirical limit is around 3000~5000. For small applications whose call stacks are not very deep, 128k should be sufficient; for large applications 256k or 512k is recommended. It is generally unwise to go above 1 MB, otherwise out-of-memory errors become likely. This option has a significant performance impact and requires rigorous testing.
-XX:NewSize: Sets the initial young (new) generation size.
-XX:MaxNewSize: Sets the maximum young generation size.
-XX:PermSize: Sets the initial permanent generation size.
-XX:MaxPermSize: Sets the maximum permanent generation size. The permanent generation does not belong to heap memory; the heap contains only the young and old generations.
-XX:+AggressiveOpts: As its name suggests (aggressive), enabling this parameter lets the JVM use the latest optimization techniques added in each JDK upgrade, if any.
-XX:+UseBiasedLocking: Enables an optimized thread-locking mechanism. In an application server each HTTP request is handled by a thread; some requests are short and some are long, so requests queue up and threads may even block. This optimized locking lets the application server provision its threads more efficiently and automatically.
-XX:+DisableExplicitGC: Disallows explicit System.gc() calls in program code. Manually invoking System.gc() at the end of every operation severely hurts system response time, for the same reasons explained under Xms/Xmx: such calls make the JVM's memory usage fluctuate wildly.
-XX:+UseConcMarkSweepGC: Use concurrent collection for the old generation, i.e. the CMS GC; this feature is available only in JDK 1.5 and later. It uses GC estimation triggering and heap occupancy triggering. Frequent GC makes the JVM's performance fluctuate and hurts system efficiency; with the CMS GC, the number of collections may increase, but each collection's pause is very short. For example, after switching to the CMS GC, observation with JProfiler showed that GC was triggered many times but each collection took only a few milliseconds.
-XX:+UseParNewGC: Collect the young generation in parallel with multiple threads, which is faster. Note that in recent JVM versions, -XX:+UseParNewGC is enabled automatically when -XX:+UseConcMarkSweepGC is set, so if you do not want parallel young-generation GC you must switch it off explicitly with -XX:-UseParNewGC.
-XX:MaxTenuringThreshold: Sets the maximum tenuring age of objects. If set to 0, young-generation objects skip the survivor spaces and go directly into the old generation; for applications dominated by long-lived objects (those needing a lot of resident memory), this can improve efficiency. Setting it to a large value makes objects get copied between the survivor spaces multiple times, which increases their lifetime in the young generation, raises the probability that they are collected there, and reduces the frequency of full GC, improving service stability to some extent. This parameter is only effective with the serial GC, and the right value should be found through local JProfiler monitoring rather than copied blindly.
-XX:+CMSParallelRemarkEnabled: When UseParNewGC is in use, minimize the remark pause time.
-XX:+UseCMSCompactAtFullCollection: When the concurrent (CMS) GC is in use, compact the live objects during full collections to prevent memory fragmentation.
-XX:LargePageSizeInBytes: Specifies the memory page size used for the Java heap. The page size must not be set too large, as it affects the size of the permanent generation.
-XX:+UseFastAccessorMethods: Turns the get/set accessor methods for primitive types into native code, a quick optimization for primitive types.
-XX:+UseCMSInitiatingOccupancyOnly: Start the concurrent collector only after the old generation has reached the configured initiating occupancy, rather than letting the JVM decide based on its own statistics.
-Duser.timezone=Asia/Shanghai: Sets the user's time zone.
-Djava.awt.headless=true: This parameter is usually placed last. Its purpose is this: Java EE projects sometimes use charting tools such as JFreeChart to output GIF/JPG images to web pages. On Windows the application server generally has no problem producing graphics, but on Linux/UNIX an exception is often thrown, so images that display fine in a Windows development environment fail to appear on Linux/UNIX. Adding this parameter avoids that situation.
-Xmn: The size of the young generation, which here means Eden plus the two survivor spaces; this differs from the "new gen" shown by jmap -heap. The whole heap = young generation + old generation (the permanent generation is counted separately, as noted above). With the total heap size unchanged, enlarging the young generation shrinks the old generation. This value has a significant impact on system performance; Sun officially recommends about 3/8 of the whole heap.
-XX:CMSInitiatingOccupancyFraction: A throughput (parallel) collector starts garbage collection only when the heap is full, for example when there is no room for a newly allocated or promoted object. For the CMS collector such long waits are unacceptable, because the application keeps running (and allocating objects) during concurrent collection; to finish the collection cycle before the application runs out of memory, the CMS collector has to start earlier than a throughput collector. Since different applications have different allocation patterns, the JVM collects runtime data about object allocations (and frees) and analyzes it to decide when to start a CMS cycle. Setting this parameter takes some skill; roughly, as long as (Xmx - Xmn) * (100 - CMSInitiatingOccupancyFraction) / 100 >= Xmn, promotion failures will not occur. For example, if Xmx is 6000 and Xmn is 512, the old generation is 6000 - 512 = 5488 MB; with CMSInitiatingOccupancyFraction=90, concurrent old-generation collection (CMS) starts when the old generation is 90% full, leaving 10% free, i.e. 5488 * 10% = 548 MB. Even if every object in the young generation (512 MB in total) were promoted into the old generation, 548 MB would be enough, so as long as the formula above holds there will be no promotion failed during garbage collection. This parameter must therefore be set in relation to Xmn.
-XX:+CMSIncrementalMode: Enables the CMS collector's incremental mode, in which the CMS cycle is paused regularly to yield completely to the application threads, so the collector takes longer to complete a full cycle. Incremental mode should only be used if testing shows that the normal CMS cycle interferes too much with application threads. Since modern servers have enough processors to accommodate concurrent garbage collection, this is rarely needed; it is mainly useful when CPU resources are limited.
-XX:NewRatio: The ratio of the old generation to the young generation (Eden plus the two survivor spaces), excluding the permanent generation. -XX:NewRatio=4 means the young-to-old ratio is 1:4, i.e. the young generation occupies 1/5 of the whole heap. When Xms=Xmx and Xmn is set, this parameter does not need to be set.
-XX:SurvivorRatio: The ratio of the Eden space to a survivor space. Set to 8, it means the two survivor spaces (by default the young generation of the JVM heap has two equally sized survivor spaces) and the Eden space are in the ratio 2:8, i.e. one survivor space takes up 1/10 of the total young generation.
-XX:+UseSerialGC: Use the serial collector.
-XX:+UseParallelGC: Use the parallel collector. This setting is only effective for the young generation, i.e. the young generation is collected in parallel while the old generation is still collected serially.
-XX:+UseParallelOldGC: Use parallel collection for the old generation; parallel old-generation collection is supported starting with JDK 6.0.
-XX:ConcGCThreads: Called -XX:ParallelCMSThreads in earlier JVM versions; defines the number of threads the concurrent CMS cycle runs with. For example, a value of 4 means all phases of the CMS cycle run with 4 threads. Although more threads speed up the concurrent CMS work, they also incur additional synchronization overhead, so for a particular application you should test whether increasing the number of CMS threads really improves performance. If this flag is not set, the JVM computes a default number of parallel CMS threads from the value of the parallel collector's -XX:ParallelGCThreads parameter.
-XX:ParallelGCThreads: The number of threads used by the parallel collector, i.e. how many threads collect garbage together; it is recommended to set this equal to the number of CPUs.
-XX:OldSize: Sets the old generation size allocated at JVM startup, analogous to -XX:NewSize for the young generation.
The above are some commonly used configuration parameters; some of them can be substituted for one another. The tuning approach needs to take Java's garbage collection mechanism into account. The heap size of the virtual machine determines how much time the VM spends collecting garbage and how often. The acceptable rate of garbage collection depends on the application and should be adjusted by analyzing the actual time and frequency of collections. If the heap is large, full collections are slower but less frequent; if the heap is sized tightly to the memory demand, collections finish quickly but happen more often. The goal of sizing the heap is to minimize the time spent in garbage collection so as to maximize the number of client requests handled in a given period. For best performance in benchmarking, the heap is sized so that no garbage collection occurs during the entire benchmark run.
If the system spends a lot of time collecting garbage, reduce the heap size. A full garbage collection should take no more than 3 to 5 seconds. If garbage collection becomes a bottleneck, specify the generation sizes, check the detailed garbage collection output, and study how the garbage collection parameters affect performance. When adding processors, remember to add memory as well, because allocation can proceed in parallel while garbage collection is not parallel.
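To get the detailed garbage collection output mentioned above, GC logging can be enabled alongside the other options. The following is a minimal sketch for a JDK 7/8-era HotSpot JVM (the log path is just a placeholder; in JDK 9+ these flags are replaced by -Xlog:gc):

-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Xloggc:/var/log/tomcat/gc.log

The log records the timestamp, type, and pause time of each young and full collection, which is exactly the data needed to decide whether the heap or the young generation should be resized; jstat -gcutil <pid> 1000 gives a similar live view of a running JVM.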

3. Common Java memory overflow errors come in the following three kinds

(1) java.lang.OutOfMemoryError: Java heap space -- JVM heap overflow

When the JVM starts, it automatically sets the JVM heap values: the initial size (-Xms) is 1/64 of physical memory, and the maximum size (-Xmx) cannot exceed physical memory. They can be set with JVM options such as -Xmn, -Xms, and -Xmx. The heap size is the sum of the young generation and the tenured (old) generation. The JVM throws this exception when 98% of the time is spent on GC and less than 2% of the heap size is available.

Workaround: Manually set the JVM heap size (-Xms/-Xmx).
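As a hedged illustration (the class and variable names are invented for this example, not taken from the article), the following small Java program keeps allocating memory that can never be reclaimed, so running it with a deliberately small heap, e.g. java -Xmx16m HeapSpaceDemo, reproduces the error quickly:

import java.util.ArrayList;
import java.util.List;

// Run with: java -Xmx16m HeapSpaceDemo
// Keeps strong references to every allocation so the GC cannot reclaim anything,
// eventually triggering java.lang.OutOfMemoryError: Java heap space.
public class HeapSpaceDemo {
    public static void main(String[] args) {
        List<byte[]> hold = new ArrayList<>();
        while (true) {
            hold.add(new byte[1024 * 1024]); // allocate 1 MB per iteration and keep it reachable
        }
    }
}

In a real application the same error usually means either a memory leak (objects kept reachable unintentionally) or simply a heap that is too small for the workload.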

(2) java.lang.OutOfMemoryError: PermGen space -- permanent generation overflow

PermGen space is short for Permanent Generation space, the permanent storage area of JVM memory. The reason it can overflow is that this area mainly holds the classes and metadata the JVM has loaded: classes are placed in PermGen space when they are loaded, unlike instances, which are placed in the heap. The Sun GC does not clean PermGen space while the main program is running, so if your application loads a large number of classes, a PermGen space overflow is likely.

Workaround: Manually increase the -XX:MaxPermSize setting.

(3) java.lang.StackOverflowError -- stack overflow

A stack overflow occurs because the JVM is still a stack-based virtual machine, just like C and Pascal: function calls are reflected in pushing and popping stack frames, and too many "layers" of nested calls overflow the stack area. Generally the stack area is much smaller than the heap, because call chains rarely go beyond a thousand or so levels; even if each call needed 1 KB of space (roughly equivalent to a C function declaring 256 int variables), the stack would only need about 1 MB. The stack size is usually 1~2 MB.
It is usually recursion without a reasonable bound on its depth that overflows the stack.

Workaround: Modify the program.
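As a hedged sketch (the class and method names are invented for illustration), unbounded recursion like the following throws java.lang.StackOverflowError; rewriting it iteratively, or putting a limit on the recursion depth, is the kind of program change the workaround refers to. A smaller -Xss makes the error appear sooner:

// Run with e.g.: java -Xss256k StackOverflowDemo
public class StackOverflowDemo {
    private static int depth = 0;

    // Recursion with no termination condition: every call adds a stack frame
    // until java.lang.StackOverflowError is thrown.
    private static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed at depth " + depth);
        }
    }
}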
