Troubleshooting high CPU usage of Tomcat Java processes


Our company's servers run an Nginx + Tomcat + MySQL stack on Ubuntu, with Tomcat as the Java container. After running for some time, the Java process's CPU usage climbs so high that the site can no longer be opened. The following analysis looks into why this happens and how to deal with it.


First, check the Tomcat log. If it contains an OOM error (OutOfMemoryError, i.e. memory overflow), you can increase the memory allocated to the JVM.
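A quick way to check is to search the main Tomcat log for memory-overflow errors (a minimal sketch; the log path assumes Tomcat is installed under /usr/local/tomcat, so adjust it to your own installation):

# Look for memory-overflow errors in the main Tomcat log
grep -i "OutOfMemoryError" /usr/local/tomcat/logs/catalina.out | tail -20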

1. Modify the catalina.sh file in the bin directory under the Tomcat directory. Find this line:

# JAVA_OPTS="$JAVA_OPTS -Dorg.apache.catalina.security.SecurityListener.UMASK=`umask`"

and add the following content below it:


JAVA_OPTS="-server -Xms2048m -Xmx2048m -Xmn512m -XX:PermSize=256m -XX:MaxPermSize=256m -Xss256k -XX:SurvivorRatio=4 -XX:MaxTenuringThreshold=20 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=73 -XX:+UseCMSCompactAtFullCollection -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=2 -Djava.awt.headless=true"


The above configuration is based on a server with 4 GB of memory; adjust the values to match the configuration of the server you actually have.

The meaning of each parameter:

-server Runs the JVM in server mode, for greater concurrency and performance.

-Xms2048m -Xmx2048m Initial and maximum size of the JVM heap.

-Xmn512m Young generation memory size.

-XX:PermSize=256m -XX:MaxPermSize=256m Permanent generation memory size.

-Xss256k Thread stack size.

-XX:SurvivorRatio=4 Sets the ratio of the Eden area to a survivor area in the young generation. Set to 4, the two survivor areas relate to the Eden area as 2:4, so one survivor area occupies 1/6 of the entire young generation.
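As a rough worked example, using the -Xmn512m young generation configured above (actual sizes can differ slightly because of alignment):

Eden area     = 512m × 4/6 ≈ 341m
each survivor = 512m × 1/6 ≈  85m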

-XX:MaxTenuringThreshold=20 Sets the maximum tenuring age of objects. If set to 0, young-generation objects skip the survivor areas and go directly into the old generation; for applications with many long-lived objects this can improve efficiency. If set to a larger value, young-generation objects are copied between the survivor areas many times, which keeps objects in the young generation longer and increases the probability that they are collected there.

-XX:+UseParNewGC Use a multi-threaded parallel collector for the young generation, which is faster.

-XX:+UseConcMarkSweepGC Use the CMS (Concurrent Mark Sweep) collector for the old generation. CMS is available only in JDK 1.5 and later versions, and is triggered by GC estimation and heap-occupancy thresholds.

We know that frequent GC pauses make the JVM's performance fluctuate and hurt system efficiency. With the CMS GC, even though collections may happen more often, each GC's response time is very short; for example, observing with JProfiler after switching to the CMS GC, the GC is triggered many times, but each collection takes only a few milliseconds.

-XX:CMSInitiatingOccupancyFraction=73 Start a concurrent CMS collection of the old generation when it is 73% full.

-XX:+UseCMSCompactAtFullCollection Compact the old generation during full collections. Performance may be affected, but fragmentation is eliminated.

-XX:+CMSParallelRemarkEnabled Reduce the remark pause by running the remark phase in parallel.

-XX:CMSFullGCsBeforeCompaction=2 Because the concurrent collector does not compact or defragment the memory space, fragmentation builds up after it has run for a while and efficiency drops. This value sets how many full GCs run before the memory space is compacted and defragmented.


Reference site: http://blog.csdn.net/lifetragedy/article/details/7708724


2. After modifying the parameters, restart Tomcat and check whether they have taken effect:

jmap -heap <javapid>    View the JVM memory allocation

jstat -gcutil <javapid> 1000 30    View JVM garbage collection statistics (one sample every 1000 ms, 30 samples)
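For example, something like the following (a minimal sketch; it assumes only one Java process is running and that the JDK tools jmap and jstat are on the PATH):

# Find the PID of the Tomcat Java process
PID=`ps aux | grep java | grep -v grep | head -1 | awk '{print $2}'`

# Heap configuration and current usage as seen by the JVM
jmap -heap $PID

# GC utilization: one sample every 1000 ms, 30 samples in total
jstat -gcutil $PID 1000 30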


The above modifications should take care of the Tomcat process's high CPU usage. If, after the server has run for a while, jstat shows that the old generation stays at 100% and keeps triggering full GCs, the cause is insufficient memory. If you can, add memory; if not, schedule Tomcat to restart once a day (or once a week) to release the JVM's memory and work around the problem.
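If you choose the scheduled restart, a cron entry is enough. This is only a sketch: the /usr/local/tomcat path is an assumption, and in practice the restart script should make sure the old JVM has really exited before starting a new one:

# /etc/crontab entry: restart Tomcat every day at 04:30
30 4 * * * root /usr/local/tomcat/bin/shutdown.sh && sleep 10 && /usr/local/tomcat/bin/startup.sh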


Second, if you want to check whether the cause lies in the code layer, you need a different approach.

1. Use the jstack command to see what the threads consuming the most CPU are doing.

#!/bin/bash

# PID of the Tomcat Java process (assumes a single java process is running)
PID=`ps aux | grep java | grep -v grep | head -1 | awk '{print $2}'`

# The 20 threads of this process that have used the most CPU
ps -mp $PID -o THREAD,tid,time | sort -k 2 -r | head -20

# Ask for the tid (decimal) of the thread to inspect
echo -n -e "Input tid: "
read f

if [ -z "$f" ]
then
    echo "No input"
else
    # Convert the tid to hex and show that thread's stack from the dump
    jstack "$PID" | grep `printf '%x\n' $f` -A 30
fi

This simple monitoring script shows what the top CPU-consuming threads are doing, so you can analyze whether the problem comes from the code layer.
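For example, if ps reports that thread tid 12345 of the Java process is using the most CPU (the numbers here are only illustrative), the script converts the tid to hexadecimal and greps the thread dump for it, because jstack prints each thread's native id as a hexadecimal nid:

printf '%x\n' 12345               # prints 3039
jstack $PID | grep 3039 -A 30     # stack trace of the hot thread (nid=0x3039)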

Reference site: http://blog.csdn.net/blade2001/article/details/9065985
