In our production environment we use Tomcat as the web server, and its performance directly affects the user experience. From day-to-day work and study I have summed up the following seven tuning lessons.
1. Server resources
The CPU, memory, and hard disk performance a server provides has a decisive impact on its processing power.
(1) Under high concurrency there is a large amount of computation, so CPU speed directly affects processing speed.
(2) When large amounts of data are being processed, memory requirements are high; parameters such as -Xmx, -Xms, and -XX:MaxPermSize can be used to size the different memory regions. We once allocated too little memory, which kept the virtual machine in constant full GC and caused a significant drop in processing power.
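As an illustration of the JVM memory flags mentioned above, on Tomcat these options are typically placed in bin/setenv.sh, which catalina.sh picks up at startup. The sizes here are hypothetical examples, not recommendations; they must be tuned to the machine and verified under load:

```shell
# bin/setenv.sh -- sourced by catalina.sh when Tomcat starts
# Example sizes only; measure under load before settling on values.
CATALINA_OPTS="$CATALINA_OPTS -Xms2g -Xmx2g"        # set initial == max heap to avoid resize pauses
CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=256m" # permanent generation (JDK 7 and earlier)
export CATALINA_OPTS
```

Keeping -Xms equal to -Xmx avoids heap-resizing pauses, and sizing the heap generously enough for the workload is exactly what prevents the constant-full-GC situation described above.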
(3) The main concern with the hard disk is read/write performance; when large numbers of files are read and written, the disk easily becomes the performance bottleneck. The best remedy is the caching described below.
2. Using caching and compression
Static pages are best cached so they do not have to be read from disk on every request. Here we use Nginx as a cache server to cache images, CSS, and JS files, which effectively reduces the traffic reaching the back-end Tomcat.
In addition, gzip compression is essential to speed up network transfer. But since Tomcat already has plenty to do, we hand the compression work to the Nginx front end. You can refer to the earlier post "Using Nginx to Accelerate Web Access."
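A minimal Nginx sketch of the caching and gzip offloading described above; the cache path, sizes, and the back-end address are illustrative assumptions, not values from the original setup:

```nginx
http {
    # On-disk cache for responses proxied from Tomcat
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m
                     max_size=1g inactive=7d;

    # Compress text on the Nginx side so Tomcat does not have to
    gzip on;
    gzip_types text/css application/javascript application/json;

    server {
        listen 80;

        # Serve images, CSS, and JS from the cache
        location ~* \.(png|jpg|gif|css|js)$ {
            proxy_pass http://127.0.0.1:8080;
            proxy_cache static;
            proxy_cache_valid 200 7d;   # keep cached copies for a week
        }

        # Everything else goes straight through to Tomcat
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```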
Besides gzip-compressing text, many images can also be pre-compressed with image-processing tools; with the right balance point the visual loss is tiny while the file shrinks considerably. I have seen an image compressed from over 300 KB down to a few dozen KB with hardly any visible difference.
3. Using the cluster
A single server's performance is always limited; the best approach is to scale out, so forming a Tomcat cluster is an effective means of improving performance. We again use Nginx as the server that distributes requests, with multiple back-end Tomcats sharing sessions to work together. You can refer to the earlier post "Building Web Server Load Balancing with nginx+tomcat+memcached."
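A sketch of the Nginx request-distribution side of such a cluster; the node addresses are placeholders, and the memcached session sharing is configured on the Tomcat side as described in the referenced post:

```nginx
# Back-end Tomcat pool; requests are balanced across the nodes
upstream tomcat_cluster {
    server 192.168.1.11:8080;   # tomcat node 1 (placeholder address)
    server 192.168.1.12:8080;   # tomcat node 2 (placeholder address)
}

server {
    listen 80;
    location / {
        proxy_pass http://tomcat_cluster;
    }
}
```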
4. Optimize Tomcat parameters
Taking Tomcat 7 as an example, modify the conf/server.xml file, mainly to optimize the connection configuration and disable client DNS lookups. The thread-pool sizes (maxThreads, minSpareThreads, acceptCount) shown here are only illustrative values; tune them to your own workload:

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443"
           maxThreads="500"
           minSpareThreads="50"
           acceptCount="500"
           disableUploadTimeout="true"
           enableLookups="false"
           URIEncoding="UTF-8" />
5. Use the APR library
The BIO model Tomcat uses by default degrades noticeably under hundreds of concurrent connections. Tomcat also ships with an NIO model, and you can additionally call the APR library for OS-level control.
The NIO model is built in and easy to enable: just change the protocol in the configuration above to org.apache.coyote.http11.Http11NioProtocol and restart for it to take effect. I have already made that change in the configuration above; the default is HTTP/1.1.
APR requires installing a third-party library, but it can significantly improve performance under high concurrency. For installation steps see http://www.cnblogs.com/huangjingzhou/articles/2097241.html; restart after installation for it to take effect. With the default HTTP/1.1 protocol setting, Tomcat will use APR automatically once the library is installed, but it is clearer to set the protocol explicitly to org.apache.coyote.http11.Http11AprProtocol.
The official documentation has a table detailing the differences between the three connectors:
|                       | Java Blocking Connector (BIO) | Java NIO Blocking Connector (NIO) | APR/native Connector (APR) |
|-----------------------|-------------------------------|-----------------------------------|----------------------------|
| Classname             | AjpProtocol                   | AjpNioProtocol                    | AjpAprProtocol             |
| Tomcat Version        | 3.x onwards                   | 7.x onwards                       | 5.5.x onwards              |
| Support Polling       | No                            | Yes                               | Yes                        |
| Polling Size          | N/A                           | maxConnections                    | maxConnections             |
| Read Request Headers  | Blocking                      | Sim Blocking                      | Blocking                   |
| Read Request Body     | Blocking                      | Sim Blocking                      | Blocking                   |
| Write Response        | Blocking                      | Sim Blocking                      | Blocking                   |
| Wait for next Request | Blocking                      | Non Blocking                      | Non Blocking               |
| Max Connections       | maxConnections                | maxConnections                    | maxConnections             |
6. Optimize your network
Joel has also noted that optimizing NIC drivers can improve performance, which is especially important in a clustered environment. Since our servers run Linux, optimizing kernel parameters is also a very important task. Here is a set of reference parameters:
1. Modify the /etc/sysctl.conf file and append the following at the end:

net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 65536

2. Save and exit, then run `sysctl -p` for the settings to take effect.
7. Let the test speak
The biggest taboo in system optimization is skipping testing; an ill-judged optimization can actually lower performance. Every method above should be performance-tested locally, with parameters adjusted continually based on the results, before the best optimization outcome can be reached.
Appended are test results for the BIO, NIO, and APR modes:
For these modes I used the ab command to simulate 1000 concurrent clients issuing 10,000 requests. The results were quite unexpected; to confirm them, I repeated the test more than ten times and ran it again on a second server. The results showed that the difference between BIO and NIO is very small, no wonder BIO is the default. With APR, however, connection establishment is 50%~100% faster. Calling directly into the operating-system layer really is fast; the APR mode is highly recommended!
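For reference, the ab run described above can be reproduced roughly as follows; the URL is a placeholder for your own test page, -c sets the concurrency, and -n the total request count:

```shell
# 1000 concurrent clients, 10,000 requests in total, against the test page
ab -n 10000 -c 1000 http://your-server:8080/index.html
# Compare the "Connection Times" and "Requests per second" sections of the
# output across the BIO, NIO, and APR connector configurations.
```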
Resources:
Original article: "Seven Lessons to Improve Tomcat Server Performance", Passover's blog: http://passover.blog.51cto.com/2431658/732629