1 Background
Recently our newly developed web system went through load testing, and we found that with the default Tomcat configuration, 600 concurrent logins to the home page degraded response times severely: one test round produced more than 2000 HTTP 500 and 502 errors. I added timing statistics to the logs to print the total server-side processing time, and found that while a few individual requests took up to 20 seconds, the average processing time was still far below what the LoadRunner test reported. So the time was not being spent in request processing, and since the test ran on a LAN, network problems could be ruled out as well. That left Tomcat's request-handling capacity as the suspect. Raising Tomcat's thread count to 1000 cut the 500 and 502 errors to a few dozen, but barely improved response times. I then started two Tomcat instances behind Nginx for load balancing: response times dropped by 40%, and both Tomcats kept their processing time at around 1 second.
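For reference, the thread-count change is made on the HTTP Connector in Tomcat's server.xml. A minimal sketch, assuming the default connector on port 8080 (the acceptCount value here is illustrative, not from the test above):

```xml
<!-- $CATALINA_HOME/conf/server.xml: enlarge the request-processing thread pool -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="1000"
           acceptCount="200"
           redirectPort="8443" />
```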
Tomcat's performance is clearly a bottleneck in this system, so multiple servers are needed to improve responsiveness. Because the test only exercised logins, the Tomcat instances did not need to share sessions, but in production they will have to work together. Below I record the installation and configuration of the load-balanced setup.
2 Choice of Solution
There are several ways to make multiple Tomcat instances work together; I considered the following options:
1. Use Tomcat's built-in clustering, in which the Tomcat instances replicate session information to each other automatically and in real time. Configuration is very simple, but this approach is inefficient and performs poorly under heavy concurrency.
2. Use Nginx's IP-hash routing strategy, which guarantees that requests from a given IP are always routed to the same Tomcat. This is also simple to configure, but many of our users are likely to log in at the same time from the same LAN (and thus the same source IP), which would defeat the load balancing.
3. Use memcached to manage the sessions of all Tomcat instances centrally. This is the most direct solution, but also the most involved to set up.
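For completeness, option 2 amounts to a single directive in the Nginx upstream block; a sketch with placeholder back-end addresses:

```nginx
upstream web_server {
    ip_hash;              # route each client IP to a fixed back end
    server localhost:8080;
    server localhost:8180;
}
```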
Our system needs both good performance and effective load balancing, so option 3 is the best fit. The next step is installation and setup.
3 Installation and Configuration
3.1 Installing memcached
1) First download the source packages libevent-1.4.14b-stable.tar.gz and memcached-1.4.7.tar.gz; the former is a dependency of the latter and provides event-driven I/O.
2) Installation went smoothly with the classic configure/make/install sequence:
tar zxvf libevent-1.4.14b-stable.tar.gz
cd libevent-1.4.14b-stable
./configure --prefix=/usr/local/libevent-1.4.14b
make
make install
cd ..
tar zxvf memcached-1.4.7.tar.gz
cd memcached-1.4.7
./configure --prefix=/usr/local/memcached-1.4.7 --with-libevent=/usr/local/libevent-1.4.14b/
make
make install
3) Start memcached:
./bin/memcached -d -m 256 -u root -p 11211 -c 1024 -P /tmp/memcached.pid
(-d daemonize, -m memory limit in MB, -u run-as user, -p TCP port, -c max connections, -P pid file)
3.2 Configuring memcached-session-manager
Having Tomcat store sessions in memcached is a mature solution by now; the open-source memcached-session-manager (MSM) project handles it. The fiddly part is assembling the jar packages, and the official documentation is rather vague. I used the Kryo serialization scheme here, which requires the following jars:
kryo-1.03.jar
kryo-serializers-0.8.jar
memcached-2.5.jar (the newest official release I saw is 2.7, but MSM recommends 2.5; the newer versions may not have been tested with MSM, and the 2.6 changelog shows API changes, so better not to upgrade)
memcached-session-manager-1.5.1.jar
memcached-session-manager-tc7-1.5.1.jar
minlog-1.2.jar
msm-kryo-serializer-1.5.1.jar
reflectasm-0.9.jar
Place all of the above jars in the $CATALINA_HOME/lib directory.
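It is easy to miss one of the eight jars, so a quick sanity check before starting Tomcat can help. A hypothetical helper sketch (it uses a throwaway directory populated with dummy files for illustration; point LIB_DIR at the real $CATALINA_HOME/lib and delete the simulation loop in practice):

```shell
# List of jars required by the MSM + Kryo setup described above.
REQUIRED_JARS="kryo-1.03.jar kryo-serializers-0.8.jar memcached-2.5.jar \
memcached-session-manager-1.5.1.jar memcached-session-manager-tc7-1.5.1.jar \
minlog-1.2.jar msm-kryo-serializer-1.5.1.jar reflectasm-0.9.jar"

# Illustration only: simulate a populated lib directory with empty files.
LIB_DIR=$(mktemp -d)
for jar in $REQUIRED_JARS; do touch "$LIB_DIR/$jar"; done

# Report any jar that is absent from the lib directory.
missing=0
for jar in $REQUIRED_JARS; do
    if [ ! -f "$LIB_DIR/$jar" ]; then
        echo "missing: $jar"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all MSM jars present"
rm -rf "$LIB_DIR"
```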
One more note: the project documents four serialization schemes, of which Kryo is the most efficient; for the comparison see http://code.google.com/p/memcached-session-manager/wiki/SerializationStrategies.
The next step is to modify Tomcat's configuration file $CATALINA_HOME/conf/context.xml to switch to the new session storage mode, adding the following:
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
         memcachedNodes="n1:127.0.0.1:11211"
         sticky="false"
         lockingMode="auto"
         sessionBackupAsync="false"
         sessionBackupTimeout="1000"
         transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"/>
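MSM also supports a sticky mode, where each Tomcat keeps its sessions locally and memcached only holds backups for failover. A sketch based on MSM's documented attributes; the two-node layout (n1/n2) and ports are illustrative, and failoverNodes must list different nodes on each Tomcat instance:

```xml
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
         memcachedNodes="n1:127.0.0.1:11211,n2:127.0.0.1:11212"
         sticky="true"
         failoverNodes="n1"
         transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"/>
```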
Add de.javakaffee.web.msm.level=FINE to the $CATALINA_HOME/conf/logging.properties file, and you can see detailed session accesses in the catalina.out log.
I also tried adding requestUriIgnorePattern=".*\.(png|gif|jpg|css|js)$" to the Manager configuration, but testing with Chrome showed the session ID suddenly changing, after which the interceptor bounced the user back to the login page. Removing the pattern made everything normal again. Since the interceptor only inspects actions, the pattern should have been completely irrelevant; I would welcome an expert's explanation.
3.3 Nginx Configuration
The Nginx side is very simple: just add a few servers to the upstream block. Here is my configuration:
#user nobody;
worker_processes 16;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;
    client_body_buffer_size 128k;

    sendfile on;
    tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    gzip on;
    gzip_types text/javascript text/plain text/css application/xml application/x-javascript;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    proxy_buffer_size 16k;
    proxy_buffers 4 32k;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Connection close;

    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 1024;

    # Default cache parameters for use by virtual hosts.
    # Set the cache path to a tmpfs-mounted disk; keep the maximum size of
    # the on-disk cache below the size of the tmpfs file system.
    proxy_cache_path ./cache levels=1:2 keys_zone=pscms:100m max_size=800m;
    proxy_temp_path ./proxy;

    # Back-end server pool
    upstream web_server {
        #ip_hash;
        server localhost:8080 max_fails=3 fail_timeout=30s;
        server localhost:8180 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 8888;                            # listen for IPv4
        #listen [::]:80 default ipv6only=on;    # listen for IPv6
        server_name localhost;
        charset utf-8;

        access_log logs/host.access.log main;
        #access_log off;

        location ~ .*\.(jsp|action)$ {
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://web_server;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$ {
            # If the back end returns 502 or 504 or times out, automatically
            # forward the request to another server in the upstream pool (failover).
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache pscms;             # cache static content
            proxy_cache_valid 304 1h;      # different cache lifetimes per HTTP status code
            proxy_cache_valid 302 5m;
            proxy_cache_valid any 1m;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            # Strip Accept-Encoding (or disable gzip on the back end) so this
            # machine does not cache compressed responses, which would come
            # back garbled.
            proxy_set_header Accept-Encoding "";
            # With this line added, proxy_cache can honor an Expires header
            # set by the back end.
            proxy_ignore_headers "Cache-Control" "Expires";
            proxy_pass http://web_server;
            expires 15m;
        }

        location / {
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://web_server;
        }
    }
}