1. Background
Recently we ran a stress test on a newly developed web system and found that, under Tomcat's default configuration, response times for 600 users concurrently logging on to the home page were severely degraded, with more than 2000 HTTP 500 and 502 errors in a single round. I checked the logon time statistics and printed the total server-side processing time: some processing times were indeed within 20 seconds, but the average was far from the response times reported by the LR (LoadRunner) test. So the application code itself was not where the time went, and since the test ran on a LAN, network problems could also be ruled out. That pointed to Tomcat's request handling capacity as the problem. First I raised Tomcat's thread count to 1000 (a sketch of the corresponding server.xml change is at the end of this section); the number of 500 and 502 errors dropped to a few dozen, but the response time did not improve. I then started two Tomcat instances and used nginx for load balancing: the response time dropped by 40%, and the processing time on each of the two Tomcat servers stayed at about 1 second.
It seems Tomcat performance really is the system's bottleneck, and multiple servers have to be considered to improve response capacity. Because the test only exercised logon, the Tomcat instances did not need to share sessions, but in real use they must work together, so here I record the installation and configuration of the load-balanced setup.
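For reference, the thread count is raised on the HTTP Connector in $CATALINA_HOME/conf/server.xml. A minimal sketch, assuming the default HTTP/1.1 connector on port 8080; the acceptCount value is my own assumption, not something taken from the test above:
<!-- maxThreads defaults to 200; acceptCount (request queue once all threads are busy) is an assumed value -->
<Connector port="8080" protocol="HTTP/1.1"
    connectionTimeout="20000"
    maxThreads="1000"
    acceptCount="1024"
    redirectPort="8443" />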
2. Solution Selection
There are several ways to make multiple Tomcat instances work together. The following solutions can be considered:
1. Use Tomcat's built-in clustering, in which the Tomcat instances automatically replicate session information to each other in real time. It is easy to configure, but the efficiency of this approach is relatively low and it does not perform well under high concurrency.
2. Use nginx's IP-based hash routing (ip_hash) so that requests from a given client IP are always routed to the same Tomcat (see the sketch after this list). This is even simpler to configure. However, in our application a large number of users are likely to log on at the same time from the same local area network, so many requests would share one source IP and the load balancing would be useless.
3. Use memcached to centrally manage the sessions of all Tomcat instances. This is the most direct solution, but also the most complicated to set up.
Our system needs both good performance and effective load balancing, so the third solution is the first choice. What follows is the installation and configuration process.
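For comparison, solution 2 would only require one extra directive in the nginx upstream block; a minimal sketch, using the same two local Tomcat instances that appear in the full configuration later:
upstream web_server {
    ip_hash;    # requests from the same client IP always hit the same backend
    server localhost:8080;
    server localhost:8180;
}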
3. Installation and Configuration
3.1 Install memcached
1) Download the source packages libevent-1.4.14b-stable.tar.gz and memcached-1.4.7.tar.gz. The former is a dependency of the latter and provides the event-driven layer.
2) The installation went very smoothly; it is still the classic configure/make/make install sequence:
tar zxvf libevent-1.4.14b-stable.tar.gz
cd libevent-1.4.14b-stable
./configure --prefix=/usr/local/libevent-1.4.14b
make
make install
cd ..
tar zxvf memcached-1.4.7.tar.gz
cd memcached-1.4.7
./configure --prefix=/usr/local/memcached-1.4.7 --with-libevent=/usr/local/libevent-1.4.14b/
make
make install
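One caveat worth noting (it did not bite me here, but it is a common trap with custom prefixes): if memcached later refuses to start because it cannot find the libevent shared library, register the library directory with the dynamic linker, for example:
echo "/usr/local/libevent-1.4.14b/lib" > /etc/ld.so.conf.d/libevent.conf   # path matches the prefix used above
ldconfig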
3) Start memcached:
./bin/memcached -d -m 256 -u root -p 11211 -c 1024 -P /tmp/memcached.pid
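A quick way to confirm that memcached is up and listening (assuming netcat is available; the quit command closes the connection so nc exits):
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head -n 5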
3.2 memcached-session-manager Configuration
Having Tomcat store sessions in memcached is already a mature solution, and the open-source memcached-session-manager (msm) handles exactly this. The official documentation is rather vague about which jar packages are needed. Since I use the kryo serialization scheme, quite a few packages are required:
kryo-1.03.jar
kryo-serializers-0.8.jar
memcached-2.5.jar (I see the latest official release has reached 2.7, but the msm site says to use 2.5; the newer packages may not have been tested with msm, and the 2.6 changelog mentions API changes, so better not to upgrade)
memcached-session-manager-1.5.1.jar
memcached-session-manager-tc7-1.5.1.jar
minlog-1.2.jar
msm-kryo-serializer-1.5.1.jar
reflectasm-0.9.jar
All of these packages go into the $CATALINA_HOME/lib directory.
Incidentally, msm officially provides four serialization schemes, of which kryo is the most efficient.
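Before touching the Tomcat configuration, a quick sanity check that all eight jars actually landed in Tomcat's lib directory (assuming $CATALINA_HOME points at the Tomcat installation):
ls $CATALINA_HOME/lib | grep -E 'kryo|memcached|msm|minlog|reflectasm'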
Next, modify Tomcat's configuration file $CATALINA_HOME/conf/context.xml to switch to the new session storage. Add the following to the configuration file:
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="n1:127.0.0.1:11211"
    sticky="false"
    lockingMode="auto"
    sessionBackupAsync="false"
    sessionBackupTimeout="1000"
    transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
Add de.javakaffee.web.msm.level=FINE to $CATALINA_HOME/conf/logging.properties so that detailed session access information shows up in the catalina.out log.
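Once that is in place, msm activity can be watched live while clicking through the application; something like the following works if Tomcat is started with the standard scripts so that output goes to catalina.out:
tail -f $CATALINA_HOME/logs/catalina.out | grep -i msm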
I also tried adding requestUriIgnorePattern=".*\.(png|gif|jpg|css|js)$" to the Manager configuration. When testing in Chrome, the sessionID would suddenly change and the interceptor would bounce me back to the home page; with the attribute removed everything works fine. Since the interceptor only checks actions, this setting should in theory make no difference at all, yet it does. I would appreciate any advice on this!
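For reference, the variant that triggered the problem would look like this, i.e. the same Manager element as above with only the ignore pattern added:
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="n1:127.0.0.1:11211"
    sticky="false"
    lockingMode="auto"
    sessionBackupAsync="false"
    sessionBackupTimeout="1000"
    requestUriIgnorePattern=".*\.(png|gif|jpg|css|js)$"
    transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>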
3.3 nginx Configuration
The nginx side is very simple: you only need to configure a couple more servers in the upstream block. Here is my configuration:
#user nobody;
worker_processes 16;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # log_format must be defined at the http level; it is referenced by the access_log in the server block below
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;

    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;
    client_body_buffer_size 128k;

    sendfile on;
    tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    gzip on;
    gzip_types text/plain application/xml application/x-javascript;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    proxy_buffer_size 16k;
    proxy_buffers 4 32k;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Connection close;

    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 1024;

    # Default cache parameters for use by virtual hosts
    # Set the cache path to a tmpfs-mounted disk, and name the cache zone
    # Set the maximum size of the on-disk cache to less than the tmpfs file system size
    proxy_cache_path ./cache keys_zone=pscms:100m max_size=800m;
    proxy_temp_path ./proxy;

    # Configure backend server information
    upstream web_server {
        #ip_hash;
        server localhost:8080 max_fails=3 fail_timeout=30s;
        server localhost:8180 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8888;                            # listen for ipv4
        #listen [::]:80 default ipv6only=on;    # listen for ipv6
        server_name localhost;
        charset utf-8;

        access_log logs/host.access.log main;
        #access_log off;

        location ~ .*\.(jsp|action)?$ {
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://web_server;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$ {
            # If the backend returns 502, 504, an error, or times out, forward the request
            # to another server in the upstream load-balancing pool for failover
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache pscms;              # use the cache zone defined by proxy_cache_path above
            proxy_cache_valid 200 304 1h;   # different cache times for different HTTP status codes
            proxy_cache_valid 301 302 5m;
            proxy_cache_valid any 1m;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Accept-Encoding "";              # (or disable gzip on the backend) so the proxy does not cache compressed responses, which would come out garbled
            proxy_ignore_headers "Cache-Control" "Expires";   # with this, proxy_cache honors the Expires set by the backend
            proxy_pass http://web_server;
            expires 15m;
        }

        location / {
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://web_server;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
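After saving the configuration, it is worth validating it and reloading nginx; assuming the nginx binary is on the PATH:
nginx -t          # check configuration syntax
nginx -s reload   # reload a running nginx (or start it plainly with: nginx)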
This article is from "small ocean Enterprises"