What is a Tomcat cluster?
 
Nginx distributes incoming requests across several Tomcat servers for processing. This reduces the load on each Tomcat server and improves response speed.
 
Target
 
Build a Tomcat cluster with high-performance load balancing provided by nginx.
 
Tools
 
Nginx-1.13.10
 
apache-tomcat-7.0.81 (two copies)
 
Steps
 
1. Download nginx.
 
2. Decompress the Tomcat archive twice (or decompress it once and copy the directory) and name the two copies apache-tomcat-7.0.81-1 and apache-tomcat-7.0.81-2 respectively.
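
On Windows, for example, the archive can be extracted once and then copied (a sketch; xcopy's /e flag copies all subdirectories and /i treats the target as a directory):

xcopy /e /i apache-tomcat-7.0.81 apache-tomcat-7.0.81-1
xcopy /e /i apache-tomcat-7.0.81 apache-tomcat-7.0.81-2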
 
3. Modify the startup ports of the two Tomcat servers to 8080 and 8181 respectively.
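
The ports are set in each instance's conf/server.xml. A minimal sketch for the second instance is shown below; the HTTP connector becomes 8181, while the shutdown and AJP ports here are example values chosen so they do not clash with the first instance, which can keep Tomcat's defaults (8005/8080/8009):

<!-- apache-tomcat-7.0.81-2/conf/server.xml (only the relevant elements) -->
<Server port="8006" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8181" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
    ...
  </Service>
</Server>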
 
4. Modify the default index.jsp page of each Tomcat server so the two instances can be told apart.
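
For example, a heading can be added to webapps/ROOT/index.jsp in each instance (the text below is only an illustration):

<%-- apache-tomcat-7.0.81-1/webapps/ROOT/index.jsp --%>
<h1>Tomcat 1 (port 8080)</h1>

<%-- apache-tomcat-7.0.81-2/webapps/ROOT/index.jsp --%>
<h1>Tomcat 2 (port 8181)</h1>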
 
5. Start both Tomcat servers and check that each one is reachable.
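
On Windows, for example, each instance can be started from its own command prompt (on Linux/macOS use bin/startup.sh instead):

cd apache-tomcat-7.0.81-1\bin
startup.bat

cd apache-tomcat-7.0.81-2\bin
startup.bat

Each instance should then answer on its own port: http://localhost:8080 and http://localhost:8181.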
 
6. Configure nginx by editing nginx-1.13.10/conf/nginx.conf.
 
Configure as follows:
 
worker_processes 1; # number of worker processes, usually the same as the number of CPU cores
events {
    worker_connections 1024; # maximum number of connections per worker process (max connections = worker_connections * worker_processes)
}
http {
    include mime.types; # mapping table of file extensions to MIME types
    default_type application/octet-stream; # default MIME type
    sendfile on; # enable efficient file transfer; keep it on for normal applications, and consider off for disk-I/O-heavy workloads such as downloads
    keepalive_timeout 65; # keep-alive connection timeout, in seconds
    gzip on; # enable gzip compression
    # Tomcat cluster
    upstream myapp { # name of the Tomcat cluster
        server localhost:8080; # tomcat1
        server localhost:8181; # tomcat2
    }
    # nginx server configuration
    server {
        listen 9090; # listening port, 80 by default
        server_name localhost; # domain name served by this nginx instance
        location / {
            proxy_pass http://myapp;
            proxy_redirect default;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
 
 
7. Start nginx from the command line (a DOS/command prompt on Windows).
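
For example, from a command prompt in the directory where nginx was unpacked:

cd nginx-1.13.10
start nginx

After editing nginx.conf, running nginx -s reload applies the new configuration without stopping the server, and nginx -s stop shuts it down.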
 
8. Test by accessing http://localhost:9090.
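
Refreshing the page in a browser, or requesting it repeatedly from the command line (assuming curl is available), should return the two customized index pages in turn, confirming that requests are being balanced:

curl http://localhost:9090
curl http://localhost:9090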
 
At this point, we have implemented a load-balanced Tomcat cluster using nginx.
 
Nginx load balancing policies:
 
1. Round Robin (default)
 
Each request is assigned to the backend servers one by one in order. If a backend server goes down, it is removed automatically.
 
upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
 
2. weight (weighted round robin)
 
Specifies the round-robin probability: weight is proportional to the share of requests a server receives. This is used when backend server performance is uneven.
 
upstream backserver {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=5; # the higher-weight server receives proportionally more requests
}
 
3. ip_hash (bind by client IP address)
 
Each request is assigned according to the hash of the client IP address, so that each visitor consistently reaches the same backend server, which solves the session problem.
 
upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
 
4. fair (third-party)
 
Requests are allocated based on the response time of the backend servers; servers with shorter response times are given priority.
 
upstream backserver {
    server server1;
    server server2;
    fair; # provided by a third-party module
}
 
5. url_hash (third-party)
 
Requests are allocated based on the hash of the requested URL, so that each URL is always directed to the same backend server. This is useful when the backend servers are caches.
 
upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32; # hash_method is provided by the third-party upstream hash module
}
 