Nginx load balancer configuration (HTTP proxy)
Nginx is a performance-oriented HTTP server. Compared with Apache and lighttpd, it uses less memory and is highly stable. Unlike older versions of Apache (<= 2.2), Nginx does not follow a one-process-or-thread-per-connection design; it is fully asynchronous and event-driven, which reduces context-switching overhead and gives it much stronger concurrency. Its overall design is modular, with a rich set of built-in and third-party modules and flexible configuration. On Linux, Nginx uses the epoll event model, which is a large part of why it is so efficient there; on FreeBSD and OpenBSD it uses kqueue, a similarly efficient event model. Nginx is also a high-performance HTTP and reverse proxy server as well as an IMAP/POP3/SMTP proxy server, and it is well known for its stability, rich feature set, simple configuration files, and low system resource consumption.
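As an aside on the event model mentioned above: the relevant settings live in the events block of nginx.conf. The snippet below is only a minimal sketch of how they can be spelled out explicitly; nginx normally selects epoll on Linux (or kqueue on BSD) by itself, so the use line is optional and is not part of the experiment's configuration.

worker_processes  1;              # number of worker processes
events {
    use epoll;                    # explicit on Linux; nginx would pick it automatically anyway
    worker_connections  1024;     # maximum simultaneous connections per worker
}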
My topic today is an Nginx load balancing experiment. I am recording the steps as study notes, and they may also serve as a reference for you.
1. Experiment environment
System version: CentOS release 5.9 (Final), x86 32-bit
Nginx version: 1.2.8
Nginx load balancer: 125.208.14.177, port 80
web1: 125.208.12.56, port 80
web2: 218.78.186.162, port 8090
web3: 125.208.14.177, port 8080
Here web1 and web2 run the Apache that ships with the system; change their listening ports as required, or install nginx on them instead if you prefer. On 125.208.14.177 I installed nginx and use it both as the load balancer and as a web server: the load balancer listens on port 80 and the web service on port 8080.
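If you would rather run nginx than Apache on web1 or web2, a backend only needs a plain server block. The sketch below is for illustration only (port 8090 matches web2 above; the document root is a hypothetical choice), not part of my actual setup:

server {
    listen       8090;                 # backend port the load balancer will forward to
    server_name  localhost;
    location / {
        root   /var/www/html;          # hypothetical document root on the backend
        index  index.html index.php;
    }
}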
2. Configuration file
[root@host-192-168-2-177 conf]# more nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    upstream site {
        server 125.208.12.56:80;
        server 218.78.186.162:8090;
        server 125.208.14.177:8080;
    }
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;
        location / {
            proxy_pass http://site;
            root   /var/www/html;
            index  index.php;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        location ~ \.php$ {
            root           /var/www/html;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }

    server {
        listen       8080;
        server_name  localhost2;
        location / {
            root   /var/www/html2;
            index  index.php;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        location ~ \.php$ {
            root           /var/www/html2;
            fastcgi_pass   127.0.0.1:9000;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }
    }
}
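A note on the upstream block: without extra parameters, nginx distributes requests round-robin with equal weights. If you need to skew the distribution or have failed backends taken out of rotation, the upstream module accepts per-server parameters such as weight, max_fails and fail_timeout. The values below are purely illustrative and not part of the experiment:

upstream site {
    server 125.208.12.56:80    weight=2;                     # gets roughly twice as many requests
    server 218.78.186.162:8090 max_fails=3 fail_timeout=30s; # taken out for 30s after 3 failed attempts
    server 125.208.14.177:8080;
    # ip_hash;  # uncomment to pin each client IP to a single backend
}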
3. Test
[root@host-192-168-2-177 conf]# curl 125.208.14.177
404 Not Found (the default nginx 404 page returned by one of the backends)
[root@host-192-168-2-177 conf]# curl 125.208.14.177
1234
[root@host-192-168-2-177 conf]# curl 125.208.14.177
This dir is /var/www/html2
--- Success: the three requests were answered by different backends, i.e. round-robin access works.
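Besides eyeballing the curl output, you can confirm which backend answered each request from the load balancer's access log, since the $upstream_addr variable records the chosen server. The two lines below are a sketch to add inside the http block; the log file path and format name are my own choices, not taken from the configuration above:

    log_format  upstreamlog  '$remote_addr -> $upstream_addr [$time_local] "$request" $status';
    access_log  /var/log/nginx/lb_access.log  upstreamlog;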
For more Nginx load balancer configuration tutorials, see the following:
Nginx + Tomcat load balancing and dynamic/static separation on Linux
Simple load balancing with Docker + Nginx + Tomcat 7
Nginx load balancing (master/slave) + Keepalived
Using Nginx as a load balancer
Load balancing three virtual machines with Nginx on CentOS
Nginx reverse proxy load balancing cluster practice