Examples of Nginx load balancing and round-robin load balancing on CentOS 6


Method 1: Load balancing using nginx round robin

Prepare three servers, or spin up virtual machines; I did this on virtual machines.
The IP addresses are 192.168.1.10, 192.168.1.11, and 192.168.1.12 (nginx is installed on each, with no extra configuration).
Ideally, the three server environments should be identical. I cloned mine directly from a VM, so the environments are definitely the same; if yours are not, you may run into strange problems that are not covered here.

192.168.1.10 acts as the load balancer (the load-balancing configuration below goes on this server only; the other two servers need no special configuration).
    
First, let's review some common load-balancing knowledge.
  
Currently, nginx upstream supports the following allocation methods:
1) Round robin (default)
Each request is distributed to a different backend server in turn; if a backend server goes down, it is removed automatically.
2) weight
Specifies the round-robin weight. weight is proportional to the share of requests a server receives, and is useful when backend server performance is uneven.
3) ip_hash
Each request is assigned according to the hash of the client IP address, so each visitor consistently reaches the same backend server, which helps solve session problems.
4) fair (third party)
Requests are allocated based on the response time of the backend servers; servers with shorter response times are served first.
5) url_hash (third party)
Requests are allocated according to the hash of the requested URL, so each URL is directed to the same backend server; this is mainly useful when the backends are caches.
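As a quick illustration of how these methods are chosen, here is a minimal upstream sketch reusing the upstream name and IP addresses from this example; round robin is the default, weight biases it, and uncommenting ip_hash switches to client-IP stickiness:

upstream netkou {
    # Default is round robin; weight biases the distribution, so here
    # 192.168.1.11 receives roughly twice as many requests as 192.168.1.12.
    server 192.168.1.11:80 weight=2;
    server 192.168.1.12:80 weight=1;

    # ip_hash;    # uncomment to pin each client IP to one backend (session persistence)
    # fair and url_hash require third-party modules compiled into nginx.
}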

Now you can start configuring. Open nginx.conf on 192.168.1.10; you only need to add the following to the configuration file:

upstream netkou {
    server 192.168.1.11:80;
    server 192.168.1.12:80;
}

server {
    listen 80;
    server_name www.111cn.net;
    location / {
        proxy_pass http://netkou;   # proxy to the upstream group defined above
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    access_log logs/access_log;
    error_log logs/error_log;
}

Practice:

I pointed www.111cn.net to 192.168.1.10 in my local hosts file.
Then I modified the default nginx welcome page:
vi /usr/local/nginx/html/index.html
Edit index.html so that it shows the IP address of the machine serving it.
Do this on both 192.168.1.11 and 192.168.1.12 so the test results are obvious.
Enter www.111cn.net in a local browser.
Each time you refresh, the page is served by a different backend server (the default round-robin method is used here simply because its effect is easy to see; you can change the method as needed).
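As an alternative check that does not require editing index.html on the backends, the load balancer can expose which backend served each request in a response header. This is only a sketch of an extra directive for the 192.168.1.10 configuration above, not a step from the original setup:

location / {
    proxy_pass http://netkou;
    # $upstream_addr contains the address of the backend that handled the request,
    # e.g. 192.168.1.11:80; view it in the browser's developer tools or with curl -I.
    add_header X-Upstream $upstream_addr;
}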

Method 2: Configuring an Nginx load balancer instance


The following uses the reverse proxy function of Nginx to configure an Nginx load-balancing server. The backend has three nodes providing web services, and Nginx scheduling balances the load across the three nodes.
Edit /etc/nginx/conf.d/default.conf:


upstream myserver {
    server 192.168.12.181:80 weight=3 max_fails=3 fail_timeout=20s;
    server 192.168.12.182:80 weight=1 max_fails=3 fail_timeout=20s;
    server 192.168.12.183:80 weight=4 max_fails=3 fail_timeout=20s;
}

server {
    listen 80;
    server_name www.admin130.cn 192.168.12.189;
    index index.htm index.html;
    root /ixdba/web/wwwroot;

    location / {
        proxy_pass http://myserver;
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        include /opt/nginx/conf/proxy.conf;
    }
}


In the configuration above, a load-balancing group named myserver is defined first, and then proxy_pass http://myserver is used in the location section. The proxy_pass directive specifies the address and port of the backend to proxy to; the address can be a host name, an IP address, or the name of a load-balancing group defined with the upstream directive. proxy_next_upstream defines the failover policy: when a backend node returns a 500, 502, or 503 error, or an error, timeout, or invalid header occurs, the request is automatically forwarded to another server in the upstream group, achieving failover. Finally, the include directive pulls in the proxy.conf file.
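To make the three address forms concrete, here is a minimal sketch of the ways proxy_pass can be written (the host name backend.example.com is purely illustrative and not part of this setup):

location / {
    # proxy_pass can target an IP address, a host name, or an upstream group:
    # proxy_pass http://192.168.12.181:80;      # a single backend by IP
    # proxy_pass http://backend.example.com;    # a single backend by host name (illustrative)
    proxy_pass http://myserver;                 # the upstream group defined above
}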

The content of /opt/nginx/conf/proxy.conf is:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

The proxy functionality of Nginx is implemented by the HTTP proxy module, which is built in by default when Nginx is installed, so it can be used directly. The meaning of each option in the proxy.conf file is described in detail below.

proxy_set_header: sets request headers passed to the backend server so that it can obtain the user's host name and real IP address rather than the proxy's address.
client_body_buffer_size: specifies the buffer size for the client request body; the body is buffered locally before being forwarded to the backend.
proxy_connect_timeout: the timeout for establishing a connection with the backend server, that is, how long to wait for a response after initiating the handshake.
proxy_send_timeout: the timeout for transmitting the request to the backend server; if the backend does not receive data within this time, the connection is closed.
proxy_read_timeout: the timeout for reading a response from the backend server, that is, how long Nginx waits for the backend to respond after the connection has been established (in effect, the time the request may spend queued for processing on the backend).
proxy_buffer_size: the size of the buffer used for the first part of the backend response; by default it equals the size of one buffer set by proxy_buffers.
proxy_buffers: the number and size of the buffers that hold the response Nginx reads from the backend server.
proxy_busy_buffers_size: the amount of buffer space that may be in use when the system is busy; the commonly recommended size is proxy_buffers * 2.
proxy_temp_file_write_size: the amount of data written at a time to temporary files when the proxy buffers the backend response to disk.
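Note that client_body_buffer_size is described above but does not appear in the proxy.conf listing; if you want to set it explicitly, it can be added to the same file. The 128k value below is only an illustration, not a value from the original configuration:

# optional addition to /opt/nginx/conf/proxy.conf (illustrative value)
client_body_buffer_size 128k;    # buffer the client request body locally before forwarding it to the backend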
