Nginx Load Balancing Configuration

Source: Internet
Author: User
Tags: nginx, load balancing

Original: http://nginx.org/en/docs/http/load_balancing.html

This article is a free translation based on my own understanding of the original.

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing server throughput, reducing latency, and ensuring fault tolerance. Nginx can be used as a very efficient HTTP load balancer, distributing traffic across several application servers to improve performance; it is also a reliable and scalable web server in its own right.

Load balancing methods
Round-robin - requests are distributed to the application servers in turn
Least-connected - the next request is assigned to the server with the fewest active connections
IP-hash - a hash function based on the client's IP address determines which server handles the request

The following is the simplest load-balancing configuration:

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}

In the configuration above, three instances of the same application run on srv1 through srv3. Since no load-balancing method is specified, the default round-robin method is used. All requests are reverse-proxied to the server group myapp1, and nginx distributes them across the three servers.

In nginx, load balancing is implemented through reverse proxying, and it works for HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached.
To configure load balancing for HTTPS instead of HTTP, simply use "https" as the protocol; the other configuration remains unchanged.
To set up load balancing for FastCGI, uwsgi, SCGI, or memcached, use the fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives respectively.
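As a minimal sketch of what this looks like for FastCGI (the group name "phpbackend", the backend addresses, and the document root are illustrative, not from the original), the upstream group is referenced the same way an HTTP one is, just with fastcgi_pass in place of proxy_pass:

```nginx
# Hypothetical pool of PHP-FPM (FastCGI) backends.
upstream phpbackend {
    server 10.0.0.1:9000;
    server 10.0.0.2:9000;
}

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        # Illustrative document root; adjust to your deployment.
        fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
        # fastcgi_pass plays the role that proxy_pass plays for HTTP.
        fastcgi_pass phpbackend;
    }
}
```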

Another load-balancing method is least-connected. It distributes the load more fairly in situations where some requests take longer to complete: with least-connected, nginx tries not to overload a busy server with new requests and routes them to less busy servers instead.
To activate least-connected load balancing, add the least_conn directive inside the upstream {} block:

upstream myapp1 {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}


Note that with round-robin and least-connected load balancing, each new request may go to a different server; there is no guarantee that the same client always reaches the same server.

If you want a client to always be directed to the same server, use the IP-hash load balancing method.
With IP-hash, the client's IP address is used as the hash key to determine which server in the group should handle the request. This ensures that requests from the same client always go to the same server.
Use the ip_hash directive to enable this mode:

upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

Load balancing can also be weighted. Assigning a weight to a server on top of round-robin makes it receive a proportionally larger share of the requests:
upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}
With the configuration above, out of every five requests, three are sent to srv1, one to srv2, and one to srv3. If no weight is specified, all servers are treated equally.


Load balancing in nginx also includes passive server health checks. If a request to a server fails with an error or a timeout, nginx marks that server as failed and, for a while, avoids sending subsequent requests to it. The max_fails directive sets the number of failed attempts within the fail_timeout period after which the server is considered failed; the default is 1. The fail_timeout directive sets both the window in which those failures must occur and how long the server stays marked as failed. After that period, nginx gracefully probes the server with live client requests; if the probes succeed, the server is marked as live again.
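A sketch of tuning these parameters (the specific values are illustrative, not from the original):

```nginx
upstream myapp1 {
    # Mark srv1 failed after 3 errors within 30 seconds,
    # and keep it out of rotation for 30 seconds.
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com;
    # max_fails=0 disables failure accounting for this server.
    server srv3.example.com max_fails=0;
}
```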

Further directives and parameters control load balancing in more detail: proxy_next_upstream, backup, down, and keepalive.
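For illustration, a sketch combining several of these (the server names are the same hypothetical ones used above, and the keepalive count is arbitrary):

```nginx
upstream myapp1 {
    server srv1.example.com;
    # "down" temporarily removes srv2 from the rotation.
    server srv2.example.com down;
    # "backup" means srv3 only receives requests when the
    # primary servers are unavailable.
    server srv3.example.com backup;
    # Keep up to 16 idle connections to the upstream servers open.
    keepalive 16;
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
        # HTTP/1.1 with an empty Connection header is required
        # for keepalive connections to upstream servers.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # Retry the request on another server on error or timeout.
        proxy_next_upstream error timeout;
    }
}
```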
