[Nginx] Configuring Nginx for load balancing

Source: Internet
Author: User
Tags: nginx, reverse proxy, load balancing

As mentioned in a previous article, enterprises tackling high-concurrency problems generally work along two directions: hardware and software. On the hardware side, a load balancer is added to distribute the large volume of requests. On the software side, solutions target the two typical high-concurrency bottlenecks: the database and the web server. For the web server, the most common way to add load-handling capacity is to use Nginx for load balancing.

I. The role of load balancing

1. Forwarding function

According to a certain algorithm (weight, round-robin), client requests are forwarded to different application servers, reducing the pressure on any single server and improving the system's concurrency.

2. Failure removal

Through heartbeat detection, the load balancer determines whether each application server is currently working properly; if a server is down, requests are automatically sent to the other application servers instead.

3. Recovery re-addition

If a failed application server is detected to have recovered, it is automatically added back into the pool handling user requests.

II. Achieving load balancing with Nginx
As before, two Tomcat instances simulate two application servers, listening on ports 8080 and 8081 respectively.

1. Nginx Load Distribution Strategy

Nginx's upstream module currently supports the following allocation algorithms:
1) Round-robin (default)

Each request is assigned to a different application server in order of arrival; if an application server goes down, it is automatically removed and polling continues over the remaining servers.
2) weight

Weights configure the polling probability: a server's chance of being chosen is proportional to its weight relative to the others. This suits clusters where server performance is uneven.
3) ip_hash

Each request is allocated according to a hash of the client's IP address, so each visitor consistently reaches the same application server. This can solve the session-sharing problem.
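For example, switching to the ip_hash strategy takes only one extra directive inside the upstream block. A minimal sketch, reusing the example addresses from this article:

```nginx
upstream tomcatserver1 {
    # hash the client IP so each visitor always lands on the same backend
    ip_hash;
    server 192.168.72.49:8080;
    server 192.168.72.49:8081;
}
```

Note that ip_hash is a trade-off: it avoids having to share session state between the Tomcats, but the distribution is only as even as the spread of client IPs.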

2. Configuring the Nginx load-balancing and distribution strategy

The distribution strategy is configured by appending parameters after each application server address listed in the upstream block, for example:

upstream tomcatserver1 {
    server 192.168.72.49:8080 weight=3;
    server 192.168.72.49:8081;
}

server {
    listen       80;
    server_name  8080.max.com;

    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass   http://tomcatserver1;
        index  index.html index.htm;
    }
}

With the configuration above, when a visitor accesses 8080.max.com, every request first passes through the Nginx reverse proxy server because of the configured proxy_pass address. When forwarding a request to the destination host, Nginx reads the tomcatserver1 upstream and its distribution policy. Since tomcat1 is configured with weight 3, Nginx sends most requests (roughly three out of every four) to tomcat1 on the .49 server, i.e. port 8080, and fewer to tomcat2, achieving conditional load balancing. The condition, of course, is that the hardware of servers 1 and 2 can handle requests at that ratio.

3. Other Nginx upstream parameters

upstream myserver {
    server 192.168.72.49:9090 down;
    server 192.168.72.49:8080 weight=2;
    server 192.168.72.49:6060;
    server 192.168.72.49:7070 backup;
}

1) down

Indicates that the server temporarily does not participate in the load.

2) weight

Defaults to 1. The larger the weight, the larger that server's share of the load.

3) max_fails

The number of failed requests allowed; defaults to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.

4) fail_timeout

The length of time the server is paused after max_fails failures.

5) backup

Requests are sent to the backup machine only when all non-backup machines are down or busy, so under normal conditions this machine carries the lightest load.
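Putting max_fails and fail_timeout together gives a simple passive health check. A sketch reusing the addresses above; the thresholds are illustrative, not from the original article:

```nginx
upstream myserver {
    # after 3 failures within 30s, take the server out of rotation for 30s
    server 192.168.72.49:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 192.168.72.49:6060 max_fails=3 fail_timeout=30s;
    # only receives traffic when both servers above are unavailable
    server 192.168.72.49:7070 backup;
}
```

This is the "failure removal" and "recovery re-addition" behavior described in section I: a server that keeps failing is paused, and once fail_timeout elapses Nginx tries it again and, if it responds, returns it to the rotation.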

III. High availability of Nginx

Beyond making the site itself highly available, i.e. publishing the same service on n servers and adding a load-balancing server to distribute requests so that each server stays within comfortable capacity under high concurrency, the load-balancing server itself also needs to be highly available; otherwise, if it goes down, the application servers behind it can no longer be reached.

The high-availability scheme is to add redundancy: deploy n Nginx servers to avoid a single point of failure. A detailed scheme: Keepalived + Nginx for highly available load balancing.
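As a hedged sketch of the Keepalived side of that scheme (the interface name, router ID, and virtual IP below are placeholders, not values from the original article): the two Nginx machines share one virtual IP, and VRRP moves it to the backup machine when the master fails.

```
# /etc/keepalived/keepalived.conf on the MASTER (values are illustrative)
vrrp_instance VI_1 {
    state MASTER            # the peer machine uses BACKUP
    interface eth0          # NIC that will carry the virtual IP
    virtual_router_id 51    # must match on both machines
    priority 100            # the peer uses a lower value, e.g. 90
    advert_int 1            # VRRP advertisement interval in seconds
    virtual_ipaddress {
        192.168.72.200      # clients point at this VIP, not at either Nginx box
    }
}
```

Clients and DNS reference only the virtual IP, so failover between the Nginx servers is transparent to them.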

IV. Summary

To sum up, load balancing, whether implemented in software or hardware, mainly distributes large numbers of concurrent requests across different servers according to certain rules, reducing the instantaneous pressure on any one server and improving the site's resilience under concurrency. Nginx is widely used for load balancing, which I attribute to its flexible configuration: a single nginx.conf file solves most problems, whether you are creating virtual servers with Nginx, running it as a reverse proxy, or, as described in this article, using it as a load balancer. The backend servers themselves need no special setup beyond running Nginx's targets. Nginx is also lightweight, achieving good results without consuming many server resources. Quite impressive.

