Nginx load balancing in layman's terms

Source: Internet
Author: User
Tags: nginx, load balancing

Nginx is not only a powerful web server but also a reverse proxy server. It can separate dynamic and static pages according to scheduling rules, load-balance the backend servers by round-robin (polling), IP hash, URL hash, weight, and other methods, and it also supports health checks for the backend servers.

If there is only one server and that server goes down, it is a disaster for the website. This is where load balancing shines: it automatically removes servers that have gone down from the rotation.

The following is a brief introduction to my experience using Nginx load balancing.

Downloading and installing Nginx are not covered here; a previous article introduced them.

Nginx load balancing is configured the same way on Windows and Linux, so the two are not introduced separately.

Some basics of Nginx load balancing:

Nginx's upstream currently supports five different ways of distributing requests:
1) Round-robin / polling (default)
Each request is assigned to a different backend server in chronological order; if a backend server goes down, it is automatically removed.
2) weight
Specifies the polling probability. The weight is proportional to the access ratio; this is used when backend server performance is uneven.
3) ip_hash
Each request is allocated according to the hash of the client IP, so each visitor consistently reaches the same backend server, which solves the session problem.
4) fair (third party)
Requests are allocated according to the backend servers' response times; servers with shorter response times are given priority.
5) url_hash (third party)
Requests are allocated according to the hash of the requested URL, so the same URL always reaches the same backend server; this is effective when the backends are caches.

Configuration:

Add the following inside the http node:

# Define the IP addresses and status of the load-balancing backends

upstream myServer {
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}

Add the following under the server node that needs to use the load-balanced upstream:

proxy_pass http://myServer;
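For context, a minimal server block using the upstream group above might look like the following sketch (the listen port, server name, and proxy headers are illustrative assumptions, not from the original):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        # Forward requests to the upstream group defined in the http node
        proxy_pass http://myServer;
        # Pass the original Host header and client IP on to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```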

The status of each device in upstream:

down: indicates that the server temporarily does not participate in the load.
weight: defaults to 1. The larger the weight, the greater the share of the load.
max_fails: the number of failed requests allowed, defaulting to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: the length of the pause after max_fails failures.
backup: requests go to the backup machine only when all other non-backup machines are down or busy, so this machine carries the lightest load.
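To illustrate how max_fails and fail_timeout work together, here is a sketch (the addresses, counts, and timeouts are example values, not from the original):

```nginx
upstream myServer {
    # After 3 failed requests within 30s, take the server
    # out of rotation for the next 30s
    server 127.0.0.1:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:6060 max_fails=3 fail_timeout=30s;
    # Used only when all the servers above are unavailable
    server 127.0.0.1:7070 backup;
}
```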

Nginx also supports multiple groups of load balancing: several upstream blocks can be configured to serve different sets of servers.
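A sketch of two upstream groups serving different virtual hosts (the group names, domains, and ports are assumptions for illustration):

```nginx
upstream webServers {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

upstream apiServers {
    server 127.0.0.1:9090;
    server 127.0.0.1:9091;
}

server {
    listen 80;
    server_name www.example.com;   # hypothetical site
    location / { proxy_pass http://webServers; }
}

server {
    listen 80;
    server_name api.example.com;   # hypothetical API host
    location / { proxy_pass http://apiServers; }
}
```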

Configuring load balancing is simple, but one of the most critical issues is how to share sessions among multiple servers.

Here are a few methods (the following is gathered from the network; the fourth method has not been tried in practice):

1) Do not use sessions; use cookies instead

If you can change sessions into cookies, you can avoid some of the drawbacks of sessions. A Java EE book I read earlier also pointed out that sessions should not be used in a clustered system, or they will cause trouble. If the system is not complex, first consider whether sessions can be removed; if changing the code would be too troublesome, use one of the following methods instead.

2) Let the application server share sessions itself

ASP.NET can save sessions in a database or in memcached, thereby building a session cluster within .NET itself. This keeps sessions stable: even if a node fails, the sessions are not lost. It suits scenarios with strict reliability requirements but modest performance requirements; it is not very efficient, so it is unsuitable where high efficiency is demanded.

The above two methods have nothing to do with Nginx. The following describes how to handle it with Nginx:

3) Ip_hash

The ip_hash technique in Nginx can direct requests from one IP to the same backend, so a client with that IP and that backend can establish a stable session. ip_hash is defined in the upstream configuration:

upstream backend {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

ip_hash is easy to understand, but because it can only use the client IP as the factor for assigning backends, it is flawed and cannot be used in some cases:

1. Nginx is not the frontmost server. ip_hash requires that Nginx be the frontmost server; otherwise Nginx cannot obtain the correct client IP and cannot hash on it. For example, if Squid is used as the frontmost server, Nginx can only obtain the Squid server's IP address, and distributing traffic by that address is certainly wrong.

2. There is another kind of load balancing behind Nginx. If the Nginx backend has another load balancer that distributes requests in a different way, then a given client's requests cannot be guaranteed to land on the same session application server. In this case, the Nginx backend can only point directly to the application servers, or to another Squid that then points to the application servers. The best way is to split traffic with location: route the part that needs sessions through ip_hash, and route the rest to the other backends.
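The location-based split described above might be sketched as follows (the upstream names and the /app/ path are assumptions for illustration):

```nginx
# Session-sensitive traffic goes through the ip_hash group
upstream appBackend {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

# Everything else can use plain round-robin
upstream staticBackend {
    server 127.0.0.1:6060;
    server 127.0.0.1:7070;
}

server {
    listen 80;

    # Hypothetical path prefix for requests that need session stickiness
    location /app/ {
        proxy_pass http://appBackend;
    }

    location / {
        proxy_pass http://staticBackend;
    }
}
```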

4) upstream_hash

To solve some of the problems with ip_hash, you can use the third-party upstream_hash module. This module is mostly used as url_hash, but nothing prevents it from being used for session sharing.

If the front end is Squid, it adds the client IP to the X-Forwarded-For HTTP header; with upstream_hash, this header can be used as the hash factor to direct requests to a specified backend.

This is described in the following document: http://www.sudone.com/nginx/nginx_url_hash.html

The document uses $request_uri as the factor; change it a little:

hash $http_x_forwarded_for;

This changes the hash to use X-Forwarded-For as the factor. Newer versions of Nginx can also read cookie values, so it can likewise be changed to:

hash $cookie_jsessionid;
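Put in context, the hash directive sits inside the upstream block; a sketch, assuming the third-party upstream_hash module is compiled in (the addresses are placeholders):

```nginx
upstream backend {
    # Stick each session to one backend by hashing the JSESSIONID cookie
    hash $cookie_jsessionid;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}
```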

If the session configured in PHP uses no cookie, Nginx's own userid module can be used to have Nginx issue a cookie itself; see the userid module's English documentation:
http://wiki.nginx.org/NginxHttpUserIdModule
Another option is the upstream_jvm_route module written by Yiu Weibin: http://code.google.com/p/nginx-upstream-jvm-route/

