A Detailed Introduction to Nginx Load Balancing

Source: Internet
Author: User
Tags: hash, nginx, server, load balancing

If a site runs on a single server and that server goes down, it is a disaster for the site. Load balancing solves this: with several servers behind a balancer, a failed server is automatically removed from rotation and the site stays up.

The following is a brief introduction based on my experience using Nginx for load balancing.

Downloading and installing Nginx are not covered here; a previous article introduced them.

The load-balancing configuration is written the same way on Windows and Linux, so the two are not covered separately.

Some basics of Nginx load balancing:

Nginx's upstream module currently supports the following allocation methods:
1) Round robin (the default)
Each request is assigned to a different backend server in turn; if a backend server goes down, it is removed automatically.
2) weight
Specifies the polling probability. The weight is proportional to the share of traffic a server receives, and is used when backend servers have uneven performance.
3) ip_hash
Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same backend server; this solves the session problem.
4) fair (third party)
Requests are assigned according to the backend servers' response times; servers with shorter response times are given priority.
5) url_hash (third party)
Requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server; this is mainly useful when the backends are caches.
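As a sketch of how these methods are declared (the `fair` and `url_hash` directives come from third-party modules that must be compiled into Nginx; the pool names and addresses here are hypothetical):

```nginx
# weight: 8080 receives roughly twice the traffic of 6060
upstream weighted_pool {
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060 weight=1;
}

# fair (third-party module): prefer the backend with the
# shortest response time
upstream fair_pool {
    fair;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

# url_hash (third-party module): hash on the URI so the same
# URL always hits the same backend (useful for backend caches)
upstream urlhash_pool {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
    hash $request_uri;
}
```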

Configuration:

Add the following inside the http block:

# define the load-balanced servers' IPs and status
upstream myServer {
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}

Then, inside the server block (in a location block) that should use the load balancer, add:

    proxy_pass http://myServer;
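Putting the two pieces together, a minimal server block might look like this (an untested sketch; the hostname is hypothetical):

```nginx
http {
    upstream myServer {
        server 127.0.0.1:8080 weight=2;
        server 127.0.0.1:6060;
    }

    server {
        listen 80;
        server_name example.com;   # hypothetical hostname

        location / {
            proxy_pass http://myServer;
            # pass the real client IP and host on to the backends
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host            $host;
        }
    }
}
```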

The status flags for each server in upstream:

down: the server is temporarily out of rotation and does not participate in the load.
weight: defaults to 1; the larger the weight, the larger the server's share of the load.
max_fails: the number of failed requests allowed, default 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.
fail_timeout: how long the server is paused after max_fails failures.
backup: requests go to backup machines only when all other non-backup machines are down or busy, so these machines carry the lightest load.
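A sketch of how these flags combine in one upstream block (the values are illustrative, not recommendations):

```nginx
upstream myServer {
    # taken out of rotation for 30s after 3 failed requests
    server 127.0.0.1:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:6060 max_fails=3 fail_timeout=30s;
    # only used when every server above is down or busy
    server 127.0.0.1:7070 backup;
}
```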

Nginx also supports multiple groups of load balancing: several upstream blocks can be configured to serve different sets of servers.
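For example, two upstream blocks could serve two parts of one site (pool names and paths here are hypothetical):

```nginx
upstream app_pool {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

upstream static_pool {
    server 127.0.0.1:9090;
    server 127.0.0.1:9091;
}

server {
    listen 80;
    # each location is proxied to its own pool
    location /app/    { proxy_pass http://app_pool; }
    location /static/ { proxy_pass http://static_pool; }
}
```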

Configuring load balancing itself is simple; the most critical issue is how to share sessions among multiple backend servers.

Here are a few methods (the following comes from around the web; I have not tried the fourth method myself):

1) Do not use sessions; use cookies instead

Sessions can be converted into cookies, which avoids some of the drawbacks of sessions. A Java EE book I once read also pointed out that sessions should not be used in a clustered system, otherwise they cause trouble that is hard to untangle. If the system is not complex, consider removing sessions first; if that change would be too cumbersome, use one of the following methods instead.

2) Let the application server share sessions itself

ASP.NET can save sessions in a database or in memcached, thereby building a session cluster within ASP.NET itself. Sessions stay stable this way: even if a node fails, the session is not lost. This suits scenarios with strict reliability requirements, but it is not very efficient and therefore does not suit scenarios with high performance demands.

The two methods above have nothing to do with Nginx. The following describes what can be done with Nginx itself:

3) ip_hash

The ip_hash technique in Nginx routes requests from a given IP to the same backend, so a client at that IP and that backend can establish a stable session. ip_hash is defined in the upstream block:

upstream backend {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

ip_hash is easy to understand, but because it can only use the IP as the distribution factor, it is flawed and cannot be used in some situations:

1. Nginx is not the front-most server. ip_hash requires Nginx to be the front-most server; otherwise Nginx cannot obtain the real client IP and cannot hash on it. For example, if Squid is used as the front-most server, Nginx only sees the Squid server's IP address, and distributing traffic by that address is certainly wrong.

2. There is another layer of load balancing behind Nginx. If another load balancer behind Nginx distributes requests in a different way, then a given client's requests cannot be guaranteed to land on the same application server. In this case, the Nginx backend must point directly at the application servers, or at a Squid layer that in turn points at the application servers. The best alternative is to split traffic with location: route the part of the site that needs sessions via ip_hash, and let the rest go to the other backends.
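The location-based split described above could be sketched like this (paths and pool names are hypothetical):

```nginx
upstream session_pool {
    ip_hash;                  # sticky: session-bearing traffic
    server 127.0.0.1:8080;
    server 127.0.0.1:8081;
}

upstream other_pool {
    server 127.0.0.1:9090;    # free to balance any way
    server 127.0.0.1:9091;
}

server {
    listen 80;
    # only the part of the site that needs sessions uses ip_hash
    location /account/ { proxy_pass http://session_pool; }
    location /         { proxy_pass http://other_pool; }
}
```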

4) upstream_hash

To work around some of ip_hash's problems, you can use the third-party upstream_hash module. It is mostly used as url_hash, but nothing prevents it from being used for session sharing:

If the front end is Squid, it appends the client IP to the X-Forwarded-For HTTP header. upstream_hash can use this header as the hash factor to route each request to a fixed backend:

See this document: http://www.sudone.com/nginx/nginx_url_hash.html

That document uses $request_uri as the factor; change it slightly:

hash $http_x_forwarded_for;

This uses the X-Forwarded-For header as the factor. Newer versions of Nginx can also read cookie values, so it can likewise be changed to:

hash $cookie_jsessionid;
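In context, the complete upstream block would look roughly like this (the upstream_hash module must be compiled in; the pool name is hypothetical):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
    # route by the JSESSIONID cookie so one session sticks to
    # one backend (requires the third-party upstream_hash module)
    hash $cookie_jsessionid;
}
```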

If the session configured in PHP is cookieless, Nginx's own userid module can be used to have Nginx issue a cookie itself; see the English documentation of the userid module:
http://wiki.nginx.org/NginxHttpUserIdModule
Another option is the upstream_jvm_route module written by Weibin Yao: http://code.google.com/p/nginx-upstream-jvm-route/

PS: I am still looking for help: why are the page styles deployed on the Nginx server displayed incorrectly?
