Server Load Balancing with Nginx

Source: Internet
Author: User
Tags: nginx, load balancing

Nginx can be used not only as a powerful web server, but also as a reverse proxy server. It can separate dynamic and static content according to scheduling rules, balance load across backend servers in several ways (round robin, ip_hash, url_hash, and weight), and perform health checks on the backend servers.

If a website runs on a single server and that server goes down, the whole site becomes unreachable. This is where a load balancer shows its strength: when a backend fails, it automatically removes that server from rotation.

Below is a brief introduction to my experience using nginx for load balancing.

Downloading and installing nginx is not covered here; see the previous article.

Nginx load-balancing configuration is the same on Windows and Linux.

Some basic knowledge about nginx load balancing:

Currently, nginx upstream supports five allocation methods:
1) Round robin (default)
Each request is distributed to a different backend server in order of arrival. If a backend server goes down, it is removed automatically.
2) weight
Specifies the round-robin probability. The weight is proportional to the share of traffic; use it when backend server performance is uneven.
3) ip_hash
Each request is allocated according to the hash of the client IP address, so each visitor consistently reaches the same backend server, which solves the session problem.
4) fair (third party)
Requests are allocated based on the response time of the backend servers; servers with shorter response times are preferred.
5) url_hash (third party)
Requests are allocated according to the hash of the requested URL, so the same URL always reaches the same backend; this is most useful when the backend servers are caches.

Configuration:

Add the following to the http block of nginx.conf:

# Define the load-balanced devices (IP addresses and status)

upstream myserver {

    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}

Then add the following to the server or location block that should be proxied:

proxy_pass http://myserver;
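Putting the two pieces together, a minimal nginx.conf sketch might look like this (the upstream name and ports follow the example above; the listen port and the X-Forwarded-For header line are illustrative assumptions):

```nginx
# Minimal load-balancing sketch (http context only).
http {
    upstream myserver {
        server 127.0.0.1:9090 down;      # temporarily out of rotation
        server 127.0.0.1:8080 weight=2;  # receives twice the traffic
        server 127.0.0.1:6060;
        server 127.0.0.1:7070 backup;    # used only when the others fail
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myserver;
            # Pass the original client address on to the backends
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
```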

The status parameters for each device in upstream:

down: the current server temporarily does not participate in the load.
weight: defaults to 1; the larger the weight, the larger the share of the load.
max_fails: the number of allowed failed requests, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: the time to pause the server after max_fails failures.
backup: receives requests only when all other non-backup machines are down or busy, so this machine is under the least pressure.
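As a sketch, max_fails and fail_timeout are set per server line; the values below are illustrative assumptions, not recommendations:

```nginx
upstream myserver {
    # After 3 failed requests, take this server out of rotation
    # for 30 seconds, then try it again.
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:6060 max_fails=3 fail_timeout=30s;
}
```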

Nginx also supports multiple groups of load balancing: you can configure multiple upstream blocks to serve different sets of servers.
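For example, two upstream groups can serve different kinds of requests within one server block (the upstream names, ports, and the /static/ path are illustrative assumptions):

```nginx
http {
    upstream app_servers {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    upstream static_servers {
        server 127.0.0.1:9090;
        server 127.0.0.1:9091;
    }

    server {
        listen 80;

        # Dynamic requests go to one group...
        location / {
            proxy_pass http://app_servers;
        }

        # ...static files to another.
        location /static/ {
            proxy_pass http://static_servers;
        }
    }
}
```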

Configuring load balancing itself is simple, but the most critical issue is how to share sessions among multiple servers.

There are several methods below (the following content comes from the network; I have not tried the fourth method myself).

1) Use cookies instead of sessions

By storing state in cookies instead of sessions, you can avoid some of the drawbacks of sessions. A J2EE book I read earlier also pointed out that sessions cannot be used naively in a clustered system, otherwise the problem becomes hard to solve. If the system is not complex, consider removing sessions first. If that change would be very troublesome, use one of the following methods.

2) Have the application server share the session data itself.

ASP.NET can store sessions in a database or in memcached, establishing a session cluster within ASP.NET itself. This keeps sessions stable even if a node fails, so sessions are not lost. It is suitable for scenarios with strict session requirements but low request volume; the efficiency is not high, however, so it is not applicable where high throughput is required.

The above two methods have nothing to do with nginx. The following describes how to handle sessions with nginx:

3) ip_hash

The ip_hash directive in nginx can direct requests from the same client IP address to the same backend, so a client behind that IP and a backend can establish a stable session. ip_hash is defined in the upstream configuration:

upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
    ip_hash;
}

ip_hash is easy to understand, but it is flawed because it can only use the client IP address as the allocation factor. In some cases it cannot be used:

1) nginx is not the frontmost server. ip_hash requires nginx to be the frontmost server; otherwise nginx cannot obtain the correct client IP address and cannot hash on it. For example, if squid is used as the frontend, nginx will only see the squid server's IP address, and distributing traffic based on that address is certainly wrong.

2) There is another load balancing layer behind nginx. If another load balancer behind nginx distributes requests in a different way, requests from one client cannot be pinned to the same session application server. In this case, nginx's backends can only point directly to the application servers, or another squid layer can be added that then points to the application servers. The best approach is to split traffic once with location: requests that require a session are distributed via ip_hash, and the rest go through other backends.
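The location-based split described above could be sketched like this (the /app/ path, upstream names, and ports are assumptions for illustration):

```nginx
http {
    # Session-sensitive requests: pin each client IP to one backend.
    upstream session_backend {
        ip_hash;
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
    }

    # Everything else: plain round robin.
    upstream plain_backend {
        server 127.0.0.1:9090;
        server 127.0.0.1:9091;
    }

    server {
        listen 80;

        location /app/ {
            proxy_pass http://session_backend;
        }

        location / {
            proxy_pass http://plain_backend;
        }
    }
}
```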

4) upstream_hash

To work around the problems of ip_hash, you can use the third-party upstream_hash module. It is generally used for url_hash, but nothing prevents it from being used for session sharing:

If squid is the frontend, it appends the client IP to the X-Forwarded-For HTTP header. With upstream_hash, this header can be used as the hash factor to direct requests to a specific backend:

See this document: http://www.sudone.com/nginx/nginx_url_hash.html

That document uses $request_uri as the factor. Change it slightly:

hash $http_x_forwarded_for;

This uses the X-Forwarded-For header as the factor. In newer nginx versions, cookie values can also be read, so you can change it to:

hash $cookie_jsessionid;
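A full upstream block using this directive might look like the following sketch (it assumes the third-party upstream_hash module is compiled in; the ports are illustrative):

```nginx
upstream backend {
    # Hash on the JSESSIONID cookie so that one session
    # always lands on the same backend server.
    hash $cookie_jsessionid;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}
```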

If PHP is configured to use cookie-less sessions, you can have nginx generate a cookie with its userid module; see the module's English documentation:
http://wiki.nginx.org/NginxHttpUserIdModule
You can also compile Yao Weibin's upstream_jvm_route module: http://code.google.com/p/nginx-upstream-jvm-route/

