What About nginx Load Balancing?

Source: Internet
Author: User
Tags: nginx, server

This section describes the problems you run into once nginx load balancing is in place:

    • Session Problems
    • File Upload/download

Load problems are generally solved by splitting traffic across multiple servers. Common solutions include:

    • Splitting the portal into sub-sites, each carrying part of the load (as download portals such as the Sky Software Station and Huajun Software Park do)
    • DNS round robin
    • F5 hardware load balancers
    • Lightweight software such as nginx

Let's take a look at how nginx achieves load balancing. The nginx upstream module currently supports the following allocation methods:
1. Round robin (default)
Requests are distributed to the backend servers one by one in order. If a backend server goes down, it is removed automatically.
2. weight
Specifies the round-robin probability. The weight is proportional to the share of requests a server receives; use it when backend server performance is uneven.
3. ip_hash
Each request is allocated according to the hash of the client IP address, so a given visitor always reaches the same backend server. This can solve the session problem.
4. fair (third party)
Requests are allocated based on backend response time; servers with shorter response times are preferred.
5. url_hash (third party)
Requests are allocated based on the hash of the requested URL, so each URL is always directed to the same backend server. This is effective when the backend servers cache content.

How to Implement Load Balancing in the upstream Configuration

http {

    upstream www.test1.com {
        ip_hash;
        server 172.16.125.76:8066 weight=10;
        server 172.16.125.76:8077 down;
        server 172.16.0.18:8066 max_fails=3 fail_timeout=30s;
        server 172.16.0.18:8077 backup;
    }

    upstream www.test2.com {
        server 172.16.0.21:8066;
        server 192.168.76.98:8066;
    }

    server {
        listen 80;
        server_name www.test1.com;

        location / {
            proxy_pass http://www.test1.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    server {
        listen 80;
        server_name www.test2.com;

        location / {
            proxy_pass http://www.test2.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

When a request arrives for www.test1.com or www.test2.com, it is distributed to the server list defined in the corresponding upstream block. test2 uses the default round-robin policy of the first case, so successive requests go to different servers. test1 distributes requests by the hash of the client IP address, so all requests from a given IP are forwarded to the same server.

You can tune per-server behavior with the following parameters, according to differences in server performance and role:

    • down: the server is temporarily taken out of rotation and receives no load
    • weight: the larger the weight, the larger the server's share of the load
    • backup: the server receives requests only when all the other (non-backup) servers are busy or down
    • max_fails: after this many failed attempts, requests to the server are suspended and forwarded to another server
    • fail_timeout: the window in which max_fails failures are counted, and how long the server is considered unavailable once that count is reached
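The interplay of max_fails and fail_timeout can be sketched as follows. This is a deliberate simplification of the semantics, not nginx's actual implementation; the class and method names are illustrative.

```python
# Minimal sketch of max_fails/fail_timeout semantics: after max_fails
# failures, the server sits out for fail_timeout seconds, during which
# nginx forwards its share of requests to the other upstream servers.
class Backend:
    def __init__(self, name, max_fails=3, fail_timeout=30.0):
        self.name = name
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0          # failures counted so far
        self.down_until = 0.0   # timestamp until which the server is skipped

    def available(self, now):
        # The server is eligible for requests once the timeout has passed.
        return now >= self.down_until

    def record_failure(self, now):
        self.fails += 1
        if self.fails >= self.max_fails:
            # Too many failures: take the server out for fail_timeout seconds.
            self.down_until = now + self.fail_timeout
            self.fails = 0
```

With max_fails=3 and fail_timeout=30, three failures at time 100 make the server unavailable until time 130, after which it is retried.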

The above is a simple nginx load-balancing configuration. Let's continue with the two problems raised at the start of this section:

I. Session Problems

Once we have a set of load-balanced servers, requests to our web site are distributed among them. With the test2 configuration, any server may handle a request, so after you reach server A, the next request may suddenly be forwarded to server B. The session established with server A cannot be found on server B, and the request fails. Let's take a look at the common solutions:

    • Cache the session or credential on an independent server
    • Store the session or credential in a database
    • Use nginx ip_hash so that all requests from the same IP address are pinned to a fixed server

The first method, caching, is ideal, and cache reads are fast. But if every request has to consult the session server, doesn't that just shift the load onto the session server?

The second method stores sessions in the database. Besides having to manage session expiry, it increases the burden on the database, which eventually needs load balancing of its own, and that brings its own read, write, expiry, and synchronization problems.

The third method keeps each session on the same server via nginx ip_hash, which looks the most convenient and lightweight.

Under normal circumstances, if the architecture is simple, ip_hash can solve the session problem, but let's take a look at the following situation:

Now suppose all requests reach nginx through the same forward proxy, and therefore from one fixed IP address. ip_hash then directs every request to a single backend server; if the proxy's traffic is heavy, that server's load is too high, and ip_hash loses its load-balancing effect.
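The proxy problem can be illustrated with a toy sketch. This is not nginx's real hash function; the one accurate detail it models is that nginx's ip_hash keys on the first three octets of the client's IPv4 address, so every client behind one proxy (one source IP, or even one /24) maps to the same backend.

```python
# Toy stand-in for ip_hash: key on the first three octets of the client
# IP, then map the key onto the backend list. Clients sharing a proxy
# present the same source IP, hence the same key, hence the same backend.
def pick_backend(client_ip, backends):
    key = ".".join(client_ip.split(".")[:3])        # first three octets
    return backends[sum(key.encode()) % len(backends)]  # simplified hash

backends = ["172.16.125.76:8066", "172.16.0.18:8066", "172.16.0.21:8066"]

# Two different clients behind the same proxy/NAT land on one backend:
print(pick_backend("10.0.0.5", backends) == pick_backend("10.0.0.99", backends))
```

Sessions stay consistent, but all of that proxy's traffic concentrates on a single server.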

If the cache can be synchronized and shared, multiple session servers can solve the single-point overload problem. Can memcached serve as the session cache server? A memcached session provider can persist sessions to the database. Why go through memcached rather than write to the database directly? Simple: saving directly to the database means validating the session against the database on every request; and even if we put a cache layer in front of the database, that cache either cannot be shared across servers or again overloads a single cache server. There are successful cases of memcached-backed session caching on the Internet, and database-backed implementations remain quite common, for example the open-source Discuz!NT forum. Small-scale distributed cache implementations are common as well; single sign-on is a related special case.
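The cache-aside pattern described above can be sketched as follows. This is a hypothetical illustration: plain dicts stand in for a shared memcached cluster and the session database, and the names (load_session, sess-42) are made up, not a real API.

```python
cache = {}                                  # stands in for shared memcached
database = {"sess-42": {"user": "alice"}}   # stands in for the session table

def load_session(session_id):
    if session_id in cache:          # fast path: shared cache hit
        return cache[session_id]
    data = database.get(session_id)  # miss: fall back to the database
    if data is not None:
        cache[session_id] = data     # populate the cache for later requests
    return data

print(load_session("sess-42"))  # first call reads the database, then caches
print("sess-42" in cache)       # subsequent requests skip the database
```

Because the cache is shared, it does not matter which load-balanced web server handles the next request, which is exactly what ip_hash alone cannot guarantee under a single proxy IP.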

II. File Upload and Download

With load balancing in place, besides session problems we also run into file upload and download. Files cannot simply be uploaded to whichever server happens to handle the request, or downloading them through another server will fail. Let's take a look at the solutions:

    • An independent file server
    • Storing files in a database

Both solutions are commonly used. Consider storing files in a database: we used to compress binary files into relational databases; now that NoSQL is popular and MongoDB handles files conveniently (e.g. via GridFS), storing files in a database has another option. After all, a plain file server trails a database in efficiency, management, and security.
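The independent-file-server option can be sketched in the nginx configuration itself. This is a hypothetical layout, assuming a dedicated file-serving host behind an upstream; the address and the /files/ path are illustrative, not from this article.

```nginx
upstream fileserver {
    server 172.16.0.30:8080;    # illustrative dedicated file host
}

server {
    listen 80;
    server_name www.test1.com;

    # Uploads and downloads all go to the one file server, so they no
    # longer depend on which load-balanced web node served the page.
    location /files/ {
        proxy_pass http://fileserver;
        proxy_set_header Host $host;
        client_max_body_size 50m;   # permit larger uploads
    }
}
```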

That's all for this topic; these are simply trends in some applications and one more implementation option to consider.
