Nginx Load Balancing

Source: Internet
Author: User
Tags: memcached, nginx, load balancing

Objective:

With the explosive growth of Internet traffic, load balancing is no longer an unfamiliar topic. As the name implies, load balancing spreads load across different service units, which both keeps the service available and keeps responses fast enough to give users a good experience. Rapidly growing request and data volumes have spawned a wide range of load-balancing products. Many dedicated hardware load balancers work well but are expensive, which has made load-balancing software popular, and Nginx is one of them.

The first public version of Nginx was released in 2004, and version 1.0 followed in 2011. Nginx is known for its stability, rich feature set, and low resource consumption, and judging by its current market share it is competing head-to-head with Apache. One feature that has to be mentioned is its load balancing, which has become a major reason many companies choose it. This article introduces Nginx's built-in load-balancing strategies and its extended (third-party) strategies from the source-code perspective, takes actual production use as a case study, compares each strategy, and provides a reference for Nginx users.

In this section, we'll discuss two problems encountered after adopting Nginx load balancing: the session problem, and file upload/download.

Typically, server load problems are resolved by distributing traffic across multiple servers. Common solutions include:
1. Link-based load distribution on a web portal (e.g. Sky Software Station, Huajun Software Park)
2. DNS round-robin
3. F5 hardware appliances
4. Nginx and other lightweight architectures

So let's look at how Nginx does load balancing. Nginx's upstream module currently supports the following allocation methods:
1. Round-robin (default)
Each request is assigned to a different back-end server in order of arrival; if a back-end server goes down, it is removed automatically.
2. weight
Specifies the polling probability; the weight is proportional to the access ratio and is used when back-end server performance is uneven.
3. ip_hash
Each request is allocated according to the hash of the client IP, so each visitor consistently reaches the same back-end server, which solves the session problem.
4. fair (third party)
Requests are allocated according to the back-end servers' response times, with shorter response times given priority.
5. url_hash (third party)
Requests are allocated by the hash of the requested URL, directing each URL to the same back-end server, which is more efficient when the back end caches.

How to implement load balancing with an upstream configuration:
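The original code listing is not preserved here; below is a minimal sketch of the two upstream styles discussed in this section. The backend addresses are placeholders, and only the host names www.test1.com and www.test2.com come from the text:

```nginx
# Illustrative only: backend addresses are placeholders.
upstream test1 {
    ip_hash;                     # same client IP -> same backend
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

upstream test2 {
    server 192.168.1.21:8080;    # default round-robin
    server 192.168.1.22:8080;
}

server {
    listen 80;
    server_name www.test1.com;
    location / {
        proxy_pass http://test1;
    }
}

server {
    listen 80;
    server_name www.test2.com;
    location / {
        proxy_pass http://test2;
    }
}
```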

When a request arrives for www.test1.com or www.test2.com, it is distributed to the server list configured in the corresponding upstream block. For test2, each request is distributed to a random server, which is the first case listed (round-robin). For test1, requests are distributed to a fixed server based on the hash of the client IP, meaning requests from the same IP are always transferred to the same server.

Depending on the servers' own performance differences and roles, different parameters can be set to control them:

down: indicates this server does not currently participate in the load (e.g. it is overloaded or under maintenance).

weight: the larger the weight, the greater the share of the load.

backup: the backup server receives no requests until the other servers fail or are down.

max_fails: after failures exceed this number, requests are forwarded to another server.

fail_timeout: the pause time during which the server is not retried after max_fails is exceeded.
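The parameters above combine in a single upstream block. A hedged example (the addresses, weights, and timings are made up for illustration):

```nginx
upstream backend {
    server 192.168.1.11 weight=3;                        # receives ~3x the traffic
    server 192.168.1.12 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.1.13 backup;                          # used only when the others fail
    server 192.168.1.14 down;                            # temporarily excluded from the load
}
```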

The above is a simple Nginx load-balancing configuration. Now let's continue this section's discussion:

I. The Session Problem

Once we set up a group of servers, requests to our web site are distributed among them. If, as with test2, every request can land on any server at random, then after you reach server A your next request may suddenly go to server B. A session established with server A will certainly not be recognized by server B. Let's look at the common solutions:
1. Cache the session or credential on a dedicated server
2. Save the session or credential in a database
3. Use Nginx ip_hash so that requests from the same IP stick to a fixed server

The first method, caching, is ideal, and cache access is relatively fast. However, every request must then hit the session server; doesn't that just shift the burden onto that single server?

The second method, saving to the database, has to control session expiry in addition to adding load on the database, so it eventually forces a transition to SQL Server load balancing, involving reads, writes, expiration, and synchronization.

The third, keeping a session on the same server via Nginx ip_hash, looks the most convenient and lightweight.
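The idea behind ip_hash can be sketched in a few lines of Python. This is a simplification, not Nginx's actual algorithm (which, for IPv4, hashes the first three octets and also accounts for weights and failed servers); the backend list is hypothetical:

```python
import hashlib

# Hypothetical backend list; any stable hash of the client address
# illustrates the principle of sticking a client to one server.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def pick_backend(client_ip: str) -> str:
    """Map a client IP to a fixed backend, so its session stays put."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# The same IP always lands on the same server across requests.
assert pick_backend("203.0.113.7") == pick_backend("203.0.113.7")
```

Because the mapping depends only on the client address, it needs no shared state between the load balancer's workers.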

Normally, if the architecture is simple, ip_hash can solve the session problem, but consider the following scenario.

If the requests ip_hash receives all come from a fixed proxy IP, and that proxy is heavily loaded, the one server that IP hashes to will come under excessive load, and ip_hash loses its load-balancing role.

If the cache can be shared and synchronized, we can solve the single-server overload problem with multiple session servers, and memcached can serve as that session cache server. A memcached session provider can also persist the session to the database. Why save it to the database through memcached rather than directly? Quite simply: if you save it directly to the database, every request must go back to the database to validate the session. Second, even if we build a cache layer in front of the database, that cache cannot be shared in a distributed way without overloading a single cache server. There are successful cases online of using memcached as a session cache; of course the database approach is also common, for example the open-source Discuz forum. Small-scale distributed caching is common too; single sign-on is a special case.
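The cache-in-front-of-database pattern described above can be sketched as follows. Plain dicts stand in for the shared memcached cluster and the session table, so the names and flow are purely illustrative:

```python
# Dicts stand in for a shared memcached cluster and a database session table.
cache = {}      # fast, shared by all web servers, may evict entries
database = {}   # authoritative store, slower to query

def save_session(session_id: str, data: dict) -> None:
    """Write-through: the database stays the source of truth."""
    database[session_id] = data
    cache[session_id] = data

def load_session(session_id: str):
    """Validate against the cache first; fall back to the database."""
    if session_id in cache:       # hit: no database round-trip
        return cache[session_id]
    data = database.get(session_id)
    if data is not None:
        cache[session_id] = data  # repopulate after eviction or restart
    return data
```

Any web server behind the load balancer can call `load_session`, so the session survives a request landing on a different server.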

II. File Upload and Download

With load balancing in place, besides the session problem we also run into file upload and download problems. If files are uploaded to different servers, a download will fail whenever the request lands on a server that doesn't have the file. Let's look at the common schemes:
1. A standalone file server
2. Compressing files into the database

Both schemes are commonly used. As for compressing files into the database: the older approach was to compress the file binary into a relational database; now that NoSQL is popular, and MongoDB makes file handling especially convenient, storing files in the database has become another viable option. After all, a plain file server is not as efficient, manageable, or secure as a database.
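Compressing a file binary into a relational database, as described above, can be sketched with Python's standard library. The table name and schema are made up for the example, and an in-memory SQLite database stands in for the real one:

```python
import sqlite3
import zlib

# In-memory SQLite stands in for the real relational database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")

def store_file(name: str, content: bytes) -> None:
    """Compress the file body and insert it as a BLOB."""
    conn.execute("INSERT INTO files VALUES (?, ?)",
                 (name, zlib.compress(content)))

def fetch_file(name: str) -> bytes:
    """Read the BLOB back and decompress it."""
    (blob,) = conn.execute("SELECT data FROM files WHERE name = ?",
                           (name,)).fetchone()
    return zlib.decompress(blob)

store_file("report.txt", b"hello " * 1000)
assert fetch_file("report.txt") == b"hello " * 1000
```

Because every web server reads from the same database, a download succeeds no matter which server the load balancer picks.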

This has been a casual discussion of the topic: some application trends, and a number of concrete solutions.
