Lanni: nginx Load Balancing

Source: Internet
Author: User
Tags: nginx reverse proxy, nginx load balancing


This article was sponsored by Xiuyi Linfeng and first published on the author's blog.

Today we will look at nginx load-balancing configuration. Nginx load balancing is implemented through the nginx upstream module together with the proxy_pass reverse proxy directive.

Note: there are three servers. The front-end server A runs nginx and does the load balancing; the backend consists of two servers with the same configuration. The goal is to access the domain name a.ilanni.com. The structure is as follows:
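In outline (per the description below):

client ──> server A (nginx, port 80, public network)
               ├──> server B (192.168.1.248:8080)
               └──> server C (192.168.1.249:8090)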

Server A opens port 80 to the public network. Server B and server C are the two identically configured backend servers: server B listens on port 8080 and server C on port 8090. When a client accesses the a.ilanni.com domain name, server A distributes the request to server B or server C according to the policy configured in the nginx upstream module.

Note that server B and server C should serve the same content. However, to make the experiment results visible, I configured different content on each. Server B's default page reads: The Server is web1_192.168.1.248:8080. Server C's default page reads: The Server is web2_192.168.1.249:8090. As follows:

By default, nginx load balancing distributes requests using the round robin method. The default weight is 1; the higher a server's weight, the greater its chance of being selected.

Configure nginx on server A as follows:

cat /usr/local/nginx/conf/nginx.conf | grep -v ^# | grep -v ^$

upstream a.ilanni.com {
    server 192.168.1.248:8080;
    server 192.168.1.249:8090;
}

server {
    listen 80;
    location / {
        proxy_pass http://a.ilanni.com;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Note: the first part, marked upstream, is used for load balancing. http://a.ilanni.com is the address we want to access; proxy_pass reverse-proxies those requests to the servers listed under upstream.

The second part is the server block that does the reverse proxying. Note that it listens on port 80, and I have not configured server_name in this server block. Configuring server_name also works; the final effect is the same. I have tested this.

The third part is a virtual host listening on port 8080, and here I have configured server_name. It is mainly used for comparison.
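The article does not reproduce that third block; a minimal sketch of what it might look like, assuming a default static document root, is:

server {
    listen 8080;
    server_name a.ilanni.com;
    location / {
        # hypothetical static root; the real block may differ
        root html;
        index index.html index.htm;
    }
}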

Start nginx on server A and access a.ilanni.com:8080. As follows:
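For a source-built nginx under /usr/local/nginx (which the configuration path above suggests), starting it would look something like:

/usr/local/nginx/sbin/nginx -t   # check the configuration syntax first
/usr/local/nginx/sbin/nginx      # start nginx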

We can see that nginx on server A is now accessible. Note that here we are accessing http://a.ilanni.com:8080.

Configure nginx on server B and server C as follows:
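The original listings are not shown here; a minimal sketch of what server B and server C might run, with the test pages placed in their document roots, is:

# server B (192.168.1.248)
server {
    listen 8080;
    location / {
        root html;        # html/index.html contains: The Server is web1_192.168.1.248:8080
        index index.html;
    }
}

# server C (192.168.1.249)
server {
    listen 8090;
    location / {
        root html;        # html/index.html contains: The Server is web2_192.168.1.249:8090
        index index.html;
    }
}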

After configuring nginx on server B and server C, start each one and access its nginx service, as follows:

We can see that nginx on server B and server C is now accessible. Now let's visit http://a.ilanni.com to see whether it behaves as we want. As follows:

We can see that a request to http://a.ilanni.com is now reverse-proxied to server B under upstream, and server B's content is displayed.

Refresh the page again, as shown below:

After refreshing the page, you will find that server C's content is displayed this time, which indicates that http://a.ilanni.com has been reverse-proxied to server C.

Refresh the page several more times and you will find that the displayed content alternates between server B and server C.
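Instead of refreshing a browser, you can watch the alternation from the command line (assuming curl is installed and a.ilanni.com resolves to server A):

for i in $(seq 1 6); do curl -s http://a.ilanni.com/; done
# expected output alternates between the two pages:
# The Server is web1_192.168.1.248:8080
# The Server is web2_192.168.1.249:8090
# ...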

Why?

In fact, I already mentioned this when introducing nginx upstream load balancing: in the absence of other configuration, the default round robin method is used with a default weight of 1.

That is:

upstream a.ilanni.com {
    server 192.168.1.248:8080;
    server 192.168.1.249:8090;
}

Server B and server C both have the same default weight of 1, so under nginx round robin they take turns serving requests.

Now let's set the weight of server B to 5 while server C keeps the default, and see what actually happens. The configuration is as follows:

upstream a.ilanni.com {
    server 192.168.1.248:8080 weight=5;
    server 192.168.1.249:8090;
}

Access http://a.ilanni.com again; server B's content is displayed first. Keep refreshing, and you will find that server C's content shows up only about once for every five requests that server B serves. The higher a server's weight, the more client requests are distributed to it.

Note that in the above experiment, server A, server B, and server C are on the same LAN, with only port 80 of server A open to the public network. The case where all three servers have public IP addresses, and how to use nginx reverse proxying in that setup, is what we will introduce in the next article.

At this point we have basically finished introducing nginx load balancing. The nginx upstream module can allocate requests in the following ways:

1) Round robin (the default): each request is distributed to a different backend server, one by one, in order of arrival. If a backend server goes down, it is removed automatically.

2) weight: specifies the polling probability; weight is proportional to the access ratio, which is useful when backend server performance is uneven. The higher the weight value, the more client requests the server is allocated. The default value is 1.

3) ip_hash: each request is allocated according to the hash of the client's IP address, so each visitor consistently reaches the same backend server; this can solve session-persistence problems (see the sketch after this list).

4) fair (third party): allocates requests based on backend server response time, giving priority to servers with short response times.

5) url_hash (third party): allocates requests based on the hash of the requested URL, so each URL is directed to the same backend server. This is effective when the backend servers cache content.
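For example, to pin each client to a single backend with ip_hash, the earlier upstream block would change to something like:

upstream a.ilanni.com {
    ip_hash;                      # hash on the client IP address
    server 192.168.1.248:8080;
    server 192.168.1.249:8090;
}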


How do large domestic websites distribute requests across multiple nginx instances and achieve load balancing?

The question does not give much information, so let me outline a basic architecture:
1. DNS servers: if funds are sufficient, we recommend a BGP data center with two or three DNS servers balanced against each other, usually running bind. If funds are tight, you can buy a professional DNS service, such as dnspod in China.
2. CDN servers: to save time at the beginning, you can buy service from a professional company such as chinacache, though development costs will increase. If you build your own, you may need servers in the data centers of different carriers (China Telecom, China Unicom, China Mobile) with dynamic DNS resolution. Ultra-large websites often use Squid, medium-to-large ones nginx, and varnish also sees internal use.
3. Front-end load balancing: with sufficient funds you can use hardware load balancers (on the order of 100,000 yuan per unit); if you already have a technical team, build your own front end with nginx/haproxy + keepalived. The balancing methods are flexible, with random, weight, ip, and url options (see the sketch after this list).
4. Synchronization depends on what is being synchronized. Ordinary files can be synchronized in real time; for databases you must choose a replication mode suited to the specific database type.
5. Backend application servers and database clusters should be planned according to traffic.
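As a sketch of point 3, a self-built nginx front end can combine weights, failure detection, and a standby server in one upstream block (the third IP below is hypothetical):

upstream backend {
    server 192.168.1.248:8080 weight=5 max_fails=3 fail_timeout=30s;  # removed for 30s after 3 failed attempts
    server 192.168.1.249:8090 max_fails=3 fail_timeout=30s;
    server 192.168.1.250:8080 backup;  # hypothetical standby, used only when the others are down
}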

Nginx load balancing problems

My load balancer is configured like this:
upstream abc#com {
    server 1.2.3.1:80;
    server 1.2.3.4:80;
}
server {
    listen 80;
    server_name abc#com;
    location / {
        proxy_pass http://abc#com/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

(Replace each # above with a dot.)
