Nginx Load Balancing

Source: Internet
Author: User
Tags: tomcat, server, nginx, load balancing

1. Nginx Proxy Service Overview

Proxies are familiar from daily life; we often use proxy-style services such as rental agencies and purchasing agents.

On the Internet, when a client cannot send a request to a server directly, a proxy service is needed to relay communication between client and server.

Nginx, acting as a proxy service, can proxy many protocols; here we focus mainly on HTTP proxying.

Forward proxy (for clients on an internal network): client <--> proxy --> server

Reverse proxy: client --> proxy <--> server, with the proxy standing in for the server side

Difference Between Forward and Reverse Proxy

The difference lies in which side is being proxied. A forward proxy acts on behalf of the client, for example client machines inside a company accessing Baidu through the proxy. A reverse proxy acts on behalf of the server, for example users on the external network accessing the company's internal web servers through the proxy.


1.1 Nginx Proxy Configuration Syntax

1. proxy_pass, the core proxy directive

Syntax:  proxy_pass URL;
Default: —
Context: location, if in location, limit_except

Example URL values:
proxy_pass http://localhost:8000/uri/;
proxy_pass http://192.168.56.11:8000/uri/;
proxy_pass http://unix:/tmp/backend.socket:/uri/;
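One behavior worth noting, not spelled out in the original: whether or not proxy_pass includes a URI part changes how the matched location prefix is forwarded. A minimal sketch (addresses and paths are illustrative):

```nginx
location /app/ {
    # proxy_pass with a URI ("/"): the part of the request matching
    # /app/ is replaced, so /app/x is forwarded to the backend as /x
    proxy_pass http://127.0.0.1:8000/;
}
location /api/ {
    # proxy_pass without a URI: the request path is forwarded
    # unchanged, so /api/x reaches the backend as /api/x
    proxy_pass http://127.0.0.1:8000;
}
```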

2. proxy_redirect rewrites the Location header returned by the backend on URL redirects [infrequently used]

Syntax:  proxy_redirect default;
         proxy_redirect off;
         proxy_redirect redirect replacement;
Default: proxy_redirect default;
Context: http, server, location

3. Add the request header information to the backend server

Syntax:  proxy_set_header field value;
Default: proxy_set_header Host $proxy_host;
         proxy_set_header Connection close;
Context: http, server, location

# If the user requests with Host www.bgx.com, pass www.bgx.com to the backend unchanged
proxy_set_header Host $http_host;
# Put the value of $remote_addr (the client IP) into the X-Real-IP header
proxy_set_header X-Real-IP $remote_addr;
# When the client reaches the backend through the proxy, the backend can log
# the real client address from this header
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

4. Timeouts for the proxy-to-backend TCP connection, response, and data return

# Timeout for nginx to establish a connection with the backend server (connect timeout)
Syntax:  proxy_connect_timeout time;
Default: proxy_connect_timeout 60s;
Context: http, server, location

# Time nginx waits for a response from the backend server
Syntax:  proxy_read_timeout time;
Default: proxy_read_timeout 60s;
Context: http, server, location

# Timeout for the backend server to send data back to the nginx proxy
Syntax:  proxy_send_timeout time;
Default: proxy_send_timeout 60s;
Context: http, server, location

5. proxy_buffering and the proxy buffers

# nginx buffers the backend response and streams it to the client as it arrives,
# rather than waiting to receive the whole response first
Syntax:  proxy_buffering on | off;
Default: proxy_buffering on;
Context: http, server, location

# Size of the buffer nginx uses to store the response header from the backend
Syntax:  proxy_buffer_size size;
Default: proxy_buffer_size 4k|8k;
Context: http, server, location

# Number and size of the proxy buffers
Syntax:  proxy_buffers number size;
Default: proxy_buffers 8 4k|8k;
Context: http, server, location

6. Commonly used proxy optimization settings. Write them to a separate file and pull it in with an include statement wherever needed:

[root@nginx ~]# vim /etc/nginx/proxy_params
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffering on;
proxy_buffer_size 32k;
proxy_buffers 4 128k;

How the proxy configuration is called from a location block:

location / {
    proxy_pass http://127.0.0.1:8080;
    include proxy_params;
}
1.2 Nginx Reverse Proxy Example

1. Configure the reverse proxy

[root@nginx ~]# cat /etc/nginx/conf.d/proxy.conf
server {
    listen 80;
    server_name nginx.bgx.com;
    index index.html;

    location / {
        proxy_pass http://10.0.0.7:8080;
        include proxy_params;
    }
}

2. Configure the backend web service that actually provides the content

[root@nginx ~]# cat /etc/nginx/conf.d/images.conf
server {
    listen 8080;
    server_name nginx.bgx.com;

    location / {
        root /code;
        index index.html;
    }
}
2. Nginx Load Balancing

Web servers face users directly and often have to handle a large number of concurrent requests, which a single server can hardly sustain. Using multiple servers as a cluster, with nginx doing load balancing at the front end, the requests are spread across the backend server cluster. This greatly improves the system's throughput, request performance, and disaster tolerance.

2.1 Nginx Load Balancing by Layer

Layer-4 load balancing: distributes requests at the transport layer, by IP address and port.

Layer-7 load balancing: distributes requests at the application layer (HTTP); this is the most common way nginx is used.

2.2 Nginx Load Balancing Configuration Scenario

Nginx load balancing is built on the proxy module and requires proxy_pass.
Nginx load balancing forwards client requests to a pool of upstream virtual servers.

Nginx upstream virtual server pool configuration syntax

Syntax:  upstream name { ... }
Default: -
Context: http

# upstream example
upstream backend {
    server backend1.example.com       weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080   backup;
}
server {
    location / {
        proxy_pass http://backend;
    }
}

1. Create the corresponding HTML files on the web server

[root@nginx ~]# mkdir /soft/{code1,code2,code3} -p
[root@nginx ~]# cat /soft/code1/index.html

2. Set up the corresponding releserver.conf configuration file

[root@nginx ~]# cat /etc/nginx/conf.d/releserver.conf
server {
    listen 8081;
    root /code1;
    index index.html;
}
server {
    listen 8082;
    root /code2;
    index index.html;
}
server {
    listen 8083;
    root /code3;
    index index.html;
}

3. Configuring Nginx Load Balancing

[root@nginx ~]# cat /etc/nginx/conf.d/proxy.conf
upstream node {
    server 10.0.0.7:8081;
    server 10.0.0.7:8082;
    server 10.0.0.7:8083;
}
server {
    server_name load.bgx.com;
    listen 80;

    location / {
        proxy_pass http://node;
        include proxy_params;
    }
}
2.3 Nginx Load-Balanced Backend Status

States of the back-end web servers in front-end nginx load-balancing scheduling:

State           Description
down            The server temporarily does not participate in load balancing
backup          Reserved backup server
max_fails       Number of failed requests allowed
fail_timeout    How long the server is suspended after max_fails failures
max_conns       Limit on the maximum number of accepted connections

1. Test the down state: the server does not participate in scheduling

upstream load_pass {
    # does not participate in any scheduling; effectively commented out
    server 10.0.0.7:80 down;
}

2. Test backup and down status

upstream load_pass {
    server 10.0.0.7:80 down;
    server 10.0.0.8:80 backup;
    server 10.0.0.9:80 max_fails=1 fail_timeout=10s;
}
location / {
    proxy_pass http://load_pass;
    include proxy_params;
}
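The state table above also lists max_conns, which has no example in the original. A minimal sketch (the address and the limit are illustrative; in open source nginx this parameter is available since version 1.11.5):

```nginx
upstream load_pass {
    # limit this server to at most 100 simultaneous connections
    server 10.0.0.7:80 max_conns=100;
}
```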
2.4 Nginx Load Balancing Scheduling Algorithms
Algorithm     Description
round-robin   Requests are assigned to the backend servers one by one in order (default)
weight        Weighted round-robin; the higher the weight, the more requests the server receives
ip_hash       Requests are assigned by a hash of the client IP, so the same IP always reaches the same backend
url_hash      Requests are assigned by a hash of the requested URL, so each URL is directed to the same backend
least_conn    The request is sent to whichever server has the fewest connections

1. Nginx load balancer round-robin [wrr] configuration

upstream load_pass {
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}
# wrr round-robin is the default

2. Nginx load balancer weighted round-robin [weight] configuration

upstream load_pass {
    server 10.0.0.7:80 weight=5;
    server 10.0.0.8:80;
}

3. Nginx load balancer ip_hash configuration; note it cannot be combined with weight.

# If all clients come in through the same proxy, one backend server
# may end up with too many connections
upstream load_pass {
    ip_hash;
    server 10.0.0.7:80 weight=5;
    server 10.0.0.8:80;
}
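The scheduling table also lists least_conn and url_hash, for which the original gives no example. Minimal sketches (addresses are illustrative; note that in open source nginx, url_hash-style scheduling is written with the generic hash directive on $request_uri):

```nginx
# least_conn: send each request to the server with the fewest active connections
upstream load_least {
    least_conn;
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}

# url_hash equivalent: hash on the request URI so the same URL
# always maps to the same backend
upstream load_url {
    hash $request_uri consistent;
    server 10.0.0.7:80;
    server 10.0.0.8:80;
}
```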
2.5 Nginx Load Balancing TCP Practice

Configure nginx layer-4 load balancing to meet the following requirements:

1. Accessing port 5555 on the load balancer is actually served by port 22 (SSH) on the backend web01.
2. Accessing port 6666 on the load balancer is actually served by port 3306 on the backend MySQL server.

Nginx four-layer load balancing example

stream {
    upstream backend {
        hash $remote_addr consistent;
        server backend1.example.com:12345 weight=5;
        server 127.0.0.1:12345            max_fails=3 fail_timeout=30s;
        server unix:/tmp/backend3;
    }
    server {
        listen 12345;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass backend;
    }
}

Practice Nginx Four-layer load balancing

[root@lb01 ~]# mkdir -p /etc/nginx/conf.c
[root@lb01 ~]# vim /etc/nginx/nginx.conf
# add the include below the events block and above the http block
include /etc/nginx/conf.c/*.conf;

# write the layer-4 proxy configuration
[root@lb01 ~]# cd /etc/nginx/conf.c/
[root@lb01 conf.c]# cat stream.conf
stream {
    # 1. define the virtual server pools
    upstream ssh {
        server 172.16.1.7:22;
    }
    upstream mysql {
        server 172.16.1.51:3306;
    }
    # 2. call the virtual server pools
    server {
        listen 5555;
        proxy_connect_timeout 1s;
        proxy_timeout 300s;
        proxy_pass ssh;
    }
    server {
        listen 6666;
        proxy_connect_timeout 1s;
        proxy_timeout 300s;
        proxy_pass mysql;
    }
}
[root@lb01 conf.c]# systemctl restart nginx
3. Nginx Static and Dynamic Separation

Dynamic and static requests are separated through middleware, splitting the resources, which reduces unnecessary request overhead and request latency.
Benefit: after the separation, static resources remain available even if the dynamic service becomes unavailable.


3.1 Nginx Static and Dynamic Separation Case

0. Environment preparation

System       Role                Service          Address
CentOS7.5    Load balancing      Nginx Proxy      10.0.0.5
CentOS7.5    Static resources    Nginx Static     10.0.0.7
CentOS7.5    Dynamic resources   Tomcat Server    10.0.0.8

1. Configure static resources on the 10.0.0.7 server

[root@web01 conf.d]# cat ds_oldboy.conf
server {
    listen 80;
    server_name ds.oldboy.com;
    root /soft/code;
    index index.html;

    location ~* .*\.(png|jpg|gif)$ {
        root /soft/code/images;
    }
}
# prepare the directory and the related static images
[root@web01 ~]# mkdir /soft/code/images -p
[root@web01 ~]# wget -O /soft/code/images/nginx.png http://nginx.org/nginx.png
[root@web01 ~]# systemctl restart nginx

2. Configure dynamic resources on the 10.0.0.8 server

[root@tomcat ~]# yum install -y tomcat
[root@tomcat ~]# systemctl start tomcat
[root@tomcat ~]# mkdir /usr/share/tomcat/webapps/ROOT
[root@tomcat ~]# vim /usr/share/tomcat/webapps/ROOT/java_test.jsp
<%@ page language="java" import="java.util.*" pageEncoding="utf-8"%>
<HTML>
    <HEAD>
        <TITLE>JSP Test Page</TITLE>
    </HEAD>
    <BODY>
      <%
        Random rand = new Random();
        out.println("

3. Configure scheduling on Load Balancer 10.0.0.5 to access JSP and PNG

[root@lb01 conf.d]# cat ds_proxy.conf
upstream static {
    server 10.0.0.7:80;
}
upstream java {
    server 10.0.0.8:8080;
}
server {
    listen 80;
    server_name ds.oldboy.com;

    location / {
        root /soft/code;
        index index.html;
    }
    location ~ .*\.(png|jpg|gif)$ {
        proxy_pass http://static;
        include proxy_params;
    }
    location ~ .*\.jsp$ {
        proxy_pass http://java;
        include proxy_params;
    }
}
[root@lb01 conf.d]# systemctl restart nginx

4. Test accessing static resources through the load balancer
5. Test accessing dynamic resources through the load balancer

6. Integrate dynamic and static resources in an HTML file on the load balancer 10.0.0.5

[root@lb01 ~]# mkdir /soft/code -p
[root@lb01 ~]# cat /soft/code/index.html
<html lang="en">

7. Test whether the dynamic and static resources load normally in the HTML file.
8. If you stop nginx on the static server with systemctl stop nginx, the static content becomes inaccessible while the dynamic content keeps working.
9. If you stop Tomcat with systemctl stop tomcat, the static content is still accessible and the dynamic content can no longer be requested.

3.2 Nginx Separation by Browser and Device Case

Access yields different results depending on the browser and the phone being used.

Route different browsers to different backends based on the User-Agent header:

http {
    ...
    upstream firefox {
        server 172.31.57.133:80;
    }
    upstream chrome {
        server 172.31.57.133:8080;
    }
    upstream iphone {
        server 172.31.57.134:8080;
    }
    upstream android {
        server 172.31.57.134:8081;
    }
    upstream default {
        server 172.31.57.134:80;
    }
    ...

    # the server routes to different pages based on the User-Agent
    server {
        listen 80;
        server_name www.xuliangwei.com;

        location / {
            # Safari browser access
            if ($http_user_agent ~* "Safari") {
                proxy_pass http://default;
            }
            # Firefox browser access
            if ($http_user_agent ~* "Firefox") {
                proxy_pass http://firefox;
            }
            # Chrome browser access
            if ($http_user_agent ~* "Chrome") {
                proxy_pass http://chrome;
            }
            # iPhone device access
            if ($http_user_agent ~* "iphone") {
                proxy_pass http://iphone;
            }
            # Android device access
            if ($http_user_agent ~* "Android") {
                proxy_pass http://android;
            }
            # default rule for other browsers
            proxy_pass http://default;
            include proxy.conf;
        }
    }
}

Proxy to different servers depending on which directory is accessed

# by default, requests go to the dynamic pool; static requests go to
# static_pools, and uploads go to upload_pools
upstream static_pools {
    server 10.0.0.9:80  weight=1;
}
upstream upload_pools {
    server 10.0.0.10:80  weight=1;
}
upstream default_pools {
    server 10.0.0.9:8080  weight=1;
}
server {
    listen      80;
    server_name www.xuliangwei.com;

    # url: http://www.xuliangwei.com/
    location / {
        proxy_pass http://default_pools;
        include proxy.conf;
    }
    # url: http://www.xuliangwei.com/static/
    location /static/ {
        proxy_pass http://static_pools;
        include proxy.conf;
    }
    # url: http://www.xuliangwei.com/upload/
    location /upload/ {
        proxy_pass http://upload_pools;
        include proxy.conf;
    }
}
# Scheme 2: implemented with if statements
if ($request_uri ~* "^/static/(.*)$") {
    proxy_pass http://static_pools/$1;
}
if ($request_uri ~* "^/upload/(.*)$") {
    proxy_pass http://upload_pools/$1;
}
location / {
    proxy_pass http://default_pools;
    include proxy.conf;
}
