"Important" Nginx implements HTTP load balancing and TCP load Balancing


Description: HTTP load balancing is configured in the http module and is very simple, while TCP load balancing uses the stream module, which sits parallel to http (supported since Nginx 1.9.0).

I. The simplest configuration of the two modules

1. HTTP load balancing:
http {
    include       mime.types;
    default_type  application/octet-stream;

    upstream live_node {
        server 127.0.0.1:8089;
        server 127.0.0.1:8088;
    }

    server {
        listen       80;
        server_name  localhost;
        location / {
            proxy_pass http://live_node;   # forward requests to the upstream group
        }
    }

    server {
        listen       8088;
        server_name  localhost;
        location / {
            root   /usr/local/nginx/html2;
            index  index.html index.htm;
        }
    }

    server {
        listen       8089;
        server_name  localhost;
        location / {
            root   /usr/local/nginx/html3;
            index  index.html index.htm;
        }
    }
}

With the above configuration, browser requests to port 80 are distributed across the two backend servers.

2. TCP Load Balancing:
stream {
    upstream rtmp {
        server 127.0.0.1:8089;   # the address to be accessed
        server 127.0.0.2:1935;
        server 127.0.0.3:1935;   # the ports to be proxied; here I proxy the RTMP module's port 1935
    }

    server {
        listen 1935;             # the port to listen on
        proxy_timeout 20s;
        proxy_pass rtmp;
    }
}

The above achieves simple RTMP stream forwarding.

II. Detailed explanation of TCP load balancing and HTTP load balancing

1. TCP load balancing:

  nginx-1.9.0 has been released; this version adds the stream module for generic TCP proxying and load balancing. The ngx_stream_core_module is available since version 1.9.0. However, it is not built by default and must be enabled at compile time with the --with-stream parameter.

(1) Configure the Nginx compile parameters

--with-http_stub_status_module --with-stream

(2) Compile and install: make, make install.

(3) Configure the nginx.conf file

stream {
    upstream rtmp {
        server 127.0.0.1:8089;   # the address to be accessed
        server 127.0.0.2:1935;
        server 127.0.0.3:1935;   # the ports to be proxied; here I proxy the RTMP module's port 1935
    }

    server {
        listen 1935;             # the port to listen on
        proxy_timeout 20s;
        proxy_pass rtmp;
    }
}

Create a top-level stream block (at the same level as http). Inside it, define an upstream group named rtmp made up of several servers to achieve load balancing, then define a server that listens for TCP connections (for example, on port 1935) and proxies them to the rtmp upstream group. Load-balancing methods and per-server parameters such as connection limits and weights are configured within the upstream group.
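The per-server parameters mentioned above can be sketched as follows. The weight and max_conns values here are illustrative assumptions, not part of the original configuration; note that max_conns requires Nginx 1.11.5 or later (earlier it was available only in NGINX Plus):

```nginx
stream {
    upstream rtmp {
        server 127.0.0.1:8089 weight=2;        # receives roughly twice as many connections
        server 127.0.0.2:1935 max_conns=100;   # cap on simultaneous connections (1.11.5+)
        server 127.0.0.3:1935;                 # defaults: weight=1, no connection cap
    }
}
```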

First, create a server group to use as the TCP load-balancing group: define an upstream block in the stream context and add servers to it with the server directive, specifying the IP address or host name (which may resolve to multiple addresses) and port number. The following example builds a group called rtmp, with two servers listening on port 1935 and one server listening on port 8089.

upstream rtmp {
    server 127.0.0.1:8089;   # the address to be accessed
    server 127.0.0.2:1935;
    server 127.0.0.3:1935;   # the ports to be proxied; here I proxy the RTMP module's port 1935
}

Note: you cannot specify a protocol for each server, because the stream block establishes TCP as the protocol for the entire group.

Configuring the reverse proxy enables Nginx to forward TCP requests from a client to the load-balancing group (for example, the rtmp group). In each server configuration block, set the listening port (the port the client connects to; for the RTMP streaming protocol used here, that is 1935) and a proxy_pass directive that tells Nginx which upstream group to send the TCP traffic to. Below, we send the TCP traffic to the rtmp group.

server {
    listen 1935;             # the port to listen on
    proxy_timeout 20s;
    proxy_pass rtmp;
}

Of course, we can also proxy to a single server directly:

server {
    listen 1935;                 # the port to listen on
    proxy_timeout 20s;
    proxy_pass 127.0.0.3:1935;   # the port to be proxied; here I proxy the RTMP module's port 1935
}

(4) Changing the load-balancing method:

By default, Nginx load-balances with a round-robin algorithm, routing each request in turn to the server ports configured in the upstream group. Because round robin is the default method, there is no directive for it; simply create an upstream configuration group in the stream context and add servers to it.
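As a minimal sketch of the default described above (addresses reused from the earlier examples), no method directive is needed:

```nginx
stream {
    upstream rtmp {
        # no balancing directive here, so Nginx uses round robin by default
        server 127.0.0.1:8089;
        server 127.0.0.2:1935;
        server 127.0.0.3:1935;
    }
}
```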

1. least-connected: for each request, Nginx Plus selects the server with the fewest active connections:

upstream rtmp {
    least_conn;
    server 127.0.0.1:8089;   # the address to be accessed
    server 127.0.0.2:1935;
    server 127.0.0.3:1935;   # the ports to be proxied; here I proxy the RTMP module's port 1935
}

2. least time: for each connection, Nginx Plus selects the server using two criteria: the lowest average latency, calculated from the parameter specified on the least_time directive:

    • connect: the time it takes to connect to the server
    • first_byte: the time to receive the first byte of the response
    • last_byte: the time to receive the complete response

and the lowest number of active connections:
upstream rtmp {
    least_time first_byte;
    server 127.0.0.1:8089;   # the address to be accessed
    server 127.0.0.2:1935;
    server 127.0.0.3:1935;   # the ports to be proxied; here I proxy the RTMP module's port 1935
}

3. ordinary hash: Nginx Plus selects the server by hashing a user-defined key, for example the client IP address, $remote_addr:

upstream rtmp {
    hash $remote_addr consistent;
    server 127.0.0.1:8089;   # the address to be accessed
    server 127.0.0.2:1935;
    server 127.0.0.3:1935;   # the ports to be proxied; here I proxy the RTMP module's port 1935
}

2. HTTP load balancing:

Reference article:

http://freeloda.blog.51cto.com/2033581/1288553
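The referenced article covers HTTP load balancing in depth. As a minimal sketch (server addresses reused from section I; the choice of least_conn is an illustrative assumption), the same balancing-method directives apply inside an http-context upstream:

```nginx
http {
    upstream live_node {
        least_conn;                # same method directives as in the stream context
        server 127.0.0.1:8089;
        server 127.0.0.1:8088;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://live_node;   # HTTP proxying requires the scheme prefix
        }
    }
}
```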

"Important" Nginx implements HTTP load balancing and TCP load Balancing

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.