A method for configuring TCP load balancing in an nginx server


By default, nginx does not support TCP load balancing; it must be patched. (The proxy works connection by connection: nginx accepts a connection from the client, then opens a new connection from itself to a back-end server.) The specific configuration is as follows:

I. Installation of Nginx
1. Download Nginx

# wget http://nginx.org/download/nginx-1.2.4.tar.gz

2. Download TCP Module patch

# wget https://github.com/yaoweibin/nginx_tcp_proxy_module/tarball/master

Source page: https://github.com/yaoweibin/nginx_tcp_proxy_module

3. Install Nginx

# tar xvf nginx-1.2.4.tar.gz
# tar xvf yaoweibin-nginx_tcp_proxy_module-v0.4-45-ga40c99a.tar.gz
# cd nginx-1.2.4
# patch -p1 < ../yaoweibin-nginx_tcp_proxy_module-a40c99a/tcp.patch
# ./configure --prefix=/usr/local/nginx --with-pcre=../pcre-8.30 --add-module=../yaoweibin-nginx_tcp_proxy_module-a40c99a/
# make
# make install
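
If the build succeeds, it is worth confirming that the TCP module was actually compiled in (this assumes the pcre-8.30 source tree referenced by --with-pcre was already unpacked next to the nginx source):

# /usr/local/nginx/sbin/nginx -V

The output lists the nginx version and the configure arguments, which should include the --add-module path used above.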

II. Modify the configuration file
Edit the nginx.conf configuration file:

# cd /usr/local/nginx/conf
# vim nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

tcp {
    upstream mssql {
        server 10.0.1.201:1433;
        server 10.0.1.202:1433;
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 1433;
        server_name 10.0.1.212;
        proxy_pass mssql;
    }
}

III. Start Nginx

# cd /usr/local/nginx/sbin/
# ./nginx
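
It can also be worth validating the configuration syntax before starting; nginx supports this with the -t flag:

# ./nginx -t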

Check that port 1433 is listening:

# lsof -i :1433
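
If lsof is not installed, netstat gives an equivalent check (the -p flag, which shows the owning process, assumes a Linux netstat):

# netstat -antp | grep 1433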

IV. Test

# telnet 10.0.1.201 1433
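
Note that the command above connects to one back-end server directly. To exercise the proxy itself, connect to the address and port nginx listens on (10.0.1.212:1433 in the configuration above):

# telnet 10.0.1.212 1433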

V. Test with a SQL Server client tool

VI. How TCP load balancing works
When nginx accepts a new client connection on the listening port, it immediately runs the scheduling algorithm to pick the back-end server IP for that connection, then opens a new upstream connection to the chosen server.

TCP load balancing supports nginx's existing scheduling algorithms, including round robin (the default) and hash (consistent selection). The scheduling decision also takes input from the health check module, so that a suitable, healthy upstream server is picked for each connection. With the hash scheduling method, you can hash on $remote_addr (the client IP) to get simple session persistence: connections from the same client IP always land on the same back-end server.
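
As a sketch of that persistence setup, here is what it looks like in the syntax of nginx's later stream module, whose behavior matches the description here; the tcp {} module patched in above has its own directive set, so treat this as illustrative rather than as that module's exact syntax:

stream {
    upstream mssql {
        hash $remote_addr consistent;   # same client IP always maps to the same back-end
        server 10.0.1.201:1433;
        server 10.0.1.202:1433;
    }
    server {
        listen 1433;
        proxy_pass mssql;
    }
}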

As with other upstream modules, the TCP module supports custom forwarding weights (e.g. weight=2), as well as the backup and down parameters for taking a failed upstream server out of rotation. The max_conns parameter limits the number of concurrent TCP connections to a server; set it according to the server's capacity, especially in high-concurrency scenarios, to get overload protection.
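
Put together, those per-server parameters look like this (stream-module syntax again; the weight and max_conns values are made up for the example):

upstream mssql {
    server 10.0.1.201:1433 weight=2 max_conns=300;   # receives roughly twice the connections
    server 10.0.1.202:1433 weight=1 max_conns=300;
    server 10.0.1.203:1433 backup;                   # only used when the others are down
}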

Nginx monitors both the client connection and the upstream connection; as soon as data arrives, it reads it and immediately pushes it to the other side, without inspecting the data inside the TCP connection. Nginx maintains an in-memory buffer for client and upstream writes; if the client or the server transmits a large amount of data, the buffer grows accordingly.
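
In the stream module this buffer is tuned with proxy_buffer_size (the directive name in the patched tcp module may differ, so this is an assumption):

server {
    listen 1433;
    proxy_buffer_size 16k;   # per-connection buffer for data relayed in each direction
    proxy_pass mssql;
}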

The connection is closed when nginx receives a close notification from either side, or when the TCP connection has been idle longer than the configured proxy_timeout. For long-lived TCP connections, choose a suitable proxy_timeout and pay attention to the socket's SO_KEEPALIVE option (the so_keepalive setting) to prevent premature disconnection.
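
A sketch of both settings, using the stream-module parameter syntax (the so_keepalive idle:interval:count values are illustrative):

server {
    listen 1433 so_keepalive=30m::10;   # TCP keepalive: 30 min idle, default interval, 10 probes
    proxy_timeout 10m;                  # close the session after 10 minutes of idleness
    proxy_pass mssql;
}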

PS: Health checks on back-end services

The TCP load balancing module supports built-in health detection: an upstream server is considered failed if it rejects a TCP connection or does not accept one within the configured proxy_connect_timeout. In that case, nginx immediately tries another healthy server in the upstream group, and the connection failure is recorded in the nginx error log.

If a server fails repeatedly (exceeding the max_fails or fail_timeout thresholds), nginx kicks it out of rotation. Sixty seconds after the server has been kicked out, nginx occasionally tries to connect to it again to detect whether it has recovered. If it has, nginx adds it back to the upstream group and slowly ramps up its share of connection requests.
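
As a sketch, these passive-check knobs map onto configuration like this (stream-module syntax; the values are examples, not recommendations):

upstream mssql {
    server 10.0.1.201:1433 max_fails=3 fail_timeout=30s;   # 3 failures within 30s marks it down for 30s
    server 10.0.1.202:1433 max_fails=3 fail_timeout=30s;
}
server {
    listen 1433;
    proxy_connect_timeout 2s;   # a connect not completed within 2s counts as a failure
    proxy_pass mssql;
}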

It "slowly increases", because usually a service has "hot data", that is to say, more than 80% or more requests, will actually be blocked in the "Hot data cache", the real implementation of processing requests only a small part. When the machine just started, "Hot data cache" has not actually been established, this time the launch of a large number of requests, it is likely to cause the machine can not "withstand" and hang again. To MySQL as an example, our MySQL query, usually more than 95% are falling in the memory cache, the actual implementation of the query is not much.

In fact, whether for a single machine or a cluster, restarting or failing over under high concurrency carries this risk. There are two main solutions:

(1) Ramp traffic up gradually, from few requests to many, so that hot data accumulates until the service reaches its normal state (see the sketch after this list).
(2) Prepare the "commonly used" data in advance to actively "warm up" the service, and only open the server to traffic after the warm-up completes.
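
Approach (1) is what the commercial NGINX Plus exposes as the slow_start server parameter, which ramps a recovered server's effective weight up from zero over the given period; a sketch (this parameter is not available in the open source build):

upstream mssql {
    server 10.0.1.201:1433 slow_start=30s;   # weight ramps back up over 30s after recovery
    server 10.0.1.202:1433 slow_start=30s;
}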

The principle of TCP load balancing is the same as LVS: it works at a lower layer, so its performance is much higher than ordinary HTTP load balancing. It will not outdo LVS, though: LVS sits in a kernel module, while nginx works in user space, and nginx is comparatively heavyweight. One more regrettable point: this module is actually a paid feature.

