Socket Sharding in NGINX 1.9.1

The 1.9.1 release of NGINX introduces a new feature that enables use of the SO_REUSEPORT socket option, which is available in newer versions of many operating systems, including DragonFly BSD and Linux (kernel version 3.9 and later). This socket option allows multiple sockets to listen on the same combination of IP address and port, and the kernel load-balances incoming connections across these sockets.
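At the socket level, the effect is easy to see in a minimal Python sketch (assuming Linux 3.9 or later, where Python exposes socket.SO_REUSEPORT; the port number is illustrative). Without the option, the second bind() would fail with EADDRINUSE:

import socket

# Create two TCP sockets and set SO_REUSEPORT on both before binding.
socks = []
for _ in range(2):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", 12345))   # same address and port for both sockets
    s.listen(128)
    socks.append(s)

print("two sockets are now listening on 127.0.0.1:12345")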

(For NGINX Plus customers, this feature will be available in NGINX Plus Release 7, due out later this year.)

The SO_REUSEPORT option has many potential applications. Other services can use it, for example, to implement rolling upgrades of a running executable more easily (NGINX already supports rolling upgrades by other means). For NGINX, enabling this option reduces lock contention in certain scenarios and can improve performance.

When the SO_REUSEPORT option is not enabled, a single listening socket notifies the worker processes of incoming connections, and each worker tries to accept each connection.

When the SO_REUSEPORT option is enabled, there are multiple socket listeners for each IP address and port combination, one for each worker process. The kernel determines which available socket listener (and, implicitly, which worker process) gets each connection. This reduces lock contention between worker processes accepting new connections (translator's note: contention for the accept mutex) and can improve performance on multicore systems. However, it also means that when a worker process is stalled by a blocking operation, the block affects not only the connections the worker has already accepted, but also the connection requests that the kernel has assigned to that worker since it became blocked.
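To make the per-worker listener model concrete, here is a minimal Python sketch of it (a hypothetical stand-in for NGINX workers, not NGINX's actual implementation; it assumes Linux 3.9+ and the illustrative port 12345). Each forked worker binds its own SO_REUSEPORT listener, and the kernel decides which worker's socket receives each incoming connection:

import os
import socket

NUM_WORKERS = 4
PORT = 12345  # illustrative port

def run_worker(worker_id):
    # Each worker opens its OWN listening socket on the shared port;
    # the kernel load-balances incoming connections across the sockets.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("0.0.0.0", PORT))
    s.listen(128)
    while True:
        # If this worker blocks anywhere in this loop, connections the
        # kernel has already queued on its socket must wait -- the caveat
        # described above.
        conn, addr = s.accept()
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK")
        conn.close()

for i in range(NUM_WORKERS):
    if os.fork() == 0:  # child process becomes a worker
        run_worker(i)

for _ in range(NUM_WORKERS):
    os.wait()  # parent waits; workers run until interrupted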

Configuring Shared Sockets

To enable the SO_REUSEPORT socket option, add the new reuseport parameter to the listen directive of an HTTP or TCP (stream module) server, as in the following example:

http {
    server {
        listen 80 reuseport;
        server_name localhost;
        ...
    }
}

stream {
    server {
        listen 12345 reuseport;
        ...
    }
}

Including the reuseport parameter disables accept_mutex for the listening socket, because the mutex is redundant when reuseport is in use. It can still be worth setting accept_mutex for ports on which reuseport is not used.
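For instance, a configuration along these lines (a minimal sketch; the port numbers are illustrative) keeps accept_mutex for a listener that does not use reuseport:

events {
    accept_mutex on;            # still applies to the plain listener below
}

http {
    server {
        listen 80 reuseport;    # accept_mutex is ignored for this socket
        ...
    }
    server {
        listen 8080;            # no reuseport, so accept_mutex applies here
        ...
    }
}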

Benchmarking the Performance of reuseport

I ran the wrk benchmarking tool against 4 NGINX worker processes on a 36-core AWS instance. To minimize network effects, both the client and NGINX ran on the same machine, and NGINX returned the string OK instead of a file. I compared three NGINX configurations: the default (equivalent to accept_mutex on), accept_mutex off, and reuseport. With reuseport, requests per second were two to three times higher than with the other two configurations, and both latency and its standard deviation decreased.

I ran a related performance test with the client and NGINX on separate machines and with NGINX returning an HTML file. As the following table shows, the reduction in latency with reuseport was similar to the previous test, and the drop in the standard deviation of latency was even more pronounced (close to ten-fold). Other results (not shown in the table) were also encouraging: with reuseport, the load was spread evenly across the worker processes, whereas in the default condition (equivalent to accept_mutex on) some workers received a higher share of the load, and with accept_mutex off all workers experienced high load.

                   Latency (ms)   Latency stdev (ms)   CPU load
Default               15.65            26.59             0.3
accept_mutex off      15.59            26.48            10
reuseport             12.35             3.15             0.3

In these benchmarks, the rate of incoming connections was high but the requests required little processing. Other preliminary tests likewise indicate that reuseport improves performance most when traffic matches this profile. (The reuseport parameter is not available on the listen directive in the mail context, for example for email, because mail traffic rarely matches this profile.) We encourage you to test first rather than applying reuseport wholesale to a large deployment; an example run is sketched below. For tips on testing NGINX performance, see Konstantin Pavlov's talk at the nginx.conf 2014 conference.
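As a starting point for such testing, a wrk run along these lines (the thread, connection, and duration values are illustrative, not those used in the benchmarks above) shows whether reuseport helps your workload; run it once per configuration and compare requests per second and latency:

wrk -t 36 -c 1000 -d 30s http://localhost/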

Acknowledgments

Thanks to Sepherosa Ziehau and Yingqi Lu, who each contributed a solution that enables NGINX to use the SO_REUSEPORT socket option. The NGINX team combined ideas from both contributions to create what it considers an ideal solution.

Original English article: Socket Sharding in NGINX Release 1.9.1
