Modifying Linux kernel parameters to improve the concurrency performance of the Nginx server


====================================================

When Nginx on Linux reaches a high level of concurrency, the number of TCP TIME_WAIT sockets often climbs to 20,000 or 30,000, and the server can easily be dragged down.

In fact, by simply modifying a few Linux kernel parameters, we can reduce the number of TIME_WAIT sockets on the Nginx server and thereby improve its concurrency performance.
vi /etc/sysctl.conf
Add the following lines:
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000

A brief description of each parameter:
net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN wait queue overflows, cookies are used to protect against a small-scale SYN flood attack. The default is 0 (off).
net.ipv4.tcp_tw_reuse = 1 enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. The default is 0 (off).
net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME_WAIT sockets. The default is 0 (off). (Note that this option is known to break connections from clients behind NAT, and it was removed entirely in Linux 4.12.)
net.ipv4.tcp_fin_timeout = 30 determines how long a socket stays in the FIN-WAIT-2 state after the local side has closed it.
net.ipv4.tcp_keepalive_time = 1200 sets how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; here it is reduced to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65000 sets the port range used for outbound connections. The default range (32768 to 61000) is small; here it is widened to 1024 to 65000.
net.ipv4.tcp_max_syn_backlog = 8192 sets the length of the SYN queue. The default is 1024; raising it to 8192 accommodates more connections waiting to complete the handshake.
net.ipv4.tcp_max_tw_buckets = 5000 caps the number of TIME_WAIT sockets the system keeps at the same time. If this number is exceeded, TIME_WAIT sockets are immediately cleared and a warning is printed. The default is 180000; here it is lowered to 5000. For servers such as Apache and Nginx, the preceding parameters already do a good job of reducing the number of TIME_WAIT sockets, but for Squid the effect is limited; this cap is what prevents a Squid server from being dragged down by a large number of TIME_WAIT sockets.
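Before and after applying these settings, you can check how many sockets sit in each TCP state. One quick way, assuming the iproute2 `ss` tool is installed:

```shell
# Count sockets by TCP state; a large TIME-WAIT count is the symptom
# these sysctl settings address. Column 1 of `ss -tan` is the state.
ss -tan | awk 'NR > 1 {counts[$1]++} END {for (s in counts) print s, counts[s]}'
```

A line such as `TIME-WAIT 23000` in the output confirms the problem described above.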

After changing the Linux kernel parameters, execute the following command to make the configuration take effect immediately:

/sbin/sysctl -p
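To confirm that a setting was actually applied, you can read the live value back. Every sysctl key maps to a file under /proc/sys, with the dots becoming slashes, so no special tooling is needed:

```shell
# Map a sysctl key to its /proc/sys path and read the live value.
key=net.ipv4.tcp_fin_timeout
path="/proc/sys/$(echo "$key" | tr . /)"
cat "$path"
```

On a machine where the settings above are in effect, this prints 30.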

Nginx optimization

Using the FastCGI cache

fastcgi_cache test;

This turns on the FastCGI cache and gives it a name. Personally, I find enabling the cache very useful for reducing CPU load and preventing 502 errors.

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=test:10m inactive=5m;

This directive specifies the path for the FastCGI cache, the directory hierarchy levels, the size of the shared-memory key zone, and how long an unused entry stays before it is deleted.
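Putting the two directives together, a minimal configuration sketch might look like the following. The paths and the zone name `test` are carried over from the examples above; the php-cgi address and the cache-key/validity choices are assumptions to make the fragment complete:

```nginx
http {
    # Cache storage: 2-level directory tree, 10 MB of keys in shared memory,
    # entries unused for 5 minutes are removed.
    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
                       keys_zone=test:10m inactive=5m;

    server {
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;    # assumed php-cgi FastCGI address
            fastcgi_cache test;             # use the zone defined above
            fastcgi_cache_key "$scheme$request_method$host$request_uri";
            fastcgi_cache_valid 200 302 5m; # cache successful responses 5 minutes
            include fastcgi_params;
        }
    }
}
```

Note that fastcgi_cache_key must be set for the cache to work; the key above is a common choice, not the only one.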

Other notes

Nginx was developed by Igor Sysoev for rambler.ru, the second most visited site in Russia, where it has been running in production for more than two and a half years. Igor released the source code under a BSD-like license.

Under high concurrency, Nginx is a good alternative to the Apache server. Nginx can also be used as a layer-7 load balancer. According to my test results, Nginx 0.6.31 + PHP 5.2.6 (FastCGI) can withstand more than 30,000 concurrent connections, roughly 10 times what Apache handles in the same environment.

In my experience, a server with 4 GB of memory running Apache (prefork mode) typically handles only about 3,000 concurrent connections, because the Apache processes consume more than 3 GB of memory and about 1 GB must be reserved for the system. I once ran two Apache servers with MaxClients set to 4000 in the configuration file; when concurrent connections reached about 3,800, memory and swap space filled up and the servers crashed.

By contrast, at 30,000 concurrent connections this Nginx 0.6.31 + PHP 5.2.6 (FastCGI) server runs 10 Nginx processes consuming about 150 MB of memory (15 MB × 10 = 150 MB) and 64 php-cgi processes consuming about 1280 MB (20 MB × 64 = 1280 MB); together with the memory used by the system itself, total consumption stays under 2 GB. If the server has less memory, you can run only 25 php-cgi processes, keeping php-cgi's total memory use around 500 MB.
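To check similar per-process figures on your own server, you can sum the resident set size of each group of processes. The process names below are assumptions; adjust them to match your setup (e.g. php-fpm instead of php-cgi):

```shell
# Sum the resident memory (RSS, reported in KB) of all processes whose
# command name matches, and print the total in MB.
mem_mb() {
    ps -C "$1" -o rss= | awk '{sum += $1} END {printf "%.0f\n", sum / 1024}'
}
mem_mb nginx
mem_mb php-cgi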

========================================

On the server, adjust the system parameters in /etc/sysctl.conf:

net.core.somaxconn = 2048
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
net.ipv4.tcp_mem = 786432 2097152 3145728
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 20000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_orphans = 131072

Run /sbin/sysctl -p to make these take effect.

The main items to look at:

net.ipv4.tcp_rmem configures the read buffer size with three values: the first is the minimum, the middle is the default, and the third is the maximum. A program can change its socket's read buffer size, but not beyond the minimum and maximum. To minimize the memory used by each socket, I set the default here to 4096.
net.ipv4.tcp_wmem configures the write buffer size in the same way.
The read and write buffer sizes directly affect how much kernel memory each socket occupies.
net.ipv4.tcp_mem sets TCP's overall memory thresholds, measured in pages, not bytes. When usage exceeds the second value, TCP enters "pressure" mode and tries to stabilize its memory use, exiting pressure mode when usage drops below the first value. When usage exceeds the third value, TCP refuses to allocate new sockets, and dmesg shows many "TCP: too many orphaned sockets" messages.
net.ipv4.tcp_max_orphans also needs to be set. It limits the number of sockets not attached to any process, which matters when we need to establish a large number of connections quickly. When the number of orphaned sockets exceeds this value, dmesg again shows "too many orphaned sockets".
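Since tcp_mem is measured in pages, converting the three thresholds to bytes helps sanity-check them against your RAM. A quick sketch using the system page size (typically 4096 bytes):

```shell
# Convert the tcp_mem page counts above to mebibytes using the page size.
page_size=$(getconf PAGESIZE)
for pages in 786432 2097152 3145728; do
    echo "$pages pages = $((pages * page_size / 1024 / 1024)) MiB"
done
```

With 4 KB pages this gives 3072, 8192, and 12288 MiB. You can also watch the current orphan count on the TCP line of /proc/net/sockstat.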

=======================================

