Optimization of Linux system and kernel parameters under high concurrency


It is well known that, with default parameters, Linux does not handle high concurrency well; it is mainly limited by the per-process maximum number of open files, the kernel TCP parameters, and the I/O event dispatch mechanism. The following adjustments in several areas enable a Linux system to support high-concurrency environments.

Iptables related

If you do not need it, turn off or uninstall the iptables firewall and prevent the kernel from loading the iptables modules. These modules can hurt performance under high concurrency.

Maximum open file limit for a single process

Most distributions limit a single process to 1024 open files by default, which is far from enough for high concurrency. The adjustment procedure is as follows.

At the # prompt, type:

# ulimit -n 65535

This sets the maximum number of files that a single process started by root can open to 65535. If the system responds with something like "Operation not permitted", the modification failed, because the specified value exceeds the soft or hard limit that the Linux system places on the number of open files for the user. You therefore need to modify the system's soft and hard limits on the number of open files for that user.

The first step is to modify the limits.conf file and add:

# vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536

Here '*' means the limit applies to all users; 'soft' or 'hard' specifies whether the soft or hard limit is being modified; 65536 is the new limit value, i.e. the maximum number of open files (note that the soft limit must be less than or equal to the hard limit). Save the file when you are done.

In the second step, modify the /etc/pam.d/login file and add the following line to it:

# vim /etc/pam.d/login
session required /lib/security/pam_limits.so

This tells Linux that, after a user logs in, the pam_limits.so module should be called to apply the system's limits on the resources that user may use, including the maximum number of files a user can open; pam_limits.so reads these limit values from the /etc/security/limits.conf file. Save this file when you are done.

The third step is to view the maximum number of open files at the Linux system level, using the following command:

# cat /proc/sys/fs/file-max
32568

This shows that this Linux system allows at most 32,568 files to be open at the same time (the total for all users); this is the system-level hard limit, and no user-level open file limit should exceed it. This hard limit is normally computed by the kernel at boot time, based on the system's hardware resources, as the best maximum number of simultaneously open files, and should not be changed unless you want to set a user-level limit above it. To change the hard limit, add the following line to /etc/sysctl.conf:

fs.file-max = 131072

This forces the system-level hard limit on the number of open files to 131072 after boot. Save the file when you are done.

After completing the above steps and rebooting, the system can generally be made to allow a single process of the specified user to open up to the specified number of files simultaneously. If, after rebooting, ulimit -n still reports a limit lower than the value set in the previous steps, it may be because a ulimit -n command in the user login script /etc/profile limits the number of files the user can open at the same time. A value set with ulimit -n can only be less than or equal to the value of the previous ulimit -n setting, so this command cannot be used to raise the limit. If you run into this problem, open the /etc/profile script, check whether ulimit -n is used there to cap the maximum number of files the user can open; if so, delete that line or set it to an appropriate value, save the file, and have the user log out and log back in.

With the above steps, the open file limits no longer stand in the way of communication programs that handle large numbers of concurrent TCP connections.
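Besides the shell-level ulimit setting, a long-running server can also check, and where the hard limit allows, raise its own descriptor limit at startup. Below is a minimal C sketch using the standard getrlimit()/setrlimit() interface; the target value of 65535 is only an illustration and must stay within the hard limit configured above.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current soft and hard limits on open file descriptors. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft=%lu hard=%lu\n",
           (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);

    /* Raise the soft limit as far as the hard limit allows; 65535 is illustrative. */
    rl.rlim_cur = (rl.rlim_max < 65535) ? rl.rlim_max : 65535;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}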

Kernel TCP parameters

Under Linux, after a TCP connection is closed it stays in the TIME_WAIT state for a certain time before its port is released. When there are many concurrent requests, a large number of connections accumulate in the TIME_WAIT state; if they are not cleaned up in time, they consume a lot of port and server resources. At that point we can tune the TCP kernel parameters so that ports stuck in TIME_WAIT are cleaned up promptly.

The methods described below are only effective on systems whose resources are being consumed by a large number of TIME_WAIT connections; otherwise the effect may not be noticeable. You can use the netstat command to check connections in the TIME_WAIT state. Enter the following combined command to see the current TCP connection states and the number of connections in each:

# netstat -n | awk '/^tcp/ {++s[$NF]} END {for (a in s) print a, s[a]}'

This command will output a result similar to the following:

LAST_ACK 16
SYN_RECV 348
ESTABLISHED 70
FIN_WAIT1 229
FIN_WAIT2 30
CLOSING 33
TIME_WAIT 18098

We only care about the TIME_WAIT count. Here there are more than 18,000 connections in TIME_WAIT, which means more than 18,000 ports are occupied. Keep in mind that there are only 65,535 ports in total, so every one held this way matters, and running out of them will seriously affect new connections. In this case, we need to adjust the Linux TCP kernel parameters so that the system releases TIME_WAIT connections faster.

Edit the configuration file /etc/sysctl.conf:

# vim /etc/sysctl.conf

In this file, add the following lines of content:

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30

Enter the following command to have the kernel parameters take effect:

# sysctl -p

A brief description of the parameters above:

net.ipv4.tcp_syncookies = 1

# Enable SYN cookies. When the SYN wait queue overflows, cookies are used to protect against a limited amount of SYN flooding. Default is 0 (disabled).

net.ipv4.tcp_tw_reuse = 1

# Enable reuse. Allows sockets in the TIME_WAIT state to be reused for new TCP connections. Default is 0 (disabled).

net.ipv4.tcp_tw_recycle = 1

# Enable fast recycling of TIME_WAIT sockets on TCP connections. Default is 0 (disabled).

net.ipv4.tcp_fin_timeout = 30

# Modify the system's default timeout (how long a locally closed socket stays in FIN-WAIT-2).

After these adjustments, in addition to further increasing the server's load capacity, they also help defend against low-volume DoS, CC and SYN attacks.

In addition, if the server handles a large number of connections, we can optimize the TCP port range to further improve its concurrency. In the same file, add the following configuration:

net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000

These parameters are only recommended on servers with very heavy traffic, where they can have a significant effect. On servers with light traffic there is no need to set them.

net.ipv4.tcp_keepalive_time = 1200

# How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; here it is changed to 20 minutes.
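Note that this interval only matters for sockets that actually enable keepalive. The following is a minimal C sketch of how an application turns keepalive on and, on Linux, overrides the system-wide values for a single socket; the helper name enable_keepalive and the idle/interval/count values are illustrative, and sock is assumed to be an already connected TCP socket.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalive on a connected TCP socket and, on Linux, override the
 * system-wide sysctl values for this socket only. */
int enable_keepalive(int sock)
{
    int on = 1;
    int idle = 1200;   /* seconds of idle time before the first probe, matching the sysctl above */
    int intvl = 30;    /* seconds between probes (illustrative) */
    int cnt = 5;       /* failed probes before the connection is declared dead (illustrative) */

    if (setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
        return -1;
    return setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
}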

net.ipv4.ip_local_port_range = 1024 65535

# The range of ports used for outgoing connections. The default range is quite small; here it is widened to 1024 through 65535.

net.ipv4.tcp_max_syn_backlog = 8192

# The length of the SYN queue. The default is 1024; increasing it to 8192 accommodates more connections waiting to be established.

net.ipv4.tcp_max_tw_buckets = 5000

# The maximum number of TIME_WAIT sockets the system keeps at the same time. Beyond this number, TIME_WAIT sockets are cleared immediately and a warning message is printed. The default is 180000; here it is lowered to 5000.

Additional kernel TCP parameter description:

net.ipv4.tcp_max_syn_backlog = 65536

# The maximum number of recorded connection requests that have not yet received an acknowledgement from the client. The default is 1024 for systems with 128 MB of memory and 128 for low-memory systems.

net.core.netdev_max_backlog = 32768

# The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.core.somaxconn = 32768

# For example, the backlog passed to listen() in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
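To make this interplay concrete, here is a minimal C sketch of creating a listening socket; the helper name make_listener, the port and the backlog of 511 are illustrative. Whatever backlog the application requests, the kernel silently caps it at net.core.somaxconn, so raising only the application-side value has no effect until the sysctl is raised as well.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Create a listening TCP socket. Whatever backlog is requested here,
 * the kernel caps it at net.core.somaxconn. */
int make_listener(unsigned short port, int backlog)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, backlog) < 0)          /* e.g. backlog 511, as nginx requests */
        return -1;

    return fd;
}

For instance, make_listener(8080, 511) asks for a backlog of 511, but with the default somaxconn of 128 the effective queue is still only 128.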

net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
# Maximum socket read buffer; reference optimized value: 873200.
net.core.wmem_max = 16777216
# Maximum socket write buffer; reference optimized value: 873200.
net.ipv4.tcp_timestamps = 0

# Timestamps help avoid sequence number wraparound. A 1 Gbps link will certainly encounter sequence numbers that have been used before; timestamps let the kernel accept such "abnormal" packets. Here they are turned off.

net.ipv4.tcp_synack_retries = 2

# To open a connection to the peer, the kernel sends a SYN together with an ACK acknowledging the peer's earlier SYN, i.e. the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 2

# The number of SYN packets sent before the kernel gives up establishing the connection.

# net.ipv4.tcp_tw_len = 1
net.ipv4.tcp_tw_reuse = 1

# Enable reuse. Allows TIME_WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_wmem = 8192 436600 873200

# TCP write buffer; reference optimized values: 8192 436600 873200.

net.ipv4.tcp_rmem = 32768 436600 873200

# TCP read buffer; reference optimized values: 32768 436600 873200.

net.ipv4.tcp_mem = 94500000 91500000 92700000

# This also takes three values, which mean:

net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.

net.ipv4.tcp_mem[1]: at this value, TCP enters the memory pressure stage.

net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate new sockets.

The units above are pages, not bytes (with 4 KiB pages, 786432 pages is 3 GiB). A reference optimized value is: 786432 1048576 1572864.

net.ipv4.tcp_max_orphans = 3276800

# The maximum number of TCP sockets in the system that are not attached to any user file handle.

If this number is exceeded, such connections are reset immediately and a warning message is printed.

This limit exists only to prevent simple DoS attacks; do not rely on it too much or lower the value artificially.

If anything, this value should be increased (when memory is increased).

net.ipv4.tcp_fin_timeout = 30

# If the socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side of the connection, or even go down unexpectedly. The default value is 60 seconds; the usual value in 2.2-series kernels was 180 seconds. You can keep this setting, but remember that even on a lightly loaded web server a large number of dead sockets carries a risk of memory exhaustion. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most about 1.5 KB of memory, but these sockets can live longer.

TCP congestion control is also relevant. You can use the following command to see which congestion control algorithm modules this machine provides:

# sysctl net.ipv4.tcp_available_congestion_control

For an analysis of the individual algorithms, refer to discussions of the advantages, disadvantages, applicable environments and performance of the TCP congestion control algorithms. For example, you can try hybla on high-latency links and the htcp algorithm for moderate latency.

To set the TCP congestion control algorithm to hybla:

net.ipv4.tcp_congestion_control = hybla

Additionally, for kernel versions above 3.7.1, we can turn on tcp_fastopen:

net.ipv4.tcp_fastopen = 3
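The value 3 enables Fast Open for both outgoing (client) and incoming (server) connections. On the server side, the application must also opt in on each listening socket. Here is a minimal C sketch, assuming a glibc that defines TCP_FASTOPEN; the helper name enable_tfo and the queue length of 256 are illustrative, and listen_fd is assumed to be an already bound listening socket.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Opt a bound listening socket in to TCP Fast Open (Linux 3.7+).
 * qlen is the maximum number of pending Fast Open requests. */
int enable_tfo(int listen_fd)
{
    int qlen = 256;  /* illustrative queue length */
    return setsockopt(listen_fd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
}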

I/O event dispatch mechanism

To support a large number of concurrent TCP connections on Linux, you must also make sure the application uses an appropriate network I/O model and I/O event dispatch mechanism. The available I/O models are blocking synchronous I/O, non-blocking synchronous I/O, and asynchronous I/O. Under high TCP concurrency, blocking synchronous I/O severely stalls the program unless a thread is created for each connection's I/O, but too many threads in turn impose a large scheduling overhead on the system. Blocking synchronous I/O is therefore not advisable at high TCP concurrency; consider non-blocking synchronous I/O or asynchronous I/O instead. Non-blocking synchronous I/O techniques include using select(), poll(), epoll, and so on; asynchronous I/O means using AIO.

In terms of the I/O event dispatch mechanism, using select() is inappropriate because it supports only a limited number of concurrent connections (typically at most 1024). If performance matters, poll() is also inappropriate: although it can support a higher number of concurrent TCP connections, its "polling" mechanism makes it very inefficient at high concurrency, and it may distribute I/O events unevenly, so that I/O on some TCP connections is "starved". Epoll and AIO do not have these problems (the AIO implementation in earlier Linux kernels created a kernel thread for each I/O request, which itself performed poorly with many concurrent TCP connections, but the AIO implementation has been improved in recent kernels).
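As an illustration of the epoll approach, here is a minimal sketch of a level-triggered epoll accept/echo loop. The port 8080, the backlog of 511 and the buffer size are illustrative, and most error handling is omitted; a production server would use non-blocking sockets, handle EAGAIN, and usually prefer edge-triggered mode.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 1024

int main(void)
{
    /* Listening socket on an illustrative port; error checks are omitted for brevity. */
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 511);                        /* backlog capped at net.core.somaxconn */

    /* One epoll instance watches the listener and every accepted connection. */
    int epfd = epoll_create1(0);
    struct epoll_event ev = { 0 }, events[MAX_EVENTS];
    ev.events = EPOLLIN;
    ev.data.fd = lfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev);

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == lfd) {
                /* New connection: accept it and register it with epoll. */
                int cfd = accept(lfd, NULL, NULL);
                ev.events = EPOLLIN;
                ev.data.fd = cfd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &ev);
            } else {
                /* Data ready: echo it back; close on EOF or error. */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) {
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    write(fd, buf, (size_t)r);
                }
            }
        }
    }
}

The key point is that a single epoll instance monitors every connection, so the number of threads no longer has to grow with the number of connections.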

In summary, when developing Linux applications that must support a large number of concurrent TCP connections, you should use epoll or AIO to handle I/O on those connections; this is what effectively allows the program to sustain high numbers of concurrent TCP connections.

After such optimization, the server's capacity for handling concurrent TCP connections increases significantly. The configuration above is for reference only; in a production environment, please observe and adjust it according to your actual situation.

This article is from the "XUJPXM" blog, make sure to keep this source http://xujpxm.blog.51cto.com/8614409/1958881

