CentOS Kernel parameter performance optimization

Brief introduction

There are many ways to improve server performance, such as splitting out dedicated image servers, setting up master-slave database servers, and separating out Web servers. However, with limited hardware resources, squeezing the maximum performance out of a server and improving its ability to handle concurrent connections are problems that many operations engineers think about. To improve load capacity on a Linux system, you can use a Web server such as Nginx, which handles concurrency well natively; if you use Apache, you can enable its worker mode to improve concurrent processing. In addition, to save costs, you can tune the Linux kernel's TCP-related parameters to get the most out of the server. Of course, the most fundamental way to solve a load problem is still to upgrade the server hardware.

TIME_WAIT

Under Linux, when a TCP connection is closed, it is kept in the TIME_WAIT state for a certain amount of time before the port is released. When there are too many concurrent requests, a large number of connections accumulate in the TIME_WAIT state; because they are not torn down promptly, they consume a lot of port and server resources. At that point, we can tune the TCP kernel parameters so that ports held in the TIME_WAIT state are cleaned up in time.

The method described in this article is only effective when a large number of TIME_WAIT connections are consuming system resources; otherwise the effect may not be obvious. You can use the netstat command to check the connections in the TIME_WAIT state. Enter the following command combination to see the current TCP connection states and the number of connections in each:

netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'

This command will output a result similar to the following:

LAST_ACK 16
SYN_RECV 348
ESTABLISHED 70
FIN_WAIT1 229
FIN_WAIT2 30
CLOSING 33
TIME_WAIT 18098

We only care about the number of TIME_WAIT connections. Here we can see there are more than 18,000 of them, which means more than 18,000 ports are occupied. Keep in mind that there are only 65,535 ports in total, and every one that is tied up hurts subsequent new connections. In this case, it is necessary to adjust the Linux TCP kernel parameters so that the system releases TIME_WAIT connections faster.
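On newer systems where the netstat command is not installed, the ss tool from iproute2 can produce a similar breakdown. A minimal sketch, assuming ss is available (this command is an illustration, not part of the original article):

ss -ant | awk 'NR > 1 {++s[$1]} END {for (a in s) print a, s[a]}'
# NR > 1 skips the header line; column 1 of ss output is the socket state (e.g. TIME-WAIT)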

Open the configuration file with vim: # vim /etc/sysctl.conf

In this file, add the following lines:

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30

Enter the following command for the kernel parameters to take effect: # sysctl -p
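If you want to try a value immediately without editing the file, sysctl -w can set a single parameter at runtime, and reading the parameter back confirms it took effect. An illustrative example (the value shown is simply the one recommended above):

# Apply one parameter temporarily (lost on reboot unless it is also added to /etc/sysctl.conf)
sysctl -w net.ipv4.tcp_fin_timeout=30
# Read it back to verify
sysctl net.ipv4.tcp_fin_timeout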

A brief description of the parameters above:

net.ipv4.tcp_syncookies = 1
# Enable SYN cookies. When the SYN wait queue overflows, cookies are used to handle the connections, which guards against small-scale SYN flood attacks. The default is 0 (disabled).

net.ipv4.tcp_tw_reuse = 1
# Enable reuse. Allows TIME_WAIT sockets to be reused for new TCP connections. The default is 0 (disabled).

net.ipv4.tcp_tw_recycle = 1
# Enable fast recycling of TIME_WAIT sockets in TCP connections. The default is 0 (disabled).

net.ipv4.tcp_fin_timeout = 30
# Change the system's default FIN timeout.

After this adjustment, besides further improving the server's load capacity, it also helps protect against low-volume DoS, CC, and SYN flood attacks.

In addition, if you have a large number of connections, you can optimize the TCP port range to further improve the server's concurrency. Add the following settings to the same file:

net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 6000

# These parameters are recommended only for servers carrying very heavy traffic, where they can have a noticeable effect. On servers with ordinary, light traffic there is no need to set them.

net.ipv4.tcp_keepalive_time = 1200
# The interval at which TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; this changes it to 20 minutes.

net.ipv4.ip_local_port_range = 10000 65000
# The range of ports used for outgoing connections. The default is quite small, 32768 to 61000; this changes it to 10000 to 65000. (Note: do not set the minimum too low, otherwise it may take over ports used by normal services!)
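Before widening the range, it can help to see what is currently configured and roughly how many local ports are already in use. A rough check, assuming ss and awk are available (illustrative, not from the original text):

# Current ephemeral port range
cat /proc/sys/net/ipv4/ip_local_port_range
# Approximate count of distinct local TCP ports in use (column 4 of ss output is Local Address:Port)
ss -tan | awk 'NR > 1 {n = split($4, a, ":"); ports[a[n]] = 1} END {c = 0; for (p in ports) c++; print c}'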

net.ipv4.tcp_max_syn_backlog = 8192
# The length of the SYN queue. The default is 1024; increasing it to 8192 allows more connections waiting to be established to be queued.

net.ipv4.tcp_max_tw_buckets = 6000
# The maximum number of TIME_WAIT sockets the system keeps at the same time. If this number is exceeded, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; this changes it to 6000. For servers such as Apache and Nginx, the parameters in the preceding lines already do a good job of reducing the number of TIME_WAIT sockets, but for Squid the effect is limited. This parameter caps the number of TIME_WAIT sockets and keeps a Squid server from being dragged down by a large number of them.
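To judge whether this cap is actually being approached on a given machine, you can count the sockets currently in that state. A quick check, assuming a reasonably recent ss (illustrative only):

# Number of TCP sockets in TIME_WAIT (subtract 1 for the header line)
ss -tan state time-wait | wc -l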

Descriptions of additional kernel TCP parameters:

net.ipv4.tcp_max_syn_backlog = 65536
# The maximum number of recorded connection requests that have not yet received an acknowledgment from the client. For systems with 128 MB of memory the default is 1024; for systems with less memory it is 128.

net.core.netdev_max_backlog = 32768
# The maximum number of packets allowed to be queued when a network interface receives packets faster than the kernel can process them.

net.core.somaxconn = 32768
# The backlog passed to listen() by web applications is limited by the kernel parameter net.core.somaxconn to 128 by default, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
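Raising somaxconn only helps if the application also asks for a larger backlog in its listen() call. As an illustrative example (the value 8192 is an assumption, not from the original article), nginx lets you set it on the listen directive:

# in nginx.conf, inside a server block
listen 80 backlog=8192;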

net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216 # Maximum socket read buffer; reference optimized value: 873200
net.core.wmem_max = 16777216 # Maximum socket write buffer; reference optimized value: 873200
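Whether a 16 MB maximum buffer is enough depends on the bandwidth-delay product of your links. A back-of-the-envelope check (the link speed and RTT here are assumptions for illustration):

# A 1 Gbit/s link with a 20 ms RTT needs roughly this many bytes in flight:
echo $(( 1000000000 / 8 * 20 / 1000 ))   # 2500000, well under the 16777216-byte maximum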

net.ipv4.tcp_timestamps = 0
# Timestamps help avoid sequence number wraparound. A 1 Gbps link is certain to re-encounter sequence numbers that were used before; timestamps let the kernel accept such "abnormal" packets. Here they are turned off.

net.ipv4.tcp_synack_retries = 2
# To open a connection to the peer, the kernel sends a SYN together with an ACK acknowledging the earlier SYN, i.e. the second packet of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 2
# The number of SYN packets sent before the kernel gives up establishing the connection.

# net.ipv4.tcp_tw_len = 1

net.ipv4.tcp_tw_reuse = 1
# Enable reuse, allowing TIME_WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_wmem = 8192 436600 873200
# TCP write buffer; reference optimized value: 8192 436600 873200

net.ipv4.tcp_rmem = 32768 436600 873200
# TCP read buffer; reference optimized value: 32768 436600 873200
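The three fields of tcp_rmem and tcp_wmem are the minimum, default, and maximum buffer sizes in bytes (this interpretation comes from general kernel documentation, not from the original article). To see the values currently in effect:

sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem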

net.ipv4.tcp_mem = 94500000 91500000 92700000
# This parameter also has three values, meaning:
net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.
net.ipv4.tcp_mem[1]: at this value, TCP enters the memory-pressure stage.
net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate sockets.
The memory units above are pages, not bytes. A reference optimized value is: 786432 1048576 1572864
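Because tcp_mem is counted in pages, converting to bytes requires the page size. A small worked example, assuming the usual 4 KB pages on x86_64:

# High-water mark of the reference value in bytes (1572864 pages * page size)
echo $(( 1572864 * $(getconf PAGE_SIZE) ))   # about 6 GB with 4096-byte pages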

net.ipv4.tcp_max_orphans = 3276800
# The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, the orphaned connections are immediately reset and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too much or lower it artificially. If anything, the value should be increased (if memory is added).

net.ipv4.tcp_fin_timeout = 30
# If the socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side of the connection, or even crash unexpectedly. The default is 60 seconds; the usual value in 2.2-series kernels was 180 seconds. You can keep this setting, but remember that even on a lightly loaded web server, a large number of dead sockets carries a risk of memory exhaustion. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most about 1.5 KB of memory, but these sockets can live longer.

With this optimized configuration, your server's TCP concurrency will improve noticeably. The configuration above is for reference only; for production environments, please adjust it according to your actual situation.

