Brief introduction
There are many ways to improve server performance, such as splitting images onto dedicated servers, setting up master-slave database servers, and separating out web servers. However, with limited hardware resources, squeezing the maximum performance out of a server and raising its concurrency are problems that many operations engineers think about. To improve load capacity under Linux, you can use a web server with strong native concurrency handling, such as Nginx; if you use Apache, you can enable its worker MPM to improve concurrency. In addition, to save costs, you can tune the Linux kernel's TCP-related parameters to maximize server performance. Of course, the most fundamental way to improve load capacity is still to upgrade the server hardware.
TIME_WAIT
Under Linux, when a TCP connection is closed, it is held in the TIME_WAIT state for a certain period of time before the port is released. When there are too many concurrent requests, a large number of connections accumulate in the TIME_WAIT state; because they cannot be torn down in time, they tie up many ports and other server resources. At this point, we can tune the TCP kernel parameters so that ports stuck in TIME_WAIT are cleaned up promptly.
The method described in this article is only effective for systems whose resources are being consumed by a large number of connections in the TIME_WAIT state; otherwise the effect may not be obvious. You can use the netstat command to check for connections in the TIME_WAIT state. Enter the following combined command to see the current TCP connection states and the number of connections in each:
netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
This command will output a result similar to the following:
LAST_ACK 16
SYN_RECV 348
ESTABLISHED 70
FIN_WAIT1 229
FIN_WAIT2 30
CLOSING 33
TIME_WAIT 18098
We only care about the number of TIME_WAIT connections. Here we can see more than 18,000 of them, occupying more than 18,000 ports. Keep in mind that there are only 65,535 ports in total; every occupied port is one fewer available, and this will seriously affect subsequent new connections. In this case, it is necessary to adjust the TCP kernel parameters under Linux so that the system releases TIME_WAIT connections faster.
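On systems that ship the newer ss tool from iproute2, an equivalent tally can be produced with the one-liner below. This is a minimal sketch that assumes ss prints the connection state in its first column under a single header line:

ss -ant | awk 'NR > 1 {++s[$1]} END {for (k in s) print k, s[k]}'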
Open the configuration file with vim: # vim /etc/sysctl.conf
In this file, add the following lines of content:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
Enter the following command to make the kernel parameters take effect: # sysctl -p
A brief description of what the above parameters mean:
net.ipv4.tcp_syncookies = 1
# Enables SYN cookies. When the SYN wait queue overflows, cookies are used to protect against small-scale SYN flood attacks. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1
# Enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1
# Enables fast recycling of TIME_WAIT sockets. The default is 0 (disabled).
net.ipv4.tcp_fin_timeout = 30
# Changes the system's default FIN-WAIT-2 timeout.
After this adjustment, besides further increasing the server's load capacity, these settings also help defend against low-volume DoS, CC, and SYN flood attacks.
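Before persisting anything, it can help to trial a single parameter on the running system and confirm it took hold. A minimal sketch; net.ipv4.tcp_fin_timeout is just the example key here:

sysctl -w net.ipv4.tcp_fin_timeout=30    # apply immediately (not persistent across reboots)
cat /proc/sys/net/ipv4/tcp_fin_timeout   # confirm the running value
sysctl -p                                # reload all settings from /etc/sysctl.conf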
In addition, if you have a large number of connections, you can optimize the TCP port range to further improve the server's concurrency. Add the following configuration to the same file:
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 6000
# These parameters are recommended only on servers with very heavy traffic, where they can have a significant effect. On servers with light traffic, there is no need to set them.
net.ipv4.tcp_keepalive_time = 1200
# The interval at which TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; here it is changed to 20 minutes.
net.ipv4.ip_local_port_range = 10000 65000
# The port range used for outbound connections. The default is small, 32768 to 61000; here it is changed to 10000 to 65000. (Note: do not set the minimum too low, or it may take over normal service ports!)
net.ipv4.tcp_max_syn_backlog = 8192
# The length of the SYN queue. The default is 1024; enlarging the queue to 8192 accommodates more network connections waiting to be established.
net.ipv4.tcp_max_tw_buckets = 6000
# The maximum number of TIME_WAIT sockets the system keeps at the same time. If this number is exceeded, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; here it is changed to 6000. For servers such as Apache and Nginx, the parameters in the preceding lines greatly reduce the number of TIME_WAIT sockets, but for Squid the effect is small. This parameter controls the maximum number of TIME_WAIT sockets and keeps a Squid server from being dragged down by huge numbers of them.
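To check that the TIME_WAIT count really stays below tcp_max_tw_buckets, you can watch it live. A sketch that assumes iproute2's ss with its state filter:

watch -n 1 'ss -nt state time-wait | tail -n +2 | wc -l'   # refresh the TIME_WAIT count every second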
Descriptions of additional kernel TCP parameters:
net.ipv4.tcp_max_syn_backlog = 65536
# The maximum number of recorded connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128 MB of memory, and 128 for systems with less memory.
net.core.netdev_max_backlog = 32768
# The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
net.core.somaxconn = 32768
# The backlog passed to the listen() function in web applications is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so it is necessary to raise this value (a quick way to inspect this is sketched at the end of this section).
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216 # maximum socket read buffer; reference optimized value: 873200
net.core.wmem_max = 16777216 # maximum socket write buffer; reference optimized value: 873200
net.ipv4.tcp_timestamps = 0
# Timestamps guard against sequence-number wraparound. A 1 Gbps link is certain to re-encounter previously used sequence numbers, and timestamps let the kernel accept such "abnormal" packets. Here it is turned off.
net.ipv4.tcp_synack_retries = 2
# To open a connection to the peer, the kernel sends a SYN together with an ACK acknowledging the earlier SYN: the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.
net.ipv4.tcp_syn_retries = 2
# The number of SYN packets sent before the kernel gives up establishing the connection.
# net.ipv4.tcp_tw_len = 1
net.ipv4.tcp_tw_reuse = 1
# Enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections.
net.ipv4.tcp_wmem = 8192 436600 873200
# TCP write buffer; reference optimized value: 8192 436600 873200
net.ipv4.tcp_rmem = 32768 436600 873200
# TCP read buffer; reference optimized value: 32768 436600 873200
net.ipv4.tcp_mem = 94500000 91500000 92700000
# Also three values, meaning:
# net.ipv4.tcp_mem[0]: below this value, TCP has no memory pressure.
# net.ipv4.tcp_mem[1]: above this value, TCP enters the memory-pressure stage.
# net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate sockets.
# The units above are memory pages, not bytes (see the conversion sketch after this parameter list). A reference optimized value is: 786432 1048576 1572864
net.ipv4.tcp_max_orphans = 3276800
# The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too heavily or lower it artificially. If anything, this value should be increased (along with the memory).
net.ipv4.tcp_fin_timeout = 30
# If the socket was closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side of the connection, or even crash unexpectedly. The default is 60 seconds; the usual value in 2.2-series kernels was 180 seconds. You can keep that setting, but remember that even on a lightly loaded web server, a large backlog of dead sockets carries a risk of memory exhaustion. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most about 1.5 KB of memory, but these sockets live longer.
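Since net.ipv4.tcp_mem is expressed in pages, it helps to convert to bytes when sizing it. A minimal worked example using the reference value of 786432 pages quoted above:

page=$(getconf PAGESIZE)      # usually 4096 bytes on x86
echo $(( 786432 * page ))     # 3221225472 bytes = 3 GiB with 4 KiB pages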
With such an optimized configuration, your server's TCP concurrency capacity will increase significantly. The above configuration is for reference only; for production environments, please adjust it according to your actual situation.
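To see how the net.core.somaxconn cap discussed above interacts with a service's listen() backlog (the sketch promised in that item), you can inspect listening sockets directly. This assumes iproute2's ss, where for sockets in the LISTEN state the Send-Q column shows the effective backlog limit and Recv-Q the current accept-queue length:

cat /proc/sys/net/core/somaxconn   # the global cap applied to every listen() backlog
ss -lnt                            # per-socket backlog limits appear in the Send-Q column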
Reposted from:
Linux (CentOS) network kernel parameter optimization to improve server concurrency - CSDN blog
http://blog.csdn.net/shaobingj126/article/details/8549494
Linux TCP/IP kernel parameter optimization
Reposted from:
Linux TCP/IP kernel parameter optimization - Jessica - cnblogs
http://www.cnblogs.com/wuchanming/p/4028341.html
The /proc/sys/net directory
All TCP/IP parameters are located under the /proc/sys/net directory (note that modifications to anything under /proc/sys/net are temporary; any changes are lost after the system restarts). Important parameters include the following (a direct-write example follows the table):
| Parameter (path + file) | Description | Default value | Optimized value |
| --- | --- | --- | --- |
| /proc/sys/net/core/rmem_default | The default TCP data receive window size (bytes). | 229376 | 256960 |
| /proc/sys/net/core/rmem_max | The maximum TCP data receive window (bytes). | 131071 | 513920 |
| /proc/sys/net/core/wmem_default | The default TCP data send window size (bytes). | 229376 | 256960 |
| /proc/sys/net/core/wmem_max | The maximum TCP data send window (bytes). | 131071 | 513920 |
| /proc/sys/net/core/netdev_max_backlog | The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them. | 1000 | 2000 |
| /proc/sys/net/core/somaxconn | The maximum listen queue length for each port in the system; a global parameter. | 128 | 2048 |
| /proc/sys/net/core/optmem_max | The maximum ancillary buffer size allowed per socket. | 20480 | 81920 |
| /proc/sys/net/ipv4/tcp_mem | Determines how the TCP stack responds to memory usage; each value is in memory pages (usually 4 KB). The first value is the lower bound on memory usage. The second is the point at which memory-pressure mode begins to apply pressure to buffer usage. The third is the upper bound, at which packets can be discarded to reduce memory usage. For larger BDPs these values can be increased (note that the units are memory pages, not bytes). | 94011 125351 188022 | 131072 262144 524288 |
| /proc/sys/net/ipv4/tcp_rmem | Memory used by sockets for receive auto-tuning. The first value is the minimum number of bytes allocated for the socket receive buffer; the second is the default (overridden by rmem_default), up to which the buffer can grow under light system load; the third is the maximum receive buffer size (overridden by rmem_max). | 4096 87380 4011232 | 8760 256960 4088000 |
| /proc/sys/net/ipv4/tcp_wmem | Memory used by sockets for send auto-tuning. The first value is the minimum number of bytes allocated for the socket send buffer; the second is the default (overridden by wmem_default), up to which the buffer can grow under light system load; the third is the maximum send buffer size (overridden by wmem_max). | 4096 16384 4011232 | 8760 256960 4088000 |
| /proc/sys/net/ipv4/tcp_keepalive_time | The interval (seconds) at which TCP sends keepalive probe messages to confirm that a TCP connection is still valid. | 7200 | 1800 |
| /proc/sys/net/ipv4/tcp_keepalive_intvl | The interval (seconds) between retransmitted probe messages when a probe gets no response. | 75 | 30 |
| /proc/sys/net/ipv4/tcp_keepalive_probes | The maximum number of keepalive probe messages sent before a TCP connection is declared dead. | 9 | 3 |
| /proc/sys/net/ipv4/tcp_sack | Enables selective acknowledgment (1 = enabled), which improves performance by selectively acknowledging segments received out of order so the sender retransmits only the missing segments. It should be enabled (for WAN communication), but it increases CPU usage. | 1 | 1 |
| /proc/sys/net/ipv4/tcp_fack | Enables forward acknowledgment, which works with selective acknowledgment (SACK) to reduce congestion; should also be enabled. | 1 | 1 |
| /proc/sys/net/ipv4/tcp_timestamps | TCP timestamps (which add 12 bytes to the TCP header) enable RTT to be computed more precisely than via retransmission timeouts alone (see RFC 1323); this option should be enabled for better performance. | 1 | 1 |
| /proc/sys/net/ipv4/tcp_window_scaling | Enables window scaling as defined by RFC 1323. To support TCP windows larger than 64 KB, this must be enabled (1 = enabled). Windows of up to 1 GB are possible, and take effect only when both ends of the TCP connection enable window scaling. | 1 | 1 |
| /proc/sys/net/ipv4/tcp_syncookies | Whether TCP SYN cookies are enabled (the kernel must be compiled with CONFIG_SYN_COOKIES); SYN cookies prevent a socket from being overwhelmed when too many connection attempts arrive. | 1 | 1 |
| /proc/sys/net/ipv4/tcp_tw_reuse | Whether sockets in the TIME-WAIT state may be reused for new TCP connections. | 0 | 1 |
| /proc/sys/net/ipv4/tcp_tw_recycle | Allows TIME-WAIT sockets to be recycled more quickly. | 0 | 1 |
| /proc/sys/net/ipv4/tcp_fin_timeout | How long (seconds) TCP remains in the FIN-WAIT-2 state for a socket closed by the local end; the peer may fail, never close the connection, or die unexpectedly. | 60 | 30 |
| /proc/sys/net/ipv4/ip_local_port_range | The range of local port numbers the TCP/UDP protocols are allowed to use. | 32768 61000 | 1024 65000 |
| /proc/sys/net/ipv4/tcp_max_syn_backlog | The maximum number of queued connection requests that have not yet been acknowledged by the peer. If the server is frequently overloaded, try increasing this number. | 2048 | 2048 |
| /proc/sys/net/ipv4/tcp_low_latency | Allows the TCP/IP stack to trade high throughput for lower latency; this option should be disabled. | 0 | |
| /proc/sys/net/ipv4/tcp_westwood | Enables the TCP Westwood sender-side congestion control algorithm, which maintains an estimate of throughput and tries to optimize overall bandwidth utilization; should be enabled for WAN traffic. | 0 | |
| /proc/sys/net/ipv4/tcp_bic | Enables Binary Increase Congestion control, intended for fast long-distance networks, to make better use of links operating at gigabit speeds; should be enabled for WAN traffic. | 1 | |
The /etc/sysctl.conf file
/etc/sysctl.conf is an interface for changing a running Linux system. It contains advanced options for the TCP/IP stack and the virtual memory system and can be used to control Linux network configuration. Because the contents of /proc/sys/net are temporary, it is recommended to put TCP/IP parameter changes in the /etc/sysctl.conf file, save the file, and then run "/sbin/sysctl -p" to make them take effect immediately. The specific changes follow the plan above:
net.core.rmem_default = 256960
net.core.rmem_max = 513920
net.core.wmem_default = 256960
net.core.wmem_max = 513920
net.core.netdev_max_backlog = 2000
net.core.somaxconn = 2048
net.core.optmem_max = 81920
net.ipv4.tcp_mem = 131072 262144 524288
net.ipv4.tcp_rmem = 8760 256960 4088000
net.ipv4.tcp_wmem = 8760 256960 4088000
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 2048
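One cautious way to roll these in, sketched assuming root access; the backup filename is only a suggestion:

cp /etc/sysctl.conf /etc/sysctl.conf.bak   # keep a backup before editing
vim /etc/sysctl.conf                       # append the settings above
/sbin/sysctl -p                            # load them into the running kernel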
Reposted from: http://www.cnblogs.com/fczjuever/archive/2013/04/17/3026694.html
Rules for the sizes of TCP and UDP socket send and receive buffers under Linux
1. Default values of the TCP send and receive buffers
# cat /proc/sys/net/ipv4/tcp_rmem
4096 87380 4161536
87380: the default value of the TCP receive buffer
# cat /proc/sys/net/ipv4/tcp_wmem
4096 16384 4161536
16384: the default value of the TCP send buffer
2. Maximum values of the TCP or UDP send and receive buffers
# cat /proc/sys/net/core/rmem_max
131071
131071: the maximum value that can be set for a TCP or UDP receive buffer; the kernel stores double the value you set.
That means if you call setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcv_size, sizeof(rcv_size)) with rcv_size greater than 131071, then
getsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcv_size, &optlen) will return a value equal to 131071 * 2 = 262142.
# cat /proc/sys/net/core/wmem_max
131071
131071: the maximum value that can be set for a TCP or UDP send buffer; the kernel likewise stores double the value you set.
The same principle as above applies.
3. Default values of the UDP send and receive buffers
# cat /proc/sys/net/core/rmem_default
111616
111616: the default value of the UDP receive buffer
# cat /proc/sys/net/core/wmem_default
111616
111616: the default value of the UDP send buffer
4. Minimum values of the TCP or UDP send and receive buffers
The minimum value of the TCP or UDP receive buffer is determined by a kernel macro;
the minimum value of the TCP or UDP send buffer is 2048 bytes, also determined by a kernel macro.
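To review all of the buffer-related keys from this section at once, sysctl accepts a list of variable names. A minimal sketch:

# print each key = value pair in one shot
sysctl net.core.rmem_default net.core.rmem_max \
       net.core.wmem_default net.core.wmem_max \
       net.ipv4.tcp_rmem net.ipv4.tcp_wmem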
Optimize kernel TCP parameters under Linux to increase server load capacity - CSDN blog
http://blog.csdn.net/opensure/article/details/46714793
Linux kernel optimization and kernel parameter details - CSDN blog
http://blog.csdn.net/killapper/article/details/46318529