Linux kernel settings to optimize TCP networking. Edit /etc/sysctl.conf (# vi /etc/sysctl.conf) and add the following:
net.ipv4.tcp_syncookies = 1
Enables SYN cookies. When the SYN wait queue overflows, cookies are used, which protects against small-scale SYN flood attacks. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1
Enables reuse, allowing sockets in the TIME-WAIT state to be reused for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1
Enables fast recycling of TIME-WAIT sockets. The default is 0 (disabled). Note that this option is known to break connections from clients behind NAT and was removed entirely in Linux 4.12, so skip it on modern kernels.
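Before touching these knobs, it is worth checking whether TIME-WAIT sockets are actually piling up. A common one-liner counts connections grouped by state (netstat is assumed to be installed; ss -tan gives the same data on newer systems):

# Count current TCP connections by state (ESTABLISHED, TIME_WAIT, ...)
netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'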
net.ipv4.tcp_fin_timeout = 30
Determines how long a socket remains in the FIN-WAIT-2 state after the local side has closed the connection.
net.ipv4.tcp_keepalive_time = 1200
The interval, in seconds, at which TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; here it is lowered to 20 minutes.
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
Together, these two lines mean that if three probes sent 30 seconds apart all go unanswered, the kernel gives up on the connection entirely.
The original default values are obviously too large:
tcp_keepalive_time = 7200 seconds (2 hours)
tcp_keepalive_probes = 9
tcp_keepalive_intvl = 75 seconds
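As a sanity check on the numbers above: a dead idle connection is declared gone after tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl seconds, i.e. 1200 + 3 * 30 = 1290 seconds (about 21.5 minutes) with the tuned values, versus 7200 + 9 * 75 = 7875 seconds (over 2 hours) with the defaults. The live values can be read back from /proc:

# Read the current keepalive settings from the running kernel
cat /proc/sys/net/ipv4/tcp_keepalive_time
cat /proc/sys/net/ipv4/tcp_keepalive_intvl
cat /proc/sys/net/ipv4/tcp_keepalive_probes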
net.ipv4.ip_local_port_range = 1024 65000
The range of local ports used for outbound connections. The default range, 32768 to 61000, is small (about 28,000 ports); widening it to 1024 to 65000 makes roughly 64,000 ports available.
net.ipv4.tcp_max_syn_backlog = 8192
The length of the SYN queue. The default is 1024; a larger queue of 8192 can hold more half-open connections waiting to complete the handshake.
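Whether the SYN queue is really overflowing can be checked from the kernel's drop counters before and after the change (the exact wording of the counters varies between kernel versions):

# Look for "SYNs to LISTEN sockets dropped" and "listen queue ... overflowed"
netstat -s | grep -i listen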
net.core.netdev_max_backlog = 1000
The maximum number of packets allowed to queue on the input side when a network interface receives packets faster than the kernel can process them. The default is 300; it is raised here.
net.ipv4.tcp_max_tw_buckets = 5000
The maximum number of TIME-WAIT sockets the system keeps at any one time. If this number is exceeded, TIME-WAIT sockets are cleared immediately and a warning is printed. The default is 180000, lowered here to 5000. For servers such as Apache and Nginx, the parameters in the preceding lines already do a good job of reducing the number of TIME-WAIT sockets, but for Squid the effect is limited. This parameter caps the number of TIME-WAIT sockets outright and keeps a Squid server from being dragged down by a flood of them.
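Once the block above has been added to /etc/sysctl.conf, it can be applied and verified without a reboot:

# Reload /etc/sysctl.conf so the settings above take effect
sysctl -p
# Read back a single value to confirm it was applied
sysctl net.ipv4.tcp_max_tw_buckets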
For further optimization, the following kernel settings can also be tuned:
/proc/sys/net/core/wmem_max: maximum socket write buffer; reference optimized value: 873200
/proc/sys/net/core/rmem_max: maximum socket read buffer; reference optimized value: 873200
/proc/sys/net/ipv4/tcp_wmem: TCP write buffer (min, default, max); reference optimized values: 8192 436600 873200
/proc/sys/net/ipv4/tcp_rmem: TCP read buffer (min, default, max); reference optimized values: 32768 436600 873200
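A minimal sketch of applying these buffer values at runtime through /proc (run as root; the values are the reference ones above, and the changes are lost on reboot unless also written to /etc/sysctl.conf):

# Raise the global socket buffer ceilings
echo 873200 > /proc/sys/net/core/wmem_max
echo 873200 > /proc/sys/net/core/rmem_max
# Set min/default/max for the TCP send and receive buffers
echo "8192 436600 873200" > /proc/sys/net/ipv4/tcp_wmem
echo "32768 436600 873200" > /proc/sys/net/ipv4/tcp_rmem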
/proc/sys/net/ipv4/tcp_mem
It also takes 3 values, meaning:
net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.
net.ipv4.tcp_mem[1]: at this value, TCP enters the memory-pressure phase.
net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate new sockets.
The above memory units are pages, not bytes.
Reference optimized values: 786432 1048576 1572864
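Because the units are pages, the byte figures depend on the page size, which is typically 4 KB on x86:

# Print the kernel page size (usually 4096 bytes)
getconf PAGESIZE
# With 4 KB pages, 786432 pages * 4096 bytes = 3 GB before any memory pressure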
/proc/sys/net/core/somaxconn
Caps the backlog argument of listen(), i.e. the maximum number of pending connection requests. The default is 128. For busy servers, increasing this value helps network performance. It can be adjusted to 256.
/proc/sys/net/core/optmem_max
The maximum ancillary buffer size allowed per socket; the default is 10 KB.
/proc/sys/net/ipv4/tcp_retries2
The number of times TCP retransmits an unacknowledged data packet before giving up on the connection. The default is 15, which corresponds to roughly 13 to 30 minutes of retries; reducing it to 5 releases kernel resources earlier.
There is one more important parameter: net.core.somaxconn, the cap on the socket listen() backlog, i.e. the socket's listening queue. A request that has not yet been accepted or established sits in this backlog. The socket server processes requests from the backlog, and processed requests leave the listening queue. When the server processes requests so slowly that the queue fills up, new requests are rejected.
The default value of this parameter is 128. A burst of requests can then cause connections to time out or trigger retransmissions. For example, nginx defines NGX_LISTEN_BACKLOG with a default of 511, but because this kernel parameter is left at 128, the effective backlog is capped at 128, which clearly limits nginx's backlog and needs to be optimized:
net.core.somaxconn = 32768
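For the larger backlog to take effect end to end, the application must request it as well; in nginx that is the backlog parameter of the listen directive (the port and value here are illustrative):

# In nginx.conf: ask for a listen backlog matching the kernel cap
listen 80 backlog=32768;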