Linux kernel parameter tuning for network optimization


To optimize TCP networking on a Linux system, tune the kernel settings: open /etc/sysctl.conf (# vi /etc/sysctl.conf) and add the lines below.
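These settings take effect after a reload; a minimal sketch of the standard workflow (sysctl and the /proc/sys tree exist on any modern distribution):

# Reload every setting from /etc/sysctl.conf into the running kernel
sysctl -p

# Or set a single parameter at runtime without editing the file
sysctl -w net.ipv4.tcp_syncookies=1

# Every parameter is also a file under /proc/sys; dots become slashes
cat /proc/sys/net/ipv4/tcp_syncookies

The individual settings and their meanings follow.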

net.ipv4.tcp_syncookies = 1

Enables SYN cookies: when the SYN wait queue overflows, cookies are used to fend off a modest SYN flood attack. The default is 0 (disabled).
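To check whether cookies are actually being triggered under load, look at the TCP extension counters (a minimal sketch; the exact counter wording varies by kernel version):

# SYN-cookie activity shows up in the protocol statistics
netstat -s | grep -i cookie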
net.ipv4.tcp_tw_reuse = 1

Enables reuse, allowing TIME-WAIT sockets to be used for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1

Enables fast recycling of TIME-WAIT sockets on a TCP connection. The default is 0 (disabled).
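Both toggles can be inspected at runtime (a minimal sketch; note that tcp_tw_recycle was removed from the kernel entirely in Linux 4.12, so it no longer exists on newer systems):

# Print the current values; 0 means disabled
sysctl net.ipv4.tcp_tw_reuse
sysctl net.ipv4.tcp_tw_recycle   # absent on Linux 4.12 and later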
net.ipv4.tcp_fin_timeout =

Determines how long a socket closed by the local end remains in the FIN-WAIT-2 state. The kernel default is 60 seconds.
net.ipv4.tcp_keepalive_time =

Sets how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours.

net.ipv4.tcp_keepalive_intvl = 30

net.ipv4.tcp_keepalive_probes = 3

Together, these two lines mean that if three probes, sent 30 seconds apart, all go unanswered, the kernel gives up on the connection completely.
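As a worked example, a dead but idle peer is detected roughly tcp_keepalive_time + tcp_keepalive_probes x tcp_keepalive_intvl seconds after the last data. A minimal sketch that computes this from the live values:

# Worst-case dead-peer detection time: time + probes * intvl
t=$(cat /proc/sys/net/ipv4/tcp_keepalive_time)
p=$(cat /proc/sys/net/ipv4/tcp_keepalive_probes)
i=$(cat /proc/sys/net/ipv4/tcp_keepalive_intvl)
echo "dead peer detected after ~$((t + p * i)) seconds of idleness"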

The original default values, which are obviously too large, are:

tcp_keepalive_time = 7200 seconds (2 hours)
tcp_keepalive_probes = 9
tcp_keepalive_intvl = 75 seconds


net.ipv4.ip_local_port_range = 1024 65000

Specifies the range of local ports used for outbound connections. The default range of 32768 to 61000 is small; this widens it to 1024 to 65000.
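A quick sketch to read the configured range and count the ephemeral ports it provides:

# The file holds two numbers: the low and high end of the range
read low high < /proc/sys/net/ipv4/ip_local_port_range
echo "ephemeral ports available: $((high - low + 1))"   # 63977 for 1024-65000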
net.ipv4.tcp_max_syn_backlog = 8192

Sets the length of the SYN queue. The default is 1024; enlarging the queue to 8192 lets the system hold more connections that are still waiting to complete the handshake.

net.core.netdev_max_backlog =

Sets the maximum number of packets allowed to queue on the input side when an interface receives them faster than the kernel can process them. The default is 300; it should be raised.
net.ipv4.tcp_max_tw_buckets = 5000
Sets the maximum number of TIME-WAIT sockets the system keeps at any one time. If this number is exceeded, TIME-WAIT sockets are destroyed immediately and a warning is printed. The default is 180000; here it is reduced to 5000. For servers such as Apache and Nginx, the parameters in the preceding lines already do a good job of cutting down the number of TIME-WAIT sockets, but for Squid they have little effect. This parameter caps the count and keeps a Squid server from being dragged down by a flood of TIME-WAIT sockets.
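To see how many TIME-WAIT sockets a server is actually holding, iproute2's ss can filter by state (a minimal sketch; the -H flag, which suppresses the header line, needs a reasonably recent iproute2):

# Count sockets currently in the TIME-WAIT state
ss -H -tan state time-wait | wc -l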


The following kernel settings are also worth optimizing:

/proc/sys/net/core/wmem_max: the maximum socket write buffer; a reference optimized value is 873200.

/proc/sys/net/core/rmem_max: the maximum socket read buffer; a reference optimized value is 873200.

/proc/sys/net/ipv4/tcp_wmem: the TCP write buffer; reference optimized values are 8192 436600 873200.

/proc/sys/net/ipv4/tcp_rmem: the TCP read buffer; reference optimized values are 32768 436600 873200.
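When setting the three-value parameters at runtime, quote the value so the shell passes it as a single argument (a minimal sketch using the reference values above):

# min, default, and max buffer sizes, in bytes
sysctl -w net.ipv4.tcp_wmem="8192 436600 873200"
sysctl -w net.ipv4.tcp_rmem="32768 436600 873200"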

/proc/sys/net/ipv4/tcp_mem
This also takes three values, which mean:
net.ipv4.tcp_mem[0]: below this value, TCP is under no memory pressure.
net.ipv4.tcp_mem[1]: at this value, TCP enters the memory-pressure phase.
net.ipv4.tcp_mem[2]: above this value, TCP refuses to allocate new sockets.
The units here are pages, not bytes.
Reference optimized values are: 786432 1048576 1572864
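Because the units are pages, converting a threshold to bytes requires the page size (a quick sketch; with the usual 4 KB pages, 786432 pages works out to 3 GB):

# Convert the first tcp_mem threshold from pages to bytes
page=$(getconf PAGESIZE)
echo "no-pressure threshold: $((786432 * page)) bytes"   # 3221225472 with 4 KB pages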



/proc/sys/net/core/somaxconn
The default backlog for listen(), i.e. the maximum number of pending connection requests. The default is 128; for busy servers, increasing this value helps network performance. It can be raised to 256.

/proc/sys/net/core/optmem_max
The maximum amount of option (ancillary) buffer memory allowed per socket; the default is 10 KB.



/proc/sys/net/ipv4/tcp_retries2
The number of times TCP retransmits an unacknowledged segment before abandoning the connection. The default of 15 means it retries 15 times before giving up completely; reducing it to 5 releases kernel resources earlier.
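In wall-clock terms, a back-of-the-envelope sketch (assuming the kernel's hypothetical 200 ms initial retransmission timeout, doubling per retry and capped at 120 s) puts the default at roughly 924.6 seconds before the connection is abandoned, versus about 12.6 seconds with 5 retries:

# Approximate give-up time for a given tcp_retries2 value
awk -v retries=15 'BEGIN {
    rto = 0.2; total = 0
    for (i = 0; i <= retries; i++) {   # original send plus each retry
        total += rto
        rto = (rto * 2 > 120) ? 120 : rto * 2
    }
    printf "~%.1f seconds\n", total    # ~924.6 s at 15, ~12.6 s at 5
}'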


There is one more important parameter: net.core.somaxconn sets the upper bound on the backlog of a listening socket (listen()), that is, the socket's listen queue. A connection request that has not yet been accepted goes into the backlog; once the server processes a request, it leaves the queue. When the server handles requests so slowly that the listen queue fills up, new requests are rejected.

The default value of this parameter is 128, so a burst of requests can cause connections to time out or trigger retransmissions. For example, Nginx's NGX_LISTEN_BACKLOG defaults to 511, but if this parameter is left unoptimized at 128, the effective backlog is capped at 128; that clearly limits Nginx's backlog and needs tuning:

net.core.somaxconn = 32768
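After raising it and restarting the service (the backlog is fixed when listen() is called), the effective value can be verified with ss: for a TCP socket in the LISTEN state, the Send-Q column shows the configured backlog and Recv-Q the number of connections currently queued (a minimal sketch):

# For LISTEN sockets, Send-Q is the backlog limit, Recv-Q the queue depth
ss -ltn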
