A brief introduction to the effect of TCP protocol settings on HTTP performance, and how to optimize them

Source: Internet
Author: User
Tags: rfc, sendfile, nginx, server

Once a site's server handles a certain level of concurrency, it is worth considering how the operating system's TCP settings affect the HTTP server's performance.

TCP-related delays mainly include:

1. the TCP connection-establishment handshake;

2. TCP slow-start congestion control;

3. data aggregation by the Nagle algorithm;

4. the TCP delayed-acknowledgment algorithm (waiting to piggyback the ACK);

5. TIME_WAIT delays and port exhaustion.

For the delays above, the corresponding optimizations are:

1. Use HTTP persistent connections: HTTP/1.0 via the Connection: keep-alive header, while HTTP/1.1 uses persistent connections by default;

2. Adjust or disable the delayed-ACK algorithm (HTTP's bimodal request-response pattern reduces the chance of piggybacking an acknowledgment);

3. Set TCP_NODELAY to disable the Nagle algorithm and improve performance;

4. Lower the TIME_WAIT timeout.
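The first optimization above can be sketched in a few lines. This is a minimal, self-contained demonstration (not from the original article) of HTTP/1.1 persistent connections: two requests travel over one TCP connection, so only one handshake and one slow-start ramp are paid. The local test server and handler class are illustrative assumptions.

```python
import http.client
import http.server
import socketserver
import threading

class KeepAliveHandler(http.server.SimpleHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps the connection open by default
    def log_message(self, fmt, *args):
        pass                        # silence request logging

# Start a throwaway local server on an ephemeral port.
srv = socketserver.TCPServer(("127.0.0.1", 0), KeepAliveHandler)
port = srv.server_address[1]
threading.Thread(target=srv.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/")            # first request opens the TCP connection
r1 = conn.getresponse(); r1.read()
sock_after_first = conn.sock        # remember the underlying socket object
conn.request("GET", "/")            # second request reuses the same connection
r2 = conn.getresponse(); r2.read()
reused = conn.sock is sock_after_first

conn.close()
srv.shutdown()
```

If the server had answered with Connection: close (HTTP/1.0 behavior), the second request would have required a fresh socket and a new three-way handshake.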

  

Based on the above analysis, we can adjust the TCP parameters of Linux: open /etc/sysctl.conf with vi, add the following lines, then run sysctl -p to apply them:

net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30

where:

net.ipv4.ip_local_port_range

When the server handles many connections, the system's available ports are quickly exhausted. Raising net.ipv4.ip_local_port_range widens the range of usable local ports.

  

net.ipv4.tcp_tw_reuse

When the server churns through a large number of short-lived TCP connections, many of them pile up in the TIME_WAIT state: the connection itself is closed, but its resources have not yet been released. Setting net.ipv4.tcp_tw_reuse to 1 lets the kernel reuse such connections when it is safe to do so, which is much cheaper than establishing a brand-new connection.

net.ipv4.tcp_fin_timeout

This is how long a closed connection lingers in the FIN-WAIT-2 state before the kernel reclaims it; lowering it speeds up reclamation.

If you are using Nginx, you can also set the following in nginx.conf:

http {
    sendfile on;
    tcp_nodelay on;
    keepalive_timeout 60;
    ...
}

The sendfile directive on the first line improves the efficiency of serving static resources from Nginx. sendfile is a system call that completes the file transfer entirely in kernel space, avoiding the usual read-then-write copy through user space and its context-switch overhead.
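The kernel-space transfer described above can be seen directly from Python's binding to the same system call. This is a hedged sketch (Linux-assumed; the helper name send_whole_file is illustrative, not an nginx API): os.sendfile moves a file's bytes from the page cache to a socket without ever copying them into a user-space buffer.

```python
import os
import socket
import tempfile

def send_whole_file(sock, path):
    """Send every byte of `path` over `sock` with the sendfile(2) system call."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:
            # offset is tracked by us; the kernel does the copy internally.
            sent += os.sendfile(sock.fileno(), f.fileno(), sent, size - sent)
    return size

# A loopback TCP pair stands in for a real client connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.create_connection(srv.getsockname())
conn, _ = srv.accept()

with tempfile.NamedTemporaryFile() as tmp:
    tmp.write(b"hello, static file")
    tmp.flush()
    n = send_whole_file(cli, tmp.name)
cli.close()

chunks = []
while True:
    d = conn.recv(4096)
    if not d:
        break
    chunks.append(d)
payload = b"".join(chunks)
conn.close()
srv.close()
```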

tcp_nodelay is likewise a socket option: when enabled it disables the Nagle algorithm and sends data as soon as possible, avoiding delays of up to 200 ms from the Nagle/delayed-ACK interaction. Nginx enables tcp_nodelay only for TCP connections in the keep-alive state.
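At the socket level, the directive corresponds to a single setsockopt call; a minimal sketch:

```python
import socket

# Disable Nagle's algorithm on one socket so small writes are sent
# immediately instead of being coalesced while waiting for an ACK.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```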

The last line specifies how long the server keeps each idle TCP connection open. Nginx's default is 75 seconds, and some browsers hold a connection for at most 60 seconds, so I set it uniformly to 60.

We can view summary TCP connection statistics with the ss -s command, for example:

# ss -s
Total: 31985 (kernel 0)
TCP:   7032 (estab -, closed 6991, orphaned 4, synrecv 0, timewait 2336/0), ports 0
...

All TCP/IP parameters live under the /proc/sys/net directory (note that changes made there are temporary: any modification is lost when the system restarts). Some important parameters:
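The mapping between sysctl keys and /proc paths is mechanical, so a runtime value can simply be read as text. A small sketch (assumes Linux; the helper name read_sysctl is ours, not a standard API):

```python
# sysctl keys map to files under /proc/sys: dots become slashes,
# e.g. net.ipv4.tcp_fin_timeout -> /proc/sys/net/ipv4/tcp_fin_timeout.
def read_sysctl(key):
    path = "/proc/sys/" + key.replace(".", "/")
    with open(path) as f:
        return f.read().split()

fin_timeout = read_sysctl("net.ipv4.tcp_fin_timeout")
```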

Each entry below gives the parameter path, its description, the default value, and a suggested optimized value.

/proc/sys/net/core/rmem_default
The default TCP receive window size, in bytes. Default: 229376; optimized: 256960.

/proc/sys/net/core/rmem_max
The maximum TCP receive window size, in bytes. Default: 131071; optimized: 513920.

/proc/sys/net/core/wmem_default
The default TCP send window size, in bytes. Default: 229376; optimized: 256960.

/proc/sys/net/core/wmem_max
The maximum TCP send window size, in bytes. Default: 131071; optimized: 513920.

/proc/sys/net/core/netdev_max_backlog
The maximum number of packets that may be queued per network interface when packets arrive faster than the kernel can process them. Default: 1000; optimized: 2000.

/proc/sys/net/core/somaxconn
The maximum listen-queue length for any port; a global limit. Default: 128; optimized: 2048.

/proc/sys/net/core/optmem_max
The maximum ancillary buffer size allowed per socket. Default: 20480; optimized: 81920.
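To see how somaxconn constrains an application, here is a minimal sketch: whatever backlog a server passes to listen(), the kernel silently caps it at this global limit (paths assume Linux's /proc interface).

```python
import socket

# Read the current global cap on listen backlogs.
with open("/proc/sys/net/core/somaxconn") as f:
    limit = int(f.read())

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(2048)   # effective queue length = min(2048, limit)
srv.close()
```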

/proc/sys/net/ipv4/tcp_mem
Governs how the TCP stack reacts to overall memory usage; each value is in memory pages (usually 4 KB). The first value is the low-water mark for memory usage; the second is the point at which memory-pressure mode begins to throttle buffer use; the third is a hard limit above which packets may be dropped to reduce memory use. For larger BDPs these values can be increased (note the units are pages, not bytes). Default: 94011 125351 188022; optimized: 131072 262144 524288.

/proc/sys/net/ipv4/tcp_rmem
The auto-tuning range for per-socket receive buffers. The first value is the minimum number of bytes allocated for a socket's receive buffer; the second is the default (for TCP it overrides rmem_default), up to which the buffer can grow when the system load is light; the third is the maximum receive-buffer size used by auto-tuning. Default: 4096 87380 4011232; optimized: 8760 256960 4088000.

/proc/sys/net/ipv4/tcp_wmem
The auto-tuning range for per-socket send buffers. The first value is the minimum number of bytes allocated for a socket's send buffer; the second is the default (for TCP it overrides wmem_default), up to which the buffer can grow when the system load is light; the third is the maximum send-buffer size used by auto-tuning. Default: 4096 16384 4011232; optimized: 8760 256960 4088000.
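The per-socket counterpart of these tunables is the SO_RCVBUF/SO_SNDBUF socket option. A sketch (on Linux the kernel typically doubles the requested size for bookkeeping and caps it at net.core.rmem_max; setting SO_RCVBUF explicitly also disables receive-buffer auto-tuning for that socket):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a receive buffer of 256960 bytes (the "optimized" rmem_default above).
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256960)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)  # what the kernel actually gave us
s.close()
```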

/proc/sys/net/ipv4/tcp_keepalive_time
How long (in seconds) a connection must be idle before TCP sends keepalive probe messages to confirm it is still valid. Default: 7200; optimized: 1800.

/proc/sys/net/ipv4/tcp_keepalive_intvl
The interval (in seconds) between retransmitted keepalive probes when a probe goes unanswered. Default: 75; optimized: 30.

/proc/sys/net/ipv4/tcp_keepalive_probes
The maximum number of keepalive probes sent before the TCP connection is declared dead. Default: 9; optimized: 3.
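These system-wide keepalive defaults can also be overridden per socket; a sketch using the Linux-specific option names (TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # turn probing on
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1800)  # idle seconds before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)   # seconds between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # failed probes before giving up
idle = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE)
s.close()
```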

/proc/sys/net/ipv4/tcp_sack
Enables selective acknowledgment (1 = enabled), which improves performance when packets arrive out of order by letting the sender retransmit only the missing segments. It should be enabled, especially for WAN traffic, though it slightly increases CPU usage. Default: 1; optimized: 1.

/proc/sys/net/ipv4/tcp_fack
Enables forward acknowledgment, which works together with selective acknowledgment (SACK) to reduce congestion; it should also be enabled. Default: 1; optimized: 1.

/proc/sys/net/ipv4/tcp_timestamps
Enables TCP timestamps (adding 12 bytes to the TCP header), which allow RTT to be measured more precisely than inferring it from retransmission timeouts (see RFC 1323). This option should be enabled for better performance. Default: 1; optimized: 1.

/proc/sys/net/ipv4/tcp_window_scaling
Enables the window scaling defined by RFC 1323, required to support TCP windows larger than 64 KB (up to 1 GB). It must be enabled (1 = enabled) and takes effect only when both ends of the TCP connection enable it. Default: 1; optimized: 1.

/proc/sys/net/ipv4/tcp_syncookies
Whether TCP SYN cookies are enabled (the kernel must be compiled with CONFIG_SYN_COOKIES). SYN cookies protect a socket from overload when too many connection attempts arrive. Default: 1; optimized: 1.

/proc/sys/net/ipv4/tcp_tw_reuse
Whether sockets in the TIME_WAIT state may be reused for new TCP connections. Default: 0; optimized: 1.

/proc/sys/net/ipv4/tcp_tw_recycle
Enables faster recycling of TIME_WAIT sockets. (Use with caution: it is known to break clients behind NAT, and the option was removed entirely in Linux 4.12.) Default: 0; optimized: 1.

/proc/sys/net/ipv4/tcp_fin_timeout
How long (in seconds) a socket closed on this side stays in the FIN-WAIT-2 state, covering the case where the peer has disconnected, never finished closing, or its process died unexpectedly. Default: 60; optimized: 30.

/proc/sys/net/ipv4/ip_local_port_range
The range of local port numbers that TCP/UDP may use. Default: 32768 61000; optimized: 1024 65000.

/proc/sys/net/ipv4/tcp_max_syn_backlog
The maximum number of half-open connection requests (SYN received but not yet acknowledged by the peer) that can be queued. If the server is frequently overloaded, try increasing this number. Default: 2048; optimized: 2048.

/proc/sys/net/ipv4/tcp_low_latency
When enabled, the TCP/IP stack favors low latency over high throughput; for a throughput-oriented server this option should remain disabled. Default: 0.

/proc/sys/net/ipv4/tcp_westwood
Enables the Westwood sender-side congestion control algorithm, which maintains an estimate of throughput and tries to optimize overall bandwidth utilization; it can be useful for WAN traffic. Default: 0.

/proc/sys/net/ipv4/tcp_bic
Enables BIC (binary increase congestion control) for fast, long-distance networks, making better use of links operating at gigabit speeds; useful for WAN traffic. Default: 1.

--------------------------------------------------------------------------------


