How TCP works: flow control, the sliding window, the congestion window, cumulative ACKs, and more

Document directory
  • 1. Establishing a connection
  • 2. Ending the connection
  • 3. Maximum segment size (MSS)
  • 4. TCP state transition diagram
  • 5. RST, simultaneous open and simultaneous close
  • 6. TCP server design
  • 1. TCP interactive data flow
  • 2. TCP bulk data flow

From http://hi.baidu.com/gookings/blog/item/6c21292a0fe7103c5243c155.html

 

 

TCP and UDP sit on the same layer, the transport layer, but the biggest difference between them is that TCP provides a reliable, connection-oriented data transfer service: two hosts communicating over TCP first go through a "call setup" phase, transfer data only once both sides are ready, and then tear the call down. TCP is therefore far more reliable than UDP. UDP simply sends its data regardless of whether the other side is listening, and even when a UDP datagram cannot be delivered, no ICMP error is reported back to the sending application, a point that has been made several times before.

TCP's basic mechanisms for ensuring reliability can be summarized as follows:

  • Application data is broken into chunks that TCP considers the best size to send. This is completely different from UDP, where the datagram produced by the application keeps its original length. The unit of data that TCP passes to IP is called a segment (see Figure 1-7). Section 18.4 shows how TCP decides the segment size.
  • When TCP sends a segment, it starts a timer and waits for the other end to acknowledge the segment. If an acknowledgment does not arrive in time, the segment is retransmitted. Chapter 21 covers TCP's adaptive timeout and retransmission strategy.
  • When TCP receives data from the other end of the connection, it sends an acknowledgment. The acknowledgment is not sent immediately but is usually delayed by a fraction of a second, as discussed in Section 19.3.
  • TCP maintains a checksum over its header and data. This is an end-to-end checksum intended to detect any modification of the data in transit. If a segment arrives with an invalid checksum, TCP discards it and does not acknowledge it, so the sender will time out and retransmit.
  • Since TCP segments are carried in IP datagrams, and IP datagrams can arrive out of order, TCP segments can also arrive out of order. If necessary, TCP re-sequences the received data and hands it to the application layer in the correct order.
  • TCP also provides flow control. Each side of a TCP connection has a finite buffer, and the receiving end only lets the other end send as much data as its buffer can hold. This prevents a fast host from overflowing the buffers of a slower one.

As this list shows, the core of TCP's reliability is timeout and retransmission, which makes sense: although ICMP messages could in principle be used to signal problems, they are not dependable. The only truly reliable approach is to keep resending a segment until the other side acknowledges it.
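To make the idea concrete, here is a toy stop-and-wait retransmission loop, not TCP's actual adaptive algorithm; send_segment() and wait_for_ack() are hypothetical stand-ins for real network I/O, and the fixed one-second timeout is only illustrative.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for real network I/O. */
static void send_segment(int seq) { printf("send segment %d\n", seq); }
static bool wait_for_ack(int seq, int timeout_ms)
{
    (void)seq; (void)timeout_ms;
    return false;                    /* pretend the ACK never arrives */
}

/* Stop-and-wait with retransmission: keep resending the same segment
 * until it is acknowledged or we give up.  Real TCP uses an adaptive
 * timer based on measured round-trip times; a fixed timeout is used
 * here only for brevity. */
static bool send_reliably(int seq, int max_retries)
{
    for (int attempt = 0; attempt <= max_retries; attempt++) {
        send_segment(seq);
        if (wait_for_ack(seq, 1000))
            return true;             /* acknowledged, done */
        fprintf(stderr, "timeout, retransmitting segment %d\n", seq);
    }
    return false;                    /* receiver never confirmed */
}

int main(void)
{
    if (!send_reliably(1, 3))
        fprintf(stderr, "giving up on segment 1\n");
    return 0;
}
```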

Like the UDP header, the TCP header carries a source port number and a destination port number, but it contains considerably more information than UDP's. As you can see, it provides everything needed for sending data and acknowledging it; pages 171-173 describe the header fields in detail. The process of sending data over TCP can be pictured as follows.

  • The two sides establish a connection.
  • The sender transmits a TCP segment to the receiver and waits for the peer to acknowledge it. If no acknowledgment arrives, the segment is retransmitted; if it does, the next segment is sent.
  • The receiver waits for the sender's segments. Each segment that arrives intact and passes the checksum is answered with an ACK, and the receiver then waits for the next segment, until a FIN (end of data) arrives.
  • The connection is closed.

For each TCP connection, the system may create a new process (or at the very least a thread) to handle the data transfer.

 

TCP is a connection-oriented protocol, so both parties must establish a connection before sending data. This is completely different from the protocols discussed earlier, which simply send data and mostly do not care whether it arrives, UDP in particular. From a programming point of view UDP is also much simpler, since the application does not have to worry about how its data is split up.

Here, a telnet login and logout is used to illustrate how a TCP connection is established and terminated. As we will see, establishing a connection can be summed up as a three-way handshake, and tearing it down as a four-way handshake.

1. Establishing a connection

To establish a connection, the client first asks the server to open a port by sending a TCP segment with the SYN flag set to 1. The server replies with a SYN+ACK segment to tell the client the request was received, and the client then sends one more acknowledgment to confirm the server's acknowledgment. At that point the connection is established. This is the three-way handshake: for both sides to be ready, three segments must be exchanged, and three are enough.
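From the application's point of view the whole three-way handshake happens inside a single connect() call. A minimal client sketch (the address 127.0.0.1 and port 8080 are just placeholders):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(8080);               /* example port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    /* connect() sends the SYN, waits for the SYN+ACK and sends the
     * final ACK; when it returns 0 the connection is ESTABLISHED. */
    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }
    printf("connected\n");
    close(fd);
    return 0;
}
```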

It is easy to see that, combined with the timeout and retransmission mechanism, TCP can ensure that a packet is eventually delivered to its destination.

2. Ending the connection

TCP has a special notion called half-close. Because a TCP connection is full duplex (data can flow in both directions at the same time), each direction must be shut down separately when the connection is closed. The client sends the server a segment with the FIN flag set to 1; the server answers with an ACK and, when it is ready, sends a FIN of its own; when the client acknowledges that FIN (the fourth segment of the four-way handshake), the connection is finished.
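The half-close maps onto the shutdown() socket call: closing only our sending direction transmits a FIN while still allowing us to read whatever the peer has left to send. A sketch, assuming fd is an already connected TCP socket:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Half-close: we are done sending, but we still want to read whatever
 * the peer has left to send.  fd is assumed to be a connected socket. */
static void half_close_and_drain(int fd)
{
    if (shutdown(fd, SHUT_WR) < 0) {    /* sends our FIN */
        perror("shutdown");
        return;
    }

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        ;                               /* keep receiving until the peer's FIN */

    close(fd);                          /* completes the four-way close */
}
```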

3. Maximum segment size (MSS)

When the connection is established, each side announces the maximum segment size (MSS) it is willing to receive. The MSS carried in the SYN is normally the MTU minus the fixed IP and TCP header lengths, which on Ethernet typically works out to 1460 bytes. For non-local destinations the MSS often defaults to 536 bytes, and it can be smaller still if an intermediate network has a smaller MTU.
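On Linux and most BSD-derived systems the MSS in effect on a connected socket can be read back with the TCP_MAXSEG option; a small sketch (the reported value depends on the platform and the path):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

/* Print the maximum segment size in effect on a connected TCP socket.
 * Before the connection is established only a default is reported, so
 * call this after connect() or accept(). */
static void print_mss(int fd)
{
    int mss = 0;
    socklen_t len = sizeof(mss);
    if (getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, &len) == 0)
        printf("negotiated MSS: %d bytes\n", mss);  /* e.g. 1460 on Ethernet */
    else
        perror("getsockopt(TCP_MAXSEG)");
}
```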

4. TCP state transition diagram

Page 182 of the book gives the TCP state transition diagram. It looks complicated because it combines two things, the server's state transitions and the client's, and it becomes much clearer when viewed from one side at a time. "Server" and "client" here are not absolute: the side that actively opens the connection plays the client role, and the side that passively accepts it plays the server role.

4.1. Client state transitions

The client side normally moves through the following states:

CLOSED -> SYN_SENT -> ESTABLISHED -> FIN_WAIT_1 -> FIN_WAIT_2 -> TIME_WAIT -> CLOSED

This is the path a program takes under normal conditions. From the diagram in the book we can see that, during connection setup, the client enters the data-exchange (ESTABLISHED) state as soon as it receives the ACK of its SYN. The client here also closes the connection actively: after the application finishes, it passes through FIN_WAIT_1, FIN_WAIT_2 and the following states, and these transitions are exactly the four-way handshake described above.

4.2. Server state transitions

The server side normally moves through the following states:

CLOSED -> LISTEN -> SYN_RCVD -> ESTABLISHED -> CLOSE_WAIT -> LAST_ACK -> CLOSED

During connection setup, the server does not enter the data-exchange state until the third handshake segment arrives; when the connection is closed, it leaves that state after the second segment of the close (note: the second, not the fourth). After closing, it must still wait for the client's final ACK before it can return to the initial CLOSED state.

4.3. Other state transitions

The diagram in the book also shows a few other transitions, summarized below for both the server and the client:

 

  1. LISTEN -> SYN_SENT: straightforward; a server occasionally needs to open a connection actively.
  2. SYN_SENT -> SYN_RCVD: if a host in the SYN_SENT state receives a SYN (a simultaneous open), it sends a SYN+ACK, moves to SYN_RCVD, and waits there to enter ESTABLISHED.
  3. SYN_SENT -> CLOSED: if the connection attempt times out, the host returns to CLOSED.
  4. SYN_RCVD -> LISTEN: if an RST segment is received, the host falls back to LISTEN.
  5. SYN_RCVD -> FIN_WAIT_1: the connection can be closed directly from SYN_RCVD, jumping to FIN_WAIT_1 without ever entering ESTABLISHED.
4.4. The 2MSL wait state

The diagram includes a TIME_WAIT state, also called the 2MSL wait state. It means that after the side in FIN_WAIT_2 sends the final ACK it enters TIME_WAIT, a state that exists in case that last ACK of the four-way handshake is lost and has to be retransmitted (it is not a fifth handshake, just a safety net for the fourth). This state helps ensure that both sides can terminate cleanly, but it also creates a problem.

Because the socket (the IP address and port pair) is tied up for the 2MSL period, an application cannot reuse the same socket during that time. For client programs this is rarely an issue, but a server such as httpd always needs to bind the same well-known port, so restarting it within the 2MSL period fails with an "address in use" error. To deal with this, TCP defines a quiet time: even if the server manages to restart within 2MSL, it should still wait quietly for 2MSL before accepting new connections.
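In practice, servers usually sidestep the 2MSL restart problem with the SO_REUSEADDR socket option, which lets bind() succeed even while old connections on the port are still in TIME_WAIT. A sketch of a restart-friendly listener (the backlog of 128 is just an example):

```c
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a listening socket that can be rebound immediately after a
 * restart, even if old connections are still in the 2MSL (TIME_WAIT)
 * state.  Returns the listening fd, or -1 on error. */
static int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return -1; }

    int on = 1;                         /* allow bind during TIME_WAIT */
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 128) < 0) {          /* 128 is an example backlog */
        perror("bind/listen");
        close(fd);
        return -1;
    }
    return fd;
}
```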

4.5. The FIN_WAIT_2 state

This is the famous half-close state, reached after the first two segments of the four-way close. In this state the application can still receive data but can no longer send any. The side that closed first stays in FIN_WAIT_2 while the other side stays in CLOSE_WAIT, and the connection remains like this until the application on the other side decides to close.

5. RST, simultaneous open and simultaneous close

RST is another way a connection can be torn down. The application should be able to tell that a reset means the connection was aborted abnormally rather than closed normally. Simultaneous open and simultaneous close are two special TCP cases that occur only rarely.

6. TCP server design

We saw earlier, when discussing UDP server design, that a UDP server does not need any concurrency mechanism at all: a single queue of incoming datagrams is enough. TCP is different. A TCP server creates an independent process (or a lightweight process, i.e. a thread) for each connection so that each conversation is handled independently, which makes TCP servers inherently concurrent. In addition, TCP needs a queue of incoming connection requests (something a UDP server does not have) from which a handler process is created for each request; this is why every TCP server has a limit on the number of connections. Using the source host's IP address and port number, the server can easily tell the sessions apart and dispatch data to the right one.
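A minimal fork-per-connection server in the spirit of this design might look like the sketch below; the listening socket is assumed to come from something like the make_listener() sketch earlier (or equivalent socket/bind/listen code), and the per-connection work is reduced to a simple echo loop.

```c
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Echo everything the client sends until it closes its side. */
static void serve_client(int cfd)
{
    char buf[4096];
    ssize_t n;
    while ((n = read(cfd, buf, sizeof(buf))) > 0)
        write(cfd, buf, (size_t)n);
}

/* Fork-per-connection server loop.  lfd is a listening socket created
 * elsewhere (e.g. with the make_listener() sketch above). */
static void serve_forever(int lfd)
{
    signal(SIGCHLD, SIG_IGN);                /* let the kernel reap children */
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);   /* completes the handshake */
        if (cfd < 0)
            continue;
        pid_t pid = fork();
        if (pid == 0) {                      /* child: handle one conversation */
            close(lfd);
            serve_client(cfd);
            close(cfd);
            _exit(0);
        }
        close(cfd);                          /* parent keeps accepting */
    }
}
```

Alternatives include creating a thread per connection or multiplexing many connections with select()/poll(), but the forking design matches the description above most directly.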

Understanding the state transition diagram is the key to this chapter.

 

 

Many network protocols run on top of TCP, among them telnet, ssh, FTP and HTTP. By their data throughput these protocols fall roughly into two classes: (1) interactive data, such as telnet and ssh, which mostly exchange small amounts of traffic, for example a keystroke and the text echoed back; and (2) bulk data, such as ftp, which wants TCP to carry as much data as possible per segment and to maximize throughput and efficiency. TCP uses two different sets of strategies for these two kinds of traffic.

1. TCP interactive data flow

For highly interactive applications, TCP offers two strategies to improve efficiency and reduce network load: (1) piggybacked (delayed) ACKs, and (2) the Nagle algorithm (send as much data as possible in one segment). On a fast network, say a telnet session over the loopback interface, each keystroke that has to be echoed goes through the sequence: the client sends the keystroke data -> the server ACKs the keystroke -> the server sends the echo -> the client ACKs the echo. The traffic involved is 41 + 40 + 41 + 40 = 162 bytes. Over a wide-area network, this flood of tiny segments can put a considerable load on the network.

1.1. Piggybacked (delayed) ACKs

With this strategy, a host that receives a TCP segment from the remote end does not acknowledge it immediately; instead it waits a short while, and if it happens to have data of its own to send to the remote host within that time, the ACK "hitches a ride" on that data segment, merging what would have been two segments into one. The delay is usually 200 ms at most. Clearly this strategy makes much better use of each TCP segment.
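Delayed ACKs are normally what you want, but on Linux a latency-sensitive program can ask for immediate ACKs with the TCP_QUICKACK option (a Linux-specific, non-sticky flag); a sketch:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Ask the kernel to ACK immediately instead of delaying (Linux only).
 * The flag is not permanent: the kernel may clear it again, so code
 * that relies on it typically re-sets it after each receive. */
static void request_quick_acks(int fd)
{
#ifdef TCP_QUICKACK
    int on = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &on, sizeof(on));
#else
    (void)fd;                           /* not available on this platform */
#endif
}
```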

1.2. The Nagle algorithm

Anyone who has used a BBS over a slow link will recognize the effect: you type a line of text, nothing seems to happen for a while, and then the screen suddenly catches up all at once, as if the data had been saved up the whole time. That is the Nagle algorithm at work.

The Nagle algorithm says that while host A has sent a segment to host B and is still waiting for B's ACK, only one small outstanding segment is allowed; in the meantime the TCP output buffer keeps collecting the data the application writes and coalesces it into one larger segment. When B's ACK arrives, the accumulated data is sent in a single burst. This description is a little loose, but it is easy to picture, and it makes clear how the strategy reduces the load on the network.

When writing a socket program you can disable this algorithm with the TCP_NODELAY option. Whether to use the algorithm depends on the situation: for example, if the X Window protocol, which runs over TCP, handles mouse events with the Nagle algorithm still enabled, the perceived latency can be very high.
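Disabling the Nagle algorithm from a socket program is a single setsockopt() call with TCP_NODELAY; a sketch:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

/* Turn off the Nagle algorithm so small writes go out immediately,
 * trading extra small packets for lower latency (useful for X,
 * games and other interactive protocols). */
static void disable_nagle(int fd)
{
    int on = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) < 0)
        perror("setsockopt(TCP_NODELAY)");
}
```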

2. TCP bulk data flow

For a protocol like FTP, with high throughput requirements, we always want to push as much data as possible to the other host, even at the cost of a little extra latency. TCP provides a set of strategies for this as well, and the 16-bit window size field in the TCP header is at their core.

2.1. ACKs during bulk data transfer

Before looking at the sliding window, consider the acknowledgment policy. In principle, every segment the sender transmits should be answered with an ACK. In practice that is not what happens: the sender keeps streaming data to fill the receiver's buffer, and the receiver only needs to send an occasional ACK that acknowledges everything received so far. This is the cumulative property of ACKs, and it greatly reduces the load on both sender and receiver.
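A toy illustration of the cumulative property, not real TCP code: the receiver only tracks the next byte it expects, and one ACK carrying that number covers every segment that arrived before it.

```c
#include <stdio.h>

/* Toy model of cumulative acknowledgment: the receiver remembers only
 * the next byte it expects; a single ACK carrying that number confirms
 * all earlier data, so it need not ACK every arriving segment. */
int main(void)
{
    unsigned long next_expected = 0;            /* next in-order byte   */
    unsigned long seg_start[] = { 0, 1460, 2920, 4380 };
    unsigned long seg_len     = 1460;           /* example segment size */

    for (int i = 0; i < 4; i++) {
        if (seg_start[i] == next_expected)      /* in-order arrival     */
            next_expected += seg_len;
        /* out-of-order data would be buffered; the ACK number stays put */
    }
    printf("one ACK = %lu acknowledges all four segments\n", next_expected);
    return 0;
}
```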

2.2. The sliding window

The sliding window is essentially the receiver's advertisement of how much buffer space it has left for incoming data. From this number the sender works out the maximum amount of additional data it may send. If the sender receives a segment advertising a window of 0, it stops sending and waits until the receiver advertises a non-zero window again. Pages 211 and 212 explain this well.

For the sliding window protocol, the book also introduces three terms:

  1. The window closes: the left edge moves to the right; this happens as data is sent and acknowledged.
  2. The window opens: the right edge moves to the right; this happens after the receiving application has read data and freed buffer space.
  3. The window shrinks: the right edge moves to the left; this should not normally happen.

TCP slides this window across the data from left to right and may transmit whatever falls inside it (not necessarily all at once, but never anything outside the window). That is what the window means, and Figure 20-6 illustrates it. The window size can be set through the socket interface; 4096 bytes is not an ideal window size, whereas 16384 bytes can increase throughput considerably.
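The window TCP can advertise is bounded by the socket receive buffer, which can be enlarged with SO_RCVBUF; a sketch that raises it to 16384 bytes as the text suggests (the kernel may round or cap the value, and it should be set before the connection is established so it can influence the negotiated window scaling):

```c
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Enlarge the receive buffer, which raises the window TCP can
 * advertise and therefore the amount of unacknowledged data the peer
 * may keep in flight.  Set it before connect()/listen(). */
static void set_receive_window(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
        printf("receive buffer is now %d bytes\n", actual);
}
```

A typical use would be set_receive_window(fd, 16384) on a freshly created socket, before connecting.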

2.3. Congestion

The strategy above works fine inside a LAN, but problems appear in a wide-area network, the biggest being a bottleneck somewhere along the path (for example a slow SLIP link the traffic must cross). To cope with this, the TCP sender has to discover the maximum throughput the path between the two ends can carry. This is the role of the so-called congestion window.

The idea behind the congestion window is simple. The sender starts by sending a single segment and waiting for its acknowledgment. When the ACK arrives, it doubles the window and sends two segments; when those are acknowledged it doubles the window again and sends more (growth is exponential at first and later becomes linear; this is the so-called slow start). Once a timeout occurs, the sender has learned roughly what the path can carry, fixes the size of its congestion window accordingly, and from then on sends within that window. The effect is easy to observe: when you download a file, the transfer rate usually ramps up gradually.
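A toy simulation of the ramp-up described above, assuming an initial window of one segment and an example threshold of 16: the congestion window doubles each round trip (slow start) until it crosses the threshold, then grows by one segment per round trip (congestion avoidance). Real TCP also reacts to loss and is further limited by the receiver's advertised window; this sketch only shows the growth pattern.

```c
#include <stdio.h>

int main(void)
{
    int cwnd = 1;         /* congestion window, in segments            */
    int ssthresh = 16;    /* slow-start threshold (example value)      */
    int rwnd = 64;        /* receiver's advertised window (example)    */

    for (int rtt = 1; rtt <= 10; rtt++) {
        int can_send = cwnd < rwnd ? cwnd : rwnd;   /* min(cwnd, rwnd) */
        printf("RTT %2d: cwnd=%2d, may send %2d segments\n",
               rtt, cwnd, can_send);

        if (cwnd < ssthresh)
            cwnd *= 2;    /* slow start: exponential growth            */
        else
            cwnd += 1;    /* congestion avoidance: linear growth       */
    }
    return 0;
}
```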

That is the general picture of how TCP transfers data. It is far from a complete description, but it is enough to convey how TCP works; the key points are TCP's flow control, the sliding window, the congestion window, and cumulative acknowledgments.
