TCP and UDP use the same network layer (IP). TCP provides a connection-oriented, reliable byte stream service.
Connection-oriented means that exactly two parties communicate with each other; broadcast and multicast cannot be used with TCP.
The reliability of TCP is mainly shown in the following aspects:
(1) Segmentation: application data is broken into blocks of the size TCP considers most appropriate to send.
(2) Timeout and retransmission: after sending a segment, TCP starts a timer and waits for the destination to acknowledge it. If an acknowledgment is not received in time, the segment is retransmitted.
(3) Delayed acknowledgment: the receiving end does not send its acknowledgment immediately, but delays it slightly.
(4) TCP maintains a checksum over its header and data. If the receiving end detects a checksum error, it discards the segment.
(5) TCP segments may arrive out of order; TCP reorders the received data before delivering it.
(6) IP datagrams may be duplicated, so TCP must discard duplicate data at the receiving end.
(7) TCP also provides flow control. Each end of a TCP connection has a fixed-size receive buffer, and TCP allows the other end to send only as much data as that buffer can accept.
TCP header
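The checksum mentioned in point (4) is the standard 16-bit one's-complement Internet checksum computed over the header and data. A minimal sketch in Python (the function name is ours, not from any library):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of all 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF  # one's complement of the final sum
```

A receiver recomputes the sum over the whole segment including the transmitted checksum; a result of 0 means the segment passed the check.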
Encapsulation of the TCP segment in an IP datagram
TCP Header Format
The source port number and destination port number plus the source IP address and destination IP address in the IP header uniquely determine the two sides of each TCP connection in the Internet.
The sequence number identifies the byte stream flowing from the TCP sender to the TCP receiver; its value is the sequence number of the first data byte in this segment.
The 6 flag bits in TCP
Flow control is provided through the 16-bit window size field, so the maximum advertised window is 65,535 bytes.
The urgent pointer is valid only when the URG flag is set to 1.
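The fixed 20-byte header layout described above can be decoded with Python's struct module; a sketch (the function and dictionary key names are ours):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    # Fixed 20-byte TCP header: ports, sequence/ack numbers, data
    # offset + flags, window, checksum, urgent pointer (big-endian).
    (src, dst, seq, ack, off_flags, window,
     checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": (off_flags >> 12) * 4,  # header length in bytes
        "flags": off_flags & 0x3F,             # URG/ACK/PSH/RST/SYN/FIN
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urg_ptr,
    }
```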
The MSS (maximum segment size) is the most common option; it announces the maximum segment length that the end sending the option is willing to receive.
Establishment and termination of a TCP connection
TCP state transition diagram
Normal TCP connection establishment and termination, and the corresponding states
The 2MSL wait state
The TIME_WAIT state is also known as the 2MSL wait state. Every TCP implementation must choose a maximum segment lifetime (MSL): the longest time a segment can exist in the network before being discarded. When TCP performs an active close and sends the final ACK, the connection must stay in the TIME_WAIT state for twice the MSL. This lets TCP resend the final ACK in case it was lost.
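Because the port of the end that performed the active close is tied up for 2MSL, a restarted server can fail to bind to its own well-known port. The usual workaround is the standard SO_REUSEADDR socket option, which relaxes the bind() restriction; a minimal sketch (the helper name is ours):

```python
import socket

def make_listener(port: int) -> socket.socket:
    """Create a listening socket that can rebind to a port whose
    previous incarnation is still in the TIME_WAIT state."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(5)  # backlog of completed but not-yet-accepted connections
    return s
```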
A consequence of this 2MSL wait is that the socket pair cannot be reused during the wait, and any delayed segments from the old connection are discarded.
FIN_WAIT_2
Being in the FIN_WAIT_2 state means we have sent our FIN and the other end has acknowledged it.
RST reset segment
TCP sends a reset segment whenever a segment arrives that does not appear correct for the referenced connection (the referenced connection is the one identified by the socket pair in the segment).
Connection request to a nonexistent port
In this case the RST bit of the reset segment is set to 1, and its acknowledgment number is set to the incoming ISN plus the number of data bytes.
Abnormal termination of a connection
(1) Any queued data is discarded and a reset segment is sent immediately.
(2) The receiver of the RST can tell that the other end performed an abnormal close rather than a normal close.
Detecting a half-open connection
If one end has closed or aborted the connection without the other end knowing about it, the connection is said to be half-open.
Simultaneous open
A simultaneous open requires the exchange of 4 segments, one more than the normal three-way handshake; note also that neither end can be called the server or the client.
Simultaneous close
Each option begins with a 1-byte kind field that specifies the type of option; a len field gives the option's total length.
TCP port numbers
TCP demultiplexes incoming connection requests using the 4-tuple of local and remote addresses: destination IP address, destination port number, source IP address, and source port number. TCP cannot determine which process receives a connection request from the destination port number alone. For example, of three processes using port 23, only the process in the LISTEN state can receive new connection requests; the established processes cannot receive SYN segments, and the listening process cannot receive data segments.
Incoming connection request queue
A concurrent server invokes a new process to handle each client request, so the server performing the passive open should always be ready to process the next incoming connection request. When multiple connection requests arrive, the TCP implementation uses the following rules:
(1) Each listening endpoint has a fixed-length queue of connections that have been accepted by TCP (the three-way handshake is complete) but not yet accepted by the application layer.
(2) The application layer specifies a limit on this queue, called the backlog (typically 0 to 5).
(3) When a connection request (a SYN) arrives, TCP looks at the current number of queued connections to decide whether to accept it. The backlog is the maximum number of connections that TCP may hold on this queue, accepted by TCP but waiting for the application layer to take them.
(4) If there is room on the listening endpoint's queue for the new connection, the TCP module accepts the SYN and completes the connection.
(5) If there is no room on the queue, TCP simply ignores the incoming SYN and sends nothing back; the client's active open will eventually time out.
TCP interactive data flow
Interactive input
Delayed acknowledgments
Typically, TCP does not send an ACK the moment it receives data; instead, it delays the ACK in the hope that data will be going in the same direction so the ACK can ride along with it (a piggybacked, data-carrying ACK).
The Nagle algorithm
On a wide area network, these small segments increase the likelihood of congestion.
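These small segments are exactly what the Nagle algorithm suppresses. Applications that instead need every small write sent immediately, such as interactive sessions, can switch the algorithm off with the standard TCP_NODELAY socket option (the helper name is ours):

```python
import socket

def disable_nagle(sock: socket.socket) -> None:
    """Disable the Nagle algorithm so small writes are transmitted
    immediately instead of being coalesced while an ACK is pending."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
```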
The algorithm requires that a TCP connection have at most one outstanding small segment that has not yet been acknowledged; no additional small segments may be sent until that acknowledgment arrives.
TCP bulk data flow
The sliding window protocol is a form of flow control that allows the sender to transmit multiple segments before stopping to wait for an acknowledgment.
Sliding window
Movement of the window edges is described as follows:
(1) The window closes when its left edge advances toward the right edge. This happens when sent data is acknowledged.
(2) The window opens when its right edge moves to the right, allowing more data to be sent. This happens when the receiving process on the other end reads acknowledged data, freeing space in the TCP receive buffer.
(3) The window shrinks when the right edge moves to the left.
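The edge movements can be sketched as a small sender-side model (the class and method names are ours):

```python
class SlidingWindow:
    """Sender-side sliding window over a byte sequence space."""

    def __init__(self, size: int):
        self.left = 0    # lowest unacknowledged byte (left edge)
        self.next = 0    # next byte to send
        self.size = size # receiver-advertised window

    def can_send(self) -> int:
        # Usable window: bytes we may still send before hitting
        # the right edge (left + size).
        return self.left + self.size - self.next

    def send(self, n: int) -> None:
        assert n <= self.can_send()
        self.next += n

    def ack(self, ack_no: int, new_size: int) -> None:
        # The window closes as the left edge advances on an ACK and
        # opens again as the receiver advertises more buffer space.
        self.left = max(self.left, ack_no)
        self.size = new_size
```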
The PUSH flag
When a client sends a command to the server, it sets the PSH flag and waits for the server's response. By setting the PSH flag, the client process tells its TCP to send the segment to the server without letting the submitted data sit in the buffer waiting for additional data. When the server's TCP receives a segment with the PSH flag set, it must deliver the data to the server process immediately, without waiting to see whether additional data arrives.
Sender: when TCP sends a segment with the PSH flag set, the data should be transmitted immediately rather than held waiting for additional data.
Receiver: when data arrives with the PSH flag set, all data in the receive buffer (including the segment carrying the flag) should be delivered to the application layer immediately.
Implementations typically set the PSH flag when the segment being sent is the last unsent data in the send buffer. This is why segments with the PSH flag usually appear at the beginning or end of a file transfer, and in particular on the FIN segment.
Slow start
TCP supports an algorithm called slow start. It is based on the observation that new packets should be injected into the network at the rate at which acknowledgments are returned by the other end.
Slow start adds another window to the sender's TCP: the congestion window, written cwnd. When a connection is established with a host on another network, the congestion window is initialized to one segment. Each time an ACK is received, the congestion window is increased by one segment. The sender transmits up to the minimum of the congestion window and the advertised window.
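Incrementing cwnd by one segment per ACK doubles it every round trip, which is exponential growth. A toy simulation of the burst sent in each round trip (the function and parameter names are ours; cwnd is counted in segments):

```python
def slow_start(segments_to_send: int, rwnd: int, cwnd: int = 1):
    """Yield the number of segments sent in each round trip.

    Every ACK grows cwnd by one segment, so cwnd doubles per RTT
    until it is capped by the advertised window rwnd.
    """
    sent = 0
    while sent < segments_to_send:
        burst = min(cwnd, rwnd, segments_to_send - sent)
        yield burst
        sent += burst
        cwnd += burst  # one cwnd increment per ACK received this round
```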
The congestion window is flow control imposed by the sender, while the advertised window is flow control imposed by the receiver.
Throughput
The time to deliver a packet generally depends on two factors: the propagation delay (caused by the finite speed of light and latencies in the transmission equipment) and the transmission delay, which depends on the media rate (the number of bits per second the medium can carry). For a given path between two nodes the propagation delay is fixed, while the transmission delay depends on the packet size. At lower speeds the transmission delay dominates; at gigabit speeds the propagation delay dominates.
No matter how many segments fill the pipe, the return path carries the same number of ACKs; this is the ideal steady state of the connection.
Bandwidth-delay product
The capacity of the pipe is usually described by:
capacity (bits) = bandwidth (b/s) × round-trip time (s)
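For example, a T1 link (1,544,000 b/s) with a 60 ms round-trip time can hold 1,544,000 × 0.060 / 8 = 11,580 bytes of unacknowledged data in the pipe (a sketch; the function name is ours):

```python
def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
    """Pipe capacity in bytes: bandwidth times round-trip time."""
    return bandwidth_bps * rtt_s / 8
```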
This is simply the product of the bandwidth and the RTT.
TCP timeout and retransmission
TCP handles lost data by starting a timer when a segment is sent and retransmitting the data if no acknowledgment has arrived when the timer expires. The key issues are the timeout and retransmission policy: how to determine the timeout interval, and how often to retransmit.
For each connection, TCP manages 4 different timers.
(1) The retransmission timer is used when we are expecting an acknowledgment from the other end.
(2) The persist timer keeps window size information flowing even after the other end has closed its receive window.
(3) The keepalive timer detects when the other end of an otherwise idle connection has crashed or rebooted.
(4) The 2MSL timer measures the time a connection has been in the TIME_WAIT state.
Measuring the RTT
Originally, a low-pass filter was used to update a smoothed RTT estimator: R ← αR + (1 − α)M, where α is a smoothing factor with a recommended value of 0.9, R is the previous estimate, and M is the new measurement.
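The filter and the timeout derived from it can be written out directly (RFC 793's recommended constants; the function names are ours):

```python
ALPHA = 0.9  # smoothing gain: the estimate leans 90% on history
BETA = 2.0   # delay variance factor

def update_rtt(srtt: float, measurement: float) -> float:
    """Low-pass filter over round-trip time measurements."""
    return ALPHA * srtt + (1 - ALPHA) * measurement

def rto(srtt: float) -> float:
    """Retransmission timeout derived from the smoothed RTT."""
    return BETA * srtt
```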
The retransmission timeout is then RTO = Rβ, where β is a delay variance factor with a recommended value of 2 in RFC 793.
The Karn algorithm
When a timeout occurs, the RTO is backed off and the segment is retransmitted with the longer RTO; when an acknowledgment then arrives, we cannot tell whether it is for the first transmission or the retransmission. This is the retransmission ambiguity problem.
Karn's algorithm specifies that when a timeout and retransmission occur, the RTT estimator must not be updated when the acknowledgment for the retransmitted data finally arrives, because we cannot tell which transmission the ACK corresponds to.
Congestion avoidance algorithm
The algorithm assumes that loss caused by packet damage is very small, so the loss of a packet signals congestion somewhere in the network between the source and destination hosts.
In practice, these two algorithms are usually implemented together.
Congestion avoidance and slow start require two variables to be maintained for each connection: a congestion window, cwnd, and a slow start threshold, ssthresh. The combined algorithm works as follows:
(1) For a given connection, initialize cwnd to one segment and ssthresh to 65,535 bytes.
(2) The TCP output routine never sends more than the minimum of cwnd and the receiver's advertised window. Congestion avoidance is flow control imposed by the sender, while the advertised window is flow control imposed by the receiver.
(3) When congestion occurs, ssthresh is set to half of the current window (the minimum of cwnd and the receiver's advertised window, but at least two segments). If the congestion is signaled by a timeout, cwnd is also set to one segment (this triggers slow start).
(4) When new data is acknowledged by the other end, cwnd is increased, but how it increases depends on whether slow start or congestion avoidance is in progress. If cwnd is less than or equal to ssthresh, slow start is in progress; otherwise congestion avoidance is in progress. Slow start continues until we are halfway to where we were when congestion occurred, and then congestion avoidance takes over.
In slow start, cwnd grows exponentially; in congestion avoidance, cwnd increases by at most one segment per round-trip time.
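Steps (1) through (4) can be condensed into two update rules (a sketch; cwnd is counted in segments and the names are ours):

```python
SEGMENT = 1  # count cwnd in segments for simplicity

def on_ack(cwnd: float, ssthresh: float) -> float:
    """Grow cwnd on a new ACK: exponentially in slow start,
    by roughly one segment per RTT in congestion avoidance."""
    if cwnd <= ssthresh:
        return cwnd + SEGMENT                  # slow start
    return cwnd + SEGMENT * SEGMENT / cwnd     # congestion avoidance

def on_timeout(cwnd: float, rwnd: float):
    """A timeout signals congestion: record half the current window
    in ssthresh and drop back to slow start with cwnd = 1 segment."""
    ssthresh = max(min(cwnd, rwnd) / 2, 2 * SEGMENT)
    return SEGMENT, ssthresh
```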
Fast retransmit and fast recovery algorithm
(1) When the third duplicate ACK is received, set ssthresh to half the current congestion window cwnd, retransmit the missing segment, and set cwnd to ssthresh plus 3 times the segment size.
(2) Each time another duplicate ACK arrives, increase cwnd by one segment size and transmit a packet if the new cwnd allows it.
(3) When the next ACK that acknowledges new data arrives, set cwnd to ssthresh (the value set in step 1). This ACK should acknowledge the retransmission from step 1, one round-trip time after the retransmission, and it should also acknowledge all the intermediate segments sent between the lost packet and the first duplicate ACK. This step is congestion avoidance, since we slow down to half the rate we were using when the packet was lost.
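The three steps can be sketched as state updates (a toy model; the names are ours and cwnd is counted in units of segments):

```python
def fast_retransmit(cwnd: float, dup_acks: int, ssthresh: float, mss: int = 1):
    """State update on duplicate ACKs.

    Returns (cwnd, ssthresh, retransmit_now).
    """
    if dup_acks == 3:
        ssthresh = cwnd / 2
        cwnd = ssthresh + 3 * mss  # inflate for the 3 segments that left
        return cwnd, ssthresh, True
    if dup_acks > 3:
        return cwnd + mss, ssthresh, False  # each dup ACK = a segment left
    return cwnd, ssthresh, False

def recovery_ack(ssthresh: float) -> float:
    """New data acknowledged: deflate cwnd back to ssthresh."""
    return ssthresh
```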