Transport Layer Knowledge Summary
Transport Layer Overview: Why have a separate transport layer?
The network layer can already deliver data from the source host to the destination host, so why add a transport layer on top of it? To answer this, we have to look at communication at the application layer: the real endpoints of communication between two networked hosts are not the hosts themselves but the various application processes running on them. A host may run many processes at the same time, so each application needs an identifier, and that identifier is the port. The transport layer exists to provide this end-to-end service. The following diagram illustrates this.
It can also be seen that the IP protocol provides logical communication between hosts, while the transport layer protocol provides logical communication between processes.
What is end-to-end, and how does it differ from point-to-point?
A "point-to-point" connection is a connection made directly between the two communicating parties over a cable, with no other equipment in between.
An "end-to-end" connection is a connection between two terminal hosts; the path between the two end systems may pass through a number of intermediate devices (routers).
Two important terms for the transport layer: TSAP and TPDU
TSAP (Transport layer Service Access Point) is the logical interface through which the upper layer (the application layer) invokes the lower layer (the transport layer). In practice it is what we call a port, and the port is used to identify an application-layer process.
Port:
A port number is a 16-bit binary value, so port numbers range from 0 to 65535.
Ports 0~1023 are typically assigned to common network protocols and widely deployed applications; these assignments are accepted by most users and have in effect become standards, so they are called well-known (reserved) ports.
The remaining ports are general-purpose ports and can be used freely by applications.
TPDU (Transport Protocol Data Unit) refers to the messages exchanged between peer transport-layer entities, i.e. the "data segment". In fact, every layer has its own SAP and PDU.
Services provided by the Transport layer:
- Establishment of logical connections
- Transport Layer Addressing
- Data transmission
- Transport Connection Release
- Flow control
- Congestion control
- Multiplexing and demultiplexing
- Crash recovery
TCP (Transmission Control Protocol)
TCP protocol features:
Connection-oriented transport protocol : a connection must be established before data transfer, and the connection must be released after the transfer is complete.
Only unicast transmission is supported : each transport connection has exactly two endpoints, i.e. connections are strictly point-to-point; multicast and broadcast are not supported (UDP does support them).
Reliable delivery service : data arrives with no errors, no loss, no duplication, and in the same order in which it was sent.
The transmission unit is the segment : the size of each segment is not fixed; it depends on the size of the application-layer message and on the MTU (Maximum Transmission Unit) of the network. The smallest possible segment is only 21 bytes (20 bytes of TCP header plus 1 byte of data).
Full-duplex transmission is supported : both sides of the communication can send and receive data at the same time.
TCP connections are byte-stream based : UDP, by contrast, is message-oriented.
TCP Segment Format:
For a detailed description of each field, see this blog post (there is a lot of content, so I won't repeat it here): http://www.360doc.com/content/12/1218/10/3405077_254718387.shtml
TCP socket (SOCKET):
The socket is what this interface is called in the TCP/IP protocol suite; it is similar to the TSAP address mentioned earlier, i.e. the interface to the transport-layer protocol. To distinguish between different application processes and connections, operating systems generally provide the socket interface so that applications can interact with TCP/IP.
Note: the biggest difference between a socket and a TSAP is that the TSAP is located at the transport layer, while the socket sits at the application layer and calls down to a transport-layer port.
At the application layer, each application process has a socket through which it invokes a specific transport-layer port; sockets therefore have a many-to-one relationship with the (IP address, port) pair.
Establishment of a TCP transport connection (three-way handshake):
Release of a TCP transport connection (four-way handshake):
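In practice, the handshake and release are carried out by the operating system's TCP implementation; an application only sees them through the socket API. Below is a minimal sketch using Python's standard socket module (the loopback address and port 50007 are arbitrary choices for illustration): connect()/accept() complete the three-way handshake, and close() triggers the connection release.

```python
import socket
import threading

# Bind the server's TSAP (IP address + port) and start listening.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 50007))
srv.listen(1)

def handle():
    conn, addr = srv.accept()        # three-way handshake completes here
    conn.sendall(b"got: " + conn.recv(1024))
    conn.close()                     # starts the connection release

t = threading.Thread(target=handle)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50007))    # SYN, SYN+ACK, ACK exchanged by the OS
cli.sendall(b"hello transport layer")
print(cli.recv(1024))                # b'got: hello transport layer'
cli.close()

t.join()
srv.close()
```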
How TCP guarantees Data reliability:
TCP is a transport-layer protocol that guarantees reliable data transmission. It mainly relies on the following four mechanisms:
- Byte numbering mechanism : TCP numbers the bytes in the data portion of each segment one by one, ensuring that every byte of data can be delivered and reassembled in order.
- Segment acknowledgment mechanism : for each segment received, the receiver must return an acknowledgment segment to the sender; the acknowledgment number indicates which data has been correctly received.
- Timeout retransmission mechanism : TCP maintains a retransmission timer; when a segment is sent the timer is started, and if no acknowledgment has been returned by the time the timer expires, the data is retransmitted.
- Selective acknowledgment mechanism (Selective ACK, SACK) : only the missing portions of the data are retransmitted; data that has already been correctly received is not retransmitted.
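As a rough illustration of how byte numbering, acknowledgments and timeout retransmission fit together, here is a toy stop-and-wait sketch in Python. This is not real TCP: the segment size, retry limit and the randomly lossy "network" are made-up assumptions purely for illustration.

```python
import random

MSS = 100          # assumed segment size in bytes
MAX_RETRIES = 5    # give up after this many retransmissions (illustrative)

def unreliable_send(seq, payload):
    """Pretend to send one segment; return an ACK number, or None on 'loss'."""
    if random.random() < 0.3:          # simulate 30% segment loss
        return None
    return seq + len(payload)          # ACK = number of the next expected byte

def send_reliably(data):
    seq = 0                            # byte numbering starts at 0 here
    while seq < len(data):
        segment = data[seq:seq + MSS]
        for attempt in range(MAX_RETRIES):
            ack = unreliable_send(seq, segment)
            if ack == seq + len(segment):          # acknowledgment received
                print(f"bytes {seq}-{ack - 1} acknowledged")
                seq = ack                          # advance past acknowledged bytes
                break
            print(f"timeout for seq {seq}, retransmitting (attempt {attempt + 1})")
        else:
            raise RuntimeError("too many retransmissions, giving up")

send_reliably(b"x" * 350)
```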
Flow Control in TCP:
Flow control is needed because if data is sent too fast, the receiver cannot keep up and packets are lost. The purpose of flow control is to keep the amount of data sent from exceeding the receiver's processing capacity.
TCP implements flow control with a sliding window mechanism; the window size is measured in bytes.
The TCP header contains a window field (see the TCP header format) whose value sets the size of the other side's send window. The send window is agreed by both parties when the connection is established, but during communication the receiver can dynamically adjust the other side's window value according to its own resource situation, thereby achieving flow control.
Assume each segment is 100 bytes and the current send window is 400 bytes. The sender has transmitted 400 bytes of data but has received acknowledgments for only the first 200 bytes; the remaining 200 bytes are still unacknowledged. The window therefore slides forward past the acknowledged 200 bytes, and the sender may now transmit 200 new bytes. This whole process is a simple sliding window example.
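A tiny sketch of that arithmetic, using the numbers from the example above:

```python
window = 400        # send window granted by the receiver (bytes)
sent = 400          # bytes already transmitted
acked = 200         # bytes acknowledged so far

in_flight = sent - acked            # 200 bytes still awaiting acknowledgment
can_send_now = window - in_flight   # 200 new bytes may be sent
window_base = acked                 # the window has slid forward 200 bytes

print(window_base, in_flight, can_send_now)   # 200 200 200
```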
Congestion Control for TCP:
What is network congestion? I'll just show the picture rather than explain it in words.
Two schemes: slow start and congestion avoidance
Congestion window: a window maintained to avoid congestion. The number of bytes the sender may actually have outstanding is the minimum of the send window set by the receiver and the congestion window.
Slow start threshold (ssthresh): the initial value is 64 KB, i.e. 65,535 bytes; when data loss occurs, its value is set to half the current congestion window size.
Slow start:
When the host first starts sending segments, the congestion window is set to one MSS (the maximum segment size currently in use on the connection).
Each time an acknowledgment for a segment is received, the congestion window is increased by at most one MSS.
Continuing in this way, the congestion window on the sending side grows step by step, so that the rate at which packets are injected into the network stays reasonable.
Slow start is used until the congestion window reaches the slow start threshold, after which congestion avoidance takes over.
Congestion avoidance:
Instead of growing the congestion window exponentially as slow start does, once the window passes the slow start threshold it is increased linearly, which is far less likely to cause congestion in the network.
The two schemes above effectively reduce the impact of network congestion, but they cannot avoid it completely, so the "fast retransmit" and "fast recovery" mechanisms were proposed. Their basic idea is:
When the receiver gets a segment out of order (i.e. a segment is missing), it immediately sends a duplicate ACK. Once the sender has received 3 duplicate ACKs, it concludes that the data indicated by the acknowledgment number has been lost and retransmits it right away, without waiting for the retransmission timer to expire; this is fast retransmit. At the same time, the slow start threshold is set to half of the current congestion window, the congestion window is reduced accordingly to lighten the load on the network, and the sender then continues with "congestion avoidance"; this is fast recovery.
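A rough sketch of how these pieces interact over successive round trips, in Python. The initial threshold, the round in which loss occurs, and the simplified fast-recovery step are illustrative assumptions, not an exact TCP Reno implementation.

```python
MSS = 1
cwnd = 1 * MSS          # slow start begins with a congestion window of one MSS
ssthresh = 16 * MSS     # illustrative initial slow start threshold

for rtt in range(1, 16):
    if rtt == 8:                              # pretend 3 duplicate ACKs arrive here
        ssthresh = max(cwnd // 2, 2 * MSS)    # threshold = half the congestion window
        cwnd = ssthresh                       # fast recovery: resume from the threshold
        phase = "fast retransmit / fast recovery"
    elif cwnd < ssthresh:
        cwnd *= 2                             # slow start: exponential growth per RTT
        phase = "slow start"
    else:
        cwnd += MSS                           # congestion avoidance: linear growth
        phase = "congestion avoidance"
    print(f"RTT {rtt:2d}: cwnd={cwnd:3d}  ssthresh={ssthresh:3d}  ({phase})")
```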
UDP protocol:
For some applications, such as video streaming and Internet telephony, losing a small amount of data has little impact; in these cases UDP is the better transport choice.
Features of the UDP protocol:
- Connectionless
- Unreliable (best-effort) delivery
- Message-oriented (message boundaries are preserved)
- No flow control or congestion control
- Supports unicast, multicast, and broadcast communication
Other than that... there is really nothing more to say...
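Still, to make the connectionless, message-oriented style concrete, here is a minimal UDP sketch in Python (the loopback address and port 50008 are arbitrary illustrative choices): no handshake, no acknowledgments, and each recvfrom() returns exactly one datagram.

```python
import socket

# The "receiver" just binds a port; there is no listen()/accept() step.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 50008))

# The "sender" transmits datagrams immediately; no connection is established.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"frame-1", ("127.0.0.1", 50008))
send.sendto(b"frame-2", ("127.0.0.1", 50008))

print(recv.recvfrom(1024))   # (b'frame-1', ('127.0.0.1', <sender port>))
print(recv.recvfrom(1024))   # (b'frame-2', ('127.0.0.1', <sender port>))

send.close()
recv.close()
```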
Category: Learning Notes