Computer Network Transport Layer

Source: Internet
Author: User

I. Introduction

As mentioned earlier, the transport layer is responsible for providing communication services between processes on two hosts. It is the highest layer of the communication-oriented part of the stack and the lowest layer of the user-oriented functions. The transport layer provides multiplexing and demultiplexing, as well as error detection on segments. Its two main protocols are the connectionless User Datagram Protocol (UDP) and the connection-oriented Transmission Control Protocol (TCP). The transport layer uses protocol port numbers to identify processes on a host. A port number is only meaningful locally, and falls into two categories: ports used by servers (well-known ports and registered ports) and ports used by clients (ephemeral ports).

II. Features of UDP and TCP

Similarities:
① Both provide multiplexing and demultiplexing.
② Both provide error detection.

Differences (UDP vs. TCP):
① connectionless vs. connection-oriented
② best-effort delivery vs. reliable delivery
③ message-oriented (messages handed to the upper layer are neither merged nor split) vs. byte-stream-oriented
④ no congestion control vs. congestion control
⑤ supports one-to-one, one-to-many, many-to-one, and many-to-many communication vs. point-to-point, full-duplex communication only
⑥ small header overhead (8 bytes) vs. larger header (20-60 bytes)

The UDP header format (figure omitted in this copy) consists of four 16-bit fields: source port, destination port, length, and checksum. The checksum covers the entire datagram and is computed the same way as the IP header checksum, except that a pseudo-header is prepended for the computation. The TCP segment header format (figure also omitted) is as follows: every byte of the stream is numbered sequentially; the "acknowledgment number" field holds the sequence number of the next byte expected; the "data offset" field gives the header length in units of 4 bytes; the checksum is computed like UDP's. The TCP header is 20 to 60 bytes long.
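As a concrete look at the 8-byte UDP header just described, the following sketch unpacks its four 16-bit big-endian fields; the sample segment bytes are made up for illustration:

```python
import struct

def parse_udp_header(segment: bytes) -> dict:
    """Parse the fixed 8-byte UDP header: source port, destination port,
    length, and checksum, each a 16-bit big-endian field."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum, "data": segment[8:]}

# Made-up segment: port 53 -> 12345, total length 12 bytes, checksum 0.
seg = struct.pack("!HHHH", 53, 12345, 12, 0) + b"abcd"
hdr = parse_udp_header(seg)
```

Note that the length field counts the header plus the data (12 = 8 + 4 here), which is why UDP can hand whole messages to the upper layer without merging or splitting them.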
The "urgent pointer" field and the URG flag bit are used to send urgent data (the textbook treats this as extension and discussion material). TCP's urgency mechanism lets the sender mark some data as urgent so that the receiver notifies its user as soon as that data arrives. The mechanism inserts a marker (the urgent pointer) into the data stream, indicating the end point of the urgent data. When the receiver is about to receive data at that point, it notifies the user of the urgent state; once the data up to that point has been received, it notifies the user to return to the normal state. It seems that urgent data is not only sent first but also delivered to the user first; readers are welcome to discuss this point with me. We can see that both UDP and TCP use a "pseudo-header" in their checksums. The pseudo-header exists only for the checksum computation; it is neither passed down to the network layer nor delivered up to the application. The original wording in TCP/IP Illustrated is that "the purpose is to let UDP double-check that the data has arrived at the correct destination." First, by checking the IP addresses in the pseudo-header, UDP can confirm that the datagram was sent to this host's IP address; second, by checking the protocol field, UDP can detect a datagram that IP should have handed to some other upper-layer protocol but delivered to UDP instead. Seen this way, the pseudo-header is actually very useful.

III. TCP's "advanced features"

We know that TCP has many more features than UDP, and they make it more powerful. This does not mean UDP is worse than TCP; they simply have different uses. In scenarios where timeliness matters more than reliability, such as real-time video conferencing, or where low overhead is required, UDP is still the protocol of choice. TCP's "advanced features" are mainly reliable transmission, flow control, congestion control, and connection management. We will introduce them one by one.
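To make the pseudo-header's role concrete, here is a minimal sketch of the UDP checksum over the 12-byte IPv4 pseudo-header plus the segment, as RFC 768 specifies; the special-case encoding of a zero result as 0xFFFF is omitted, and the addresses and ports are made up:

```python
import struct
import socket

def ones_complement_sum16(data: bytes) -> int:
    """One's-complement sum of 16-bit big-endian words, carry folded back in."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    s = 0
    for (word,) in struct.iter_unpack("!H", data):
        s += word
        s = (s & 0xFFFF) + (s >> 16)         # end-around carry
    return s

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    """Checksum over the pseudo-header (source IP, destination IP, a zero
    byte, protocol number 17, UDP length) followed by the UDP segment.
    The segment's own checksum field must be zero when computing."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, len(udp_segment)))
    return (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF

# Made-up segment: port 1000 -> 2000, length 12, checksum field zeroed.
seg_no_ck = struct.pack("!HHHH", 1000, 2000, 12, 0) + b"data"
ck = udp_checksum("192.0.2.1", "192.0.2.2", seg_no_ck)
```

The receiver repeats the same sum over the segment with the checksum field filled in; by the one's-complement property the complemented result comes out zero when nothing was corrupted, and a wrong destination address in the pseudo-header makes it nonzero.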
3.1 Reliable transmission

An ideal transmission channel has two properties: the channel introduces no errors, and no matter how fast the sender transmits, the receiver always has time to process what arrives. Real channels have neither property, so reliable-transmission protocols are needed. The simplest is the stop-and-wait protocol, which retransmits automatically on timeout (automatic repeat request). Its channel utilization is

U = T_D / (T_D + RTT + T_A)

where T_D is the time to transmit one packet, T_A is the time to transmit the acknowledgment, and RTT is the round-trip time. The drawback of stop-and-wait is that channel utilization is too low, so pipelined transmission is used instead: several packets are sent in a row, as in the continuous ARQ protocol and the sliding-window protocol. The sliding-window protocol is the most important protocol for reliable transmission.

One of the most delicate problems is choosing the retransmission timeout. The retransmission timeout RTO (Retransmission Time-Out) should be slightly larger than the RTT. But because the network is dynamic and the RTT keeps changing, two quantities are maintained, the weighted-average round-trip time RTT_S and the weighted-average deviation RTT_D, and the RTO is computed dynamically:

RTO = RTT_S + 4 × RTT_D
RTT_S = (1 − α) × (old RTT_S) + α × (new RTT sample)
RTT_D = (1 − β) × (old RTT_D) + β × |RTT_S − new RTT sample|

Typically α = 1/8 and β = 1/4.

3.2 Flow control and congestion control

Flow control prevents the sender from sending too fast, so that the receiver can keep up: the receiver feeds back its receive-window size rwnd, and the sender limits its send window accordingly. Congestion control prevents too much data from being injected into the network, so that the network's routers and links are not overloaded.
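The utilization and RTO formulas above can be turned into a small sketch; initializing RTT_D to half of the first sample is an assumption borrowed from RFC 6298, not from the text, and the timing values in the demo are made up:

```python
def channel_utilization(t_d: float, rtt: float, t_a: float) -> float:
    """Stop-and-wait utilization: U = T_D / (T_D + RTT + T_A)."""
    return t_d / (t_d + rtt + t_a)

class RttEstimator:
    """Smoothed RTT and deviation, per the formulas in the text."""
    def __init__(self, first_sample: float, alpha: float = 1/8, beta: float = 1/4):
        self.alpha, self.beta = alpha, beta
        self.srtt = first_sample            # RTT_S
        self.rttvar = first_sample / 2      # RTT_D; RFC 6298's initial choice

    def update(self, sample: float) -> float:
        """Fold in one new RTT sample and return the new RTO."""
        # Deviation is updated against the current RTT_S, then RTT_S itself.
        self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - sample)
        self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample
        return self.srtt + 4 * self.rttvar  # RTO = RTT_S + 4 × RTT_D

# Demo with made-up times: T_D = 1, RTT = 2, T_A = 1; one 100 ms RTT sample.
u = channel_utilization(1.0, 2.0, 1.0)
est = RttEstimator(100.0)
rto = est.update(100.0)
```

With a steady 100 ms RTT, the deviation term decays toward zero and the RTO settles just above the RTT, which is exactly the "slightly greater than RTT" behavior the text asks for.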
The difference between the two: flow control is an end-to-end problem, concerned with whether the receiver can keep up; congestion control is a global problem, concerned with whether the network as a whole runs smoothly. The four congestion control algorithms are a very important mechanism: slow start, congestion avoidance, fast retransmit, and fast recovery. The textbook's description is easy to misread, so I summarize it here; if you are not familiar with the algorithms, you had better read the textbook first.

(1) Slow start. At the beginning, set the congestion window cwnd = send window = 1 MSS (maximum segment size). After each transmission round, cwnd and the send window double, until cwnd > ssthresh (the slow-start threshold).

(2) Congestion avoidance. Once cwnd > ssthresh, cwnd grows by 1 MSS per transmission round (additive increase) instead of doubling, until congestion occurs (a timeout). When congestion occurs, set ssthresh to half of the current cwnd (multiplicative decrease), reset cwnd to 1 MSS, and run slow start again.

(3) Fast retransmit. On receiving an out-of-order segment, the receiver immediately sends a duplicate acknowledgment, and keeps sending one for every further segment it receives. As soon as the sender receives three duplicate acknowledgments, it immediately retransmits the segment the other side has not yet received, without waiting for the timer to expire.

(4) Fast recovery. When the sender receives three duplicate acknowledgments in a row, it performs multiplicative decrease, setting ssthresh to half of cwnd, and then (without slow start) sets cwnd to the new ssthresh value and continues with additive increase.

In practice the send window is not always equal to cwnd, because it is also limited by the receive window rwnd.
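The four algorithms above can be sketched as one state-update step, with cwnd and ssthresh in MSS units. This is a simplification under stated assumptions: the event names are invented for the sketch, doubling is capped at ssthresh, and ssthresh is floored at 2 MSS (a common textbook convention):

```python
def next_cwnd(cwnd: float, ssthresh: float, event: str) -> tuple:
    """One transmission round of the four congestion control algorithms.

    event: "ack"     - a round completed normally
           "timeout" - congestion detected by timer expiry
           "3dupack" - three duplicate ACKs (fast retransmit occurred)
    Returns the new (cwnd, ssthresh).
    """
    if event == "timeout":
        # Multiplicative decrease, then restart from slow start.
        return 1.0, max(cwnd / 2, 2.0)
    if event == "3dupack":
        # Fast recovery: halve ssthresh, set cwnd to it, skip slow start.
        return max(cwnd / 2, 2.0), max(cwnd / 2, 2.0)
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh   # slow start: double
    return cwnd + 1, ssthresh                      # congestion avoidance: add 1

# Walk through rounds with made-up values: cwnd = 1 MSS, ssthresh = 16 MSS.
cwnd, ssthresh = 1.0, 16.0
history = []
for _ in range(5):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, "ack")
    history.append(cwnd)
```

The demo shows the exponential phase (1, 2, 4, 8, 16) handing over to additive increase (17), which is the shape of the standard cwnd-versus-round plot in the textbook.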
Routers can also use Random Early Detection (RED): when the router's average queue length lies between a minimum threshold and a maximum threshold, arriving packets are discarded at random with probability p, to avoid global synchronization (many TCP connections becoming congested at the same time and then recovering almost simultaneously). p is not a constant; it grows with the average queue length.

3.3 Connection management

Connection management is relatively simple. TCP connection establishment is the three-way handshake: the client sends a request carrying the client's initial sequence number; the server acknowledges the request and supplies the server's initial sequence number; the client acknowledges once more. Each of the first two segments consumes one sequence number; the third consumes none if it carries no data. (I got exactly this question wrong on a final exam.) Connection release is more involved and goes through two two-way handshakes. The first handshake (initiated by the client) leaves the TCP connection half-closed: the client can no longer send data, but the server still can. The second handshake (initiated by the server) closes the connection.
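The rule about which handshake segments consume sequence numbers can be captured in one line: SYN and FIN each occupy one unit of sequence space, data occupies one per byte, and a bare ACK occupies none. A minimal sketch:

```python
def seq_consumed(syn: bool = False, fin: bool = False, payload_len: int = 0) -> int:
    """Sequence-number space consumed by a TCP segment: SYN and FIN each
    count as one 'virtual' byte, plus one per byte of payload; a bare
    ACK carrying no data consumes nothing."""
    return int(syn) + int(fin) + payload_len

# Three-way handshake: SYN, then SYN+ACK, then a bare ACK.
handshake = [seq_consumed(syn=True), seq_consumed(syn=True), seq_consumed()]
```

This is also why each half of the connection release consumes a sequence number: the FIN segment counts as one virtual byte, just like a SYN.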
