UNP (1): TCP and UDP from a Network Programming Perspective


This blog post is a set of reading notes taken while studying UNP (UNIX Network Programming), kept for later review.

  1. TCP and UDP overview
    The earlier "Computer Network and TCP/IP" series already covered much of this material (see TCP/IP (iii): Transport Layer TCP and UDP); here the two protocols are revisited briefly from the perspective of UNIX network programming.

    UDP is an unreliable, connectionless, datagram-oriented protocol. If an application needs its datagrams to reach their destination, it must build the missing features into the application layer itself: acknowledgments from the peer, timeouts and retransmissions on the local side, and so on. Because UDP is message-oriented, it has nothing like TCP's MSS (maximum segment size) to avoid fragmentation at the IP layer. When using UDP, the application should therefore keep each datagram small enough to avoid IP fragmentation, while not making datagrams so small that per-datagram overhead hurts utilization; the size has to be planned sensibly.
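    For instance (a minimal sketch, not from the book): on a typical Ethernet path with a 1500-byte MTU, keeping the UDP payload at or below 1472 bytes (1500 minus the 20-byte IPv4 header and the 8-byte UDP header) avoids IP fragmentation. The destination address and port below are placeholders:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* UDP: connectionless, unreliable */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(9999);                       /* placeholder port */
    inet_pton(AF_INET, "203.0.113.10", &peer.sin_addr);  /* placeholder address */

    /* 1472 = 1500-byte Ethernet MTU - 20-byte IPv4 header - 8-byte UDP header:
     * the largest payload that does not force IP fragmentation on that path. */
    char payload[1472];
    memset(payload, 'x', sizeof(payload));

    if (sendto(fd, payload, sizeof(payload), 0,
               (struct sockaddr *)&peer, sizeof(peer)) < 0)
        perror("sendto");

    close(fd);
    return 0;
}
```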

    TCP, in contrast, is a reliable, connection-oriented byte-stream protocol that also provides flow control. Reliability comes from acknowledgments, timeouts, and retransmissions; TCP dynamically estimates the round-trip time (RTT) between client and server so it knows how long to wait for an acknowledgment. TCP numbers every byte it sends with a sequence number. For example, if an application writes 2048 bytes to a TCP socket, TCP may send two segments, the first carrying bytes with sequence numbers 1-1024 and the second carrying bytes 1025-2048. The receiver reorders segments by sequence number (segments may arrive out of order), so the data is delivered correctly and in order; if a segment is lost in transit, the sender retransmits it after a timeout, and if a segment arrives more than once, the receiver discards the duplicates. TCP also tells its peer at every moment how many more bytes of data it is willing to accept; this is the advertised window, which reflects the space currently available in the receive buffer and guarantees that the sender never overflows it. This is how TCP provides flow control.

  2. TCP connection establishment and termination
    Typically the server performs a "passive open" by calling socket, bind, and listen, while the client performs an "active open" by calling socket and connect; the TCP connection is then established through the three-way handshake.
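    A minimal sketch of the two opens (error handling is trimmed; port 8888 and the loopback address are only illustrative, and the two halves would normally live in separate programs but are combined here to keep the sketch self-contained):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Server side: passive open -- socket, bind, listen; accept() then takes
 * completed connections off the queue after the three-way handshake. */
static int passive_open(unsigned short port)
{
    int listenfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* any local interface */
    addr.sin_port        = htons(port);

    bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listenfd, 128);                      /* backlog of pending connections */
    return listenfd;
}

/* Client side: active open -- socket, then connect starts the three-way handshake. */
static int active_open(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &srv.sin_addr);

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0)
        perror("connect");
    return fd;
}

int main(void)
{
    int listenfd = passive_open(8888);               /* illustrative port   */
    int connfd   = active_open("127.0.0.1", 8888);   /* loopback connection */
    close(connfd);
    close(listenfd);
    return 0;
}
```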

    Terminating a TCP connection takes four segments:

    • The application that calls close first performs the active close; its TCP sends a FIN segment, announcing that it has finished sending data.
    • The peer that receives the FIN performs the passive close. The FIN is acknowledged by the kernel's TCP automatically, so the user program does not have to respond itself. The kernel also passes an end-of-file to the receiving application, queued after any data already waiting to be read; the FIN means no more data will arrive on that connection.
    • Some time later, when the receiving application has read the end-of-file, it calls close on its socket, causing its TCP to send a FIN of its own (see the read-loop sketch after this list).
    • The TCP on the end that performed the active close receives this final FIN and acknowledges it; that acknowledgment is again handled by the kernel's TCP.
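    As referenced in the list above, a sketch of how the passive-close side typically observes the FIN: read returns 0 (end-of-file), after which the application calls close, which sends its own FIN (connfd is assumed to be an already-connected socket; this is an illustration, not code from the book):

```c
#include <stdio.h>
#include <unistd.h>

/* Read loop on an already-connected socket `connfd`.  When the peer closes
 * its end (sends a FIN), read() returns 0: the end-of-file notification. */
void serve_until_eof(int connfd)
{
    char buf[4096];
    ssize_t n;

    while ((n = read(connfd, buf, sizeof(buf))) > 0) {
        /* ... process the n bytes just received ... */
    }

    if (n == 0)
        printf("peer closed its end of the connection (FIN received)\n");
    else
        perror("read");

    /* Our own close() sends our FIN, completing the four-segment teardown. */
    close(connfd);
}
```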

    The 11 states in TCP's state transition diagram:

    TCP simultaneous open and simultaneous close:

    There are two reasons for the existence of the TIME_WAIT state:

    • It lets the termination of the full-duplex TCP connection be carried out reliably.
    • It allows old duplicate segments to expire in the network.
  3. TCP port numbers and concurrent servers
    TCP cannot demultiplex arriving segments to the right endpoint by looking only at the destination port number; it must examine all four elements of the socket pair to decide which endpoint an incoming segment belongs to.

    {server's local IP address : server port, client's IP address : client port}        // the server's 4-tuple socket pair
    {client's IP address : client's ephemeral port, server's IP address : server port}  // the client's 4-tuple socket pair
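    A program can inspect both halves of a connected socket's pair with getsockname (local half) and getpeername (foreign half); a minimal sketch, assuming connfd is a connected IPv4 TCP socket:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

/* Print the {local IP:port, foreign IP:port} pair of a connected socket. */
void print_socket_pair(int connfd)
{
    struct sockaddr_in local, peer;
    socklen_t llen = sizeof(local), plen = sizeof(peer);
    char lip[INET_ADDRSTRLEN], pip[INET_ADDRSTRLEN];

    getsockname(connfd, (struct sockaddr *)&local, &llen);   /* local half   */
    getpeername(connfd, (struct sockaddr *)&peer,  &plen);   /* foreign half */

    inet_ntop(AF_INET, &local.sin_addr, lip, sizeof(lip));
    inet_ntop(AF_INET, &peer.sin_addr,  pip, sizeof(pip));

    printf("{%s:%u, %s:%u}\n",
           lip, ntohs(local.sin_port), pip, ntohs(peer.sin_port));
}
```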

    Consider a server listening on port 21 on a multihomed host. If no specific IP address is bound to the listening socket, the wildcard (written *) means the socket will accept connections arriving on any of the host's IP addresses. In the sockets API this wildcard is the constant INADDR_ANY: set the IP address field of the socket address structure to INADDR_ANY before calling bind.
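    A sketch of the two choices when filling in the address before bind; port 21 and the address 12.106.32.254 simply mirror the example discussed here, and the helper name is made up for illustration:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Hypothetical helper: fill in the listening address, choosing either the
 * wildcard or one specific interface address, before the caller binds. */
void fill_listen_addr(struct sockaddr_in *servaddr, int use_wildcard)
{
    memset(servaddr, 0, sizeof(*servaddr));
    servaddr->sin_family = AF_INET;
    servaddr->sin_port   = htons(21);   /* port 21, as in the example */

    if (use_wildcard)
        /* Wildcard: accept connections arriving on any of the host's addresses. */
        servaddr->sin_addr.s_addr = htonl(INADDR_ANY);
    else
        /* Restrict the listener to one specific interface address. */
        inet_pton(AF_INET, "12.106.32.254", &servaddr->sin_addr);

    /* Caller then does: bind(listenfd, (struct sockaddr *)servaddr, sizeof(*servaddr)); */
}
```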

    When a client host performs an active open to the server whose IP address is 12.106.32.254, the server handles it by calling fork to create a child process. At that point there are two sockets on the server: the listening socket and a connected socket tied to that client:

    When several clients are connected at once, the socket pairs look like this:

    A segment from 206.168.112.219:1500 with destination 12.106.32.254:21 is delivered to the first child process;
    a segment from 206.168.112.219:1501 with destination 12.106.32.254:21 is delivered to the second child process;
    all other TCP segments with destination port 21 are delivered to the parent process, which holds the listening socket (a sketch of this accept-and-fork loop follows).
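    A minimal sketch of that accept-and-fork pattern, assuming listenfd was set up as in the earlier passive-open snippet (error handling and the per-client work are elided):

```c
#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Classic concurrent server: the parent keeps the listening socket and forks
 * one child per accepted connection; each child owns one connected socket,
 * i.e. one of the socket pairs shown above. */
void serve_forever(int listenfd)
{
    signal(SIGCHLD, SIG_IGN);             /* let the kernel reap exited children */

    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;

        pid_t pid = fork();
        if (pid == 0) {                   /* child: handle exactly this client    */
            close(listenfd);              /* the child does not need the listener */
            /* ... read/write on connfd for this one connection ... */
            close(connfd);                /* the child's close sends the FIN      */
            _exit(0);
        }
        close(connfd);                    /* parent: drop its copy, keep listening */
    }
}
```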

  4. Buffer size and limits
    Each TCP socket has a send buffer whose size can be changed with the SO_SNDBUF socket option. When a process calls write, the kernel copies the data from the application's buffer into the send buffer of that socket. If the send buffer cannot hold all of the application's data, either because the buffer is smaller than the amount being written or because it already holds earlier data, the process is put to sleep until space becomes available (assuming the socket is blocking). A successful return from write therefore only means the application buffer can be reused; it does not mean the peer has received the data.
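    A minimal sketch of reading and changing the send-buffer size with SO_SNDBUF; the 64 KB value is only an illustration, not a recommendation:

```c
#include <stdio.h>
#include <sys/socket.h>

/* Show the current send-buffer size, then request a different one. */
void tune_sndbuf(int sockfd)
{
    int size = 0;
    socklen_t len = sizeof(size);

    getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &size, &len);
    printf("current SO_SNDBUF: %d bytes\n", size);

    int wanted = 64 * 1024;   /* illustrative value only */
    setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted));

    getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &size, &len);
    printf("SO_SNDBUF is now: %d bytes\n", size);   /* Linux typically reports
                                                       twice the requested value */
}
```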

    One point worth noting about the datalink output queue: if the queue is full, the newly arriving packet is discarded and an error is passed back up the protocol stack; TCP simply retransmits the segment later. The data is not removed from the socket send buffer until the peer's acknowledgment arrives.

    For UDP, the kernel does not maintain an actual socket send buffer, although a send-buffer size can still be set with SO_SNDBUF; if the application writes a datagram larger than that size, the write fails with EMSGSIZE. No real buffer is needed because UDP never retransmits, so there is nothing to keep a copy of. As with TCP, a successful return from write does not mean the peer received the data; it only means the datagram (or its fragments) was added to the datalink output queue. If that queue is full, the datagram is dropped, and depending on the implementation the kernel may or may not return an error to the application.
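    A sketch of the EMSGSIZE case described above; the address and port are placeholders, and the 100 KB datagram is chosen to exceed both the configured limit and UDP's absolute 65535-byte maximum, so the call is guaranteed to fail:

```c
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    /* Per the text, SO_SNDBUF caps the largest datagram this socket may send. */
    int limit = 8 * 1024;
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &limit, sizeof(limit));

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(9999);                       /* placeholder port */
    inet_pton(AF_INET, "203.0.113.10", &peer.sin_addr);  /* placeholder address */

    /* 100 KB exceeds both the limit above and UDP's 65535-byte maximum. */
    size_t big = 100 * 1024;
    char *buf = calloc(1, big);

    if (sendto(fd, buf, big, 0, (struct sockaddr *)&peer, sizeof(peer)) < 0
        && errno == EMSGSIZE)
        printf("datagram too large for this socket: EMSGSIZE\n");

    free(buf);
    close(fd);
    return 0;
}
```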
