1.2 Distributed Systems: Network Communication Protocols


Network protocols: TCP/IP and UDP/IP

TCP/IP

TCP/IP (Transmission Control Protocol/Internet Protocol) is a reliable network data transmission protocol suite. It defines standards for how hosts connect to the Internet and how data is transferred between them.

The TCP/IP reference model groups the protocols of the TCP/IP suite into four abstraction layers.

Each layer is built on the services provided by the layer below it and in turn serves the layer above it.

ICMP: Internet Control Message Protocol

IGMP: Internet Group Management Protocol

ARP: Address Resolution Protocol

RARP: Reverse Address Resolution Protocol

The OSI model (Open Systems Interconnection reference model) is a standard framework proposed by the International Organization for Standardization (ISO) that attempts to interconnect computers worldwide into a single network.

Compared with TCP/IP, the OSI model defines additional layers, such as a presentation layer and a session layer.

Three-Way Handshake

The so-called three-way handshake is the procedure for establishing a TCP connection: the client and the server must exchange a total of 3 packets to confirm that the connection has been established.

(1) First handshake: The client sets the SYN flag to 1, randomly generates a sequence number seq=J, and sends the packet to the server. The client enters the SYN_SENT state, waiting for the server's confirmation.

(2) Second handshake: When the server receives the packet, the flag SYN=1 tells it that the client is requesting a connection. The server sets both the SYN and ACK flags to 1, sets ack=J+1, randomly generates its own sequence number seq=K, and sends the packet back to the client to confirm the connection request. The server enters the SYN_RCVD state.

(3) Third handshake: When the client receives the acknowledgment, it checks that ack equals J+1 and that the ACK flag is 1. If correct, it sets the ACK flag to 1, sets ack=K+1, and sends the packet to the server. The server then checks that ack equals K+1 and that the ACK flag is 1; if so, the connection is established, both client and server enter the ESTABLISHED state, the three-way handshake is complete, and the client and server can begin transmitting data.
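Applications never build these packets themselves: the kernel performs the whole three-way handshake inside connect() and accept(). A minimal Python sketch (the port number 54321 is an arbitrary assumption for illustration):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 54321  # arbitrary example address/port

# Passive open: the listening socket will answer incoming SYNs.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)

def serve():
    conn, addr = server.accept()  # returns only after the handshake completes
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Active open: connect() sends SYN, waits for SYN+ACK, replies with ACK.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))      # blocks until ESTABLISHED
print("connected to", client.getpeername())
client.close()
t.join()
server.close()
```

When connect() returns, the connection is in the ESTABLISHED state on both sides; the SYN/ACK exchange itself is invisible to the application.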

SYN attack:

During the three-way handshake, the connection that exists after the server has sent SYN+ACK but before it has received the client's ACK is called a half-open connection (half-open connect), and the server is in the SYN_RCVD state; once the ACK arrives, the server moves to the ESTABLISHED state. In a SYN attack, the attacker forges a large number of nonexistent IP addresses in a short period and continuously sends SYN packets to the server. The server replies with confirmation packets and waits for the clients' ACKs; because the source addresses do not exist, the server must keep retransmitting until it times out. These bogus SYN packets occupy the half-open connection queue for a long time, so legitimate SYN requests are dropped because the queue is full, causing network congestion or even system paralysis. A SYN attack is a typical DDoS attack. Detecting one is simple: when the server has a large number of half-open connections and the source IP addresses are random, you can conclude that a SYN attack is under way. The following command can reveal it:

# netstat -nap | grep SYN_RECV

Four-Way Wave (Connection Termination)

The three-way handshake is familiar; the four-way wave is probably heard of less often. The so-called four-way wave (four-way handshake for termination) is the procedure for tearing down a TCP connection: when disconnecting, the client and the server must exchange a total of 4 packets to confirm that the connection has been closed.

Simplex: data transfer is supported in one direction only.

Half-duplex: data may be transmitted in either of two directions, but at any given moment only one direction is allowed; in effect it is like simplex communication with a switchable direction.

Full-duplex: data may be transmitted in both directions simultaneously. Full duplex is the combination of two simplex channels, so it requires both the sending device and the receiving device to have independent transmit and receive capability.

Because a TCP connection is full-duplex, each direction must be closed separately. The principle is that when one side has finished sending data, it sends a FIN to terminate the connection in that direction. Receiving a FIN only means that no more data will flow in that direction; data can still be sent on the TCP connection in the other direction until a FIN is sent there as well. The side that closes first performs the active close, while the other side performs the passive close, as described below.

(1) First wave: the client sends a FIN to close client-to-server data transfer, and the client enters the FIN_WAIT_1 state.

(2) Second wave: after receiving the FIN, the server sends an ACK to the client with an acknowledgment number of the received sequence number + 1 (like a SYN, a FIN occupies one sequence number). The server enters the CLOSE_WAIT state, and the client, on receiving this ACK, enters FIN_WAIT_2.

(3) Third wave: the server sends a FIN to close server-to-client data transfer, and the server enters the LAST_ACK state.

(4) Fourth wave: after receiving the FIN, the client enters the TIME_WAIT state and sends an ACK to the server with an acknowledgment number of the received sequence number + 1. The server enters the CLOSED state, completing the four-way wave.
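The half-close described above is visible from user space: calling shutdown(SHUT_WR) sends our FIN while still letting us receive the peer's remaining data. A small Python sketch (port 54322 is an arbitrary assumption):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 54322  # arbitrary example address/port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)

def serve():
    conn, _ = server.accept()
    data = conn.recv(1024)       # reads the client's data; client's FIN follows
    conn.sendall(data.upper())   # our direction is still open: we can reply
    conn.close()                 # third wave: server's FIN

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))
client.sendall(b"bye")
client.shutdown(socket.SHUT_WR)  # first wave: send FIN, enter FIN_WAIT_1
reply = client.recv(1024)        # data still flows server -> client
client.close()                   # final ACK is handled by the kernel
t.join()
server.close()
print(reply)
```

Only the client-to-server direction is closed by shutdown(SHUT_WR); the reply proves the other direction stayed usable until the server sent its own FIN.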

Principle of TCP Communication

First, in TCP communication each TCP socket has a send buffer and a receive buffer in the kernel. TCP's full-duplex mode of operation and TCP's sliding window both depend on these two independent buffers and on how full they are.

The receive buffer caches data in the kernel: if the application process has not yet read it with the socket's read method, the data remains cached in the receive buffer. Whether or not the process reads from the socket, data sent by the peer is received by the kernel and cached in the socket's kernel receive buffer.

All that read does is copy data from the kernel receive buffer into the application's user-space buffer.

When a process calls send on a socket, generally speaking the data is copied from the application's user-space buffer into the socket's kernel send buffer, and then send returns to the caller. In other words, when send returns, the data has not necessarily been delivered to the peer.
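This copy-then-return behavior is easy to observe with a connected socket pair in Python (socket.socketpair() is used here purely for a self-contained illustration):

```python
import socket

# send() only copies bytes into the kernel send buffer; a successful
# return does not mean the peer has received the data.
a, b = socket.socketpair()          # two connected sockets in one process

payload = b"x" * 1024
n = a.send(payload)                 # returns as soon as the copy into the
                                    # kernel send buffer succeeds
print("send() returned:", n, "bytes accepted by the kernel")

# Only now does the receiving side copy the data out of its kernel
# receive buffer into user space.
received = b.recv(4096)
assert received == payload[:len(received)]
a.close()
b.close()
```

Here send() returns before the other side has called recv() at all; the bytes were simply parked in the kernel buffers in between.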

What is a sliding window protocol

Both the sender and the receiver maintain a sequence of data frames, called a window. The sender's window size is advertised by the receiver to control the sending rate, so that the receiver's buffer does not overflow; controlling the flow in this way also helps avoid network congestion.

In the accompanying figure, data frames 4, 5, and 6 have been sent but their ACKs have not yet arrived, while frames 7, 8, and 9 are waiting to be sent. The sender's window size is therefore 6, as advertised by the receiver (strictly, the congestion window cwnd must also be considered; here we assume cwnd > rwnd for the moment). At this point, if the sender receives the ACK for frame 4, the left edge of the window moves right and the right edge extends right: the window slides forward, which means data frame 10 can now also be sent.
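The window movement described above can be sketched as a toy model (the frame numbers and window size follow the description; this is an illustration of the sliding rule, not a real TCP implementation):

```python
# Toy model of the sender's sliding window: frames 4-9 sit inside a
# window of size 6 (4, 5, 6 sent but unacknowledged; 7, 8, 9 sendable).
window_size = 6          # advertised by the receiver (rwnd)
left = 4                 # oldest unacknowledged frame

def window(left, window_size):
    """Frame numbers currently inside the send window."""
    return list(range(left, left + window_size))

before = window(left, window_size)

# ACK for frame 4 arrives: the left edge moves right, the right edge
# extends right, and the window slides forward by one frame.
left += 1
after = window(left, window_size)

print("before ACK:", before)   # [4, 5, 6, 7, 8, 9]
print("after ACK: ", after)    # [5, 6, 7, 8, 9, 10]
```

After the ACK, frame 10 has entered the window and may be sent, exactly as the figure describes.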

Once we understand how sockets read and write data at this level, "blocking mode" is easy to understand. For a process reading data from a socket: if the receive buffer is empty, the thread calling the socket's read method blocks until data arrives in the receive buffer. For a thread writing data to a socket: if the length of the data to be sent is greater than the free space in the send buffer, it blocks in the write method, waiting for the send buffer's contents to be sent out onto the network, then continues with the next piece of data, looping until all the data has been written into the send buffer.

From the process analyzed above, the traditional blocking socket model forces each socket to be bound to one thread that handles its data. If either party in the communication processes data slowly, it directly drags down the other side, whose thread is then forced to waste a great deal of time waiting on I/O; this is the "flaw" of the blocking socket model. However, when there are only a small number of TCP connections and both sides can transfer data quickly, this model actually gives the highest performance.
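The read-side blocking described above can be demonstrated directly: one thread sleeps before sending, and the receiving thread is parked inside recv() for the whole delay (the 0.2 s delay is an arbitrary value chosen for the demonstration):

```python
import socket
import threading
import time

# In the default blocking mode, recv() parks the calling thread until
# data reaches the kernel receive buffer.
a, b = socket.socketpair()

def delayed_sender():
    time.sleep(0.2)          # the receiver blocks during this delay
    a.sendall(b"ping")

t = threading.Thread(target=delayed_sender)
start = time.monotonic()
t.start()
data = b.recv(1024)          # blocks ~0.2 s: one thread tied to this socket
elapsed = time.monotonic() - start
t.join()
a.close()
b.close()
print(data, round(elapsed, 2))
```

The measured elapsed time is roughly the sender's delay: the receiving thread did nothing useful while waiting, which is exactly the cost that motivates non-blocking and multiplexed I/O when many connections are involved.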

