OSI and layered models
OSI model (7 layers): physical layer, data link layer, network layer, transport layer, session layer, presentation layer, application layer.
TCP/IP model (4 layers): network interface layer, internet layer, transport layer, application layer.
Five-layer model (5 layers): physical layer, data link layer, network layer, transport layer, application layer.
The OSI 7-layer model is primarily of theoretical interest, while the 4-layer TCP/IP model is what is actually used (although the bottom network interface layer of the TCP/IP model has little concrete content of its own). The 5-layer model is a compromise between the 7-layer model and the 4-layer TCP/IP model and is used mainly for teaching networking principles.
The functions of each layer in the OSI 7-layer model are as follows:
1. Physical layer: transmits bits over the physical medium; defines mechanical and electrical specifications (PDU: bit)
2. Data link layer: assembles bits into frames and performs node-to-node delivery (PDU: frame)
3. Network layer: responsible for delivering packets from the source host to the destination host and for internetworking (PDU: packet)
4. Transport layer: provides end-to-end reliable message delivery and error recovery (PDU: segment)
5. Session layer: establishes, manages, and terminates sessions (PDU: SPDU, session protocol data unit)
6. Presentation layer: translates, encrypts, and compresses data (PDU: PPDU, presentation protocol data unit)
7. Application layer: provides the means for applications to access the OSI environment (PDU: APDU, application protocol data unit)
The functions of each layer in the 5-layer teaching model are as follows:
1. Application layer: provides services directly to the user's application processes
2. Transport layer: provides communication services between processes on two hosts
3. Network layer: provides communication services between different hosts over a packet-switched network; selects appropriate routes for packets
4. Data link layer: encapsulates packets handed down from the network layer into frames and transmits the frames over the link between adjacent nodes
5. Physical layer: transparently transmits the bit stream
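The top-down encapsulation through the five layers can be sketched in Python. This is a toy model: the header strings and their contents are purely illustrative, not real protocol formats.

```python
def encapsulate(app_data: bytes) -> bytes:
    """Toy top-down encapsulation through the five-layer model."""
    segment = b"TCP|" + app_data          # transport layer adds a segment header
    packet  = b"IP|" + segment            # network layer wraps the segment in a packet
    frame   = b"ETH|" + packet + b"|FCS"  # data link layer frames the packet
    return frame                          # physical layer then transmits the bits

frame = encapsulate(b"hello")
print(frame)  # b'ETH|IP|TCP|hello|FCS'
```

Each layer treats everything handed down from above as opaque payload and only adds (or, on receive, strips) its own header.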
IP Address Classification
Class A addresses: leading bit 0, first byte range 0~127; usable range 1.0.0.0-126.255.255.255 (0 and 127 are reserved, 127 for loopback)
Class B addresses: leading bits 10, first byte range 128~191 (128.0.0.0-191.255.255.255)
Class C addresses: leading bits 110, first byte range 192~223 (192.0.0.0-223.255.255.255)
Class D addresses: leading bits 1110, used for multicast
Private (reserved) ranges: 10.0.0.0-10.255.255.255, 172.16.0.0-172.31.255.255, 192.168.0.0-192.168.255.255.
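The classification rules above reduce to checking the first byte, which can be sketched directly (the `classify` helper is illustrative; `ipaddress` is the standard library module):

```python
import ipaddress

def classify(addr: str) -> str:
    """Classify an IPv4 address by its leading bits (classful addressing)."""
    first = int(addr.split(".")[0])
    if first < 128:
        return "A"   # leading bit  0
    if first < 192:
        return "B"   # leading bits 10
    if first < 224:
        return "C"   # leading bits 110
    if first < 240:
        return "D"   # leading bits 1110 (multicast)
    return "E"       # leading bits 1111 (reserved)

print(classify("10.1.2.3"))      # A
print(classify("172.16.0.1"))    # B
print(classify("224.0.0.1"))     # D
# The standard library already knows the private (reserved) ranges:
print(ipaddress.ip_address("192.168.1.1").is_private)  # True
```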
The difference between TCP and UDP
TCP (Transmission Control Protocol) is a connection-oriented protocol: before any data can be sent or received, a reliable connection must be established between the two ends.
UDP (User Datagram Protocol) is a connectionless protocol: no connection is set up between source and destination before transmission; when the source wants to transmit, it simply grabs the data from the application and pushes it onto the network as quickly as possible.
The differences between the two:
1. TCP is connection-oriented; UDP is connectionless;
2. TCP demands more system resources, UDP fewer;
3. The structure of a UDP program is relatively simple;
4. TCP is stream-oriented (it treats data as an unstructured byte stream); UDP is datagram-oriented;
5. TCP guarantees data correctness and ordering; UDP may drop packets and does not guarantee ordering.
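Points 1 and 4 can be observed directly with the standard `socket` module on the loopback interface. This sketch shows that UDP preserves message boundaries while TCP delivers one unstructured byte stream (ports are chosen by the OS; the payloads are arbitrary):

```python
import socket

# UDP: connectionless; each sendto() is one datagram, and message
# boundaries are preserved at the receiver.
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))
udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"one", udp_rx.getsockname())
udp_tx.sendto(b"two", udp_rx.getsockname())
d1 = udp_rx.recvfrom(1024)[0]
d2 = udp_rx.recvfrom(1024)[0]
print(d1, d2)                     # b'one' b'two' -- one datagram per recvfrom

# TCP: a connection must be established first (connect() performs the
# three-way handshake); afterwards the data is one byte stream.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()
cli.sendall(b"one")
cli.sendall(b"two")
buf = b""
while len(buf) < 6:               # boundaries are NOT preserved; read the stream
    buf += conn.recv(1024)
print(buf)                        # b'onetwo'
```

The TCP receiver must loop: the two `sendall` calls may arrive as one chunk, two chunks, or any other split, which is exactly what "unstructured byte stream" means.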
TCP Packet Header format
For the meaning of each field in the header format, consult a reference.
Explanation: http://blog.csdn.net/wilsonpeng3/article/details/12869233
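One practical way to learn the header fields is to parse them. This sketch unpacks the fixed 20-byte TCP header with the standard `struct` module (the sample values used to build the test header are arbitrary):

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte TCP header (network byte order)."""
    sport, dport, seq, ack, off, flags, win, cksum, urg = \
        struct.unpack("!HHIIBBHHH", raw[:20])
    return {
        "src_port": sport,
        "dst_port": dport,
        "seq": seq,
        "ack": ack,
        "data_offset": off >> 4,   # header length in 32-bit words
        "flags": {                 # the six classic control bits
            "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
            "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
            "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        },
        "window": win,
        "checksum": cksum,
        "urgent_ptr": urg,
    }

# Build a sample SYN segment header and parse it back.
hdr = struct.pack("!HHIIBBHHH", 12345, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
h = parse_tcp_header(hdr)
print(h["flags"]["SYN"])    # True
print(h["data_offset"])     # 5 -> 20-byte header, no options
```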
TCP's three-way handshake, four-way wave, and the purpose of TIME_WAIT
Three-way handshake to establish a connection:
First, the client sends a connection request (SYN) segment. The server accepts the connection, replies with a SYN+ACK segment, and allocates resources for the connection. After receiving the SYN+ACK, the client sends an ACK segment back to the server and allocates its own resources; the TCP connection is now established.
If the three-way handshake were reduced to two, stale connection request segments could create invalid connections. Consider the following scenario under a two-way handshake: the client sends a connection request segment, which gets stuck somewhere in the network; the client times out and sends a second request. The server receives the second request, allocates resources, establishes the connection, exchanges data on it, and then the connection is closed. If the earlier, delayed request segment now finally reaches the server, a two-way handshake would make the server establish a connection for this stale request and allocate resources for it, even though the client has no data to send, so the resources are wasted. With three handshakes the stale request never receives the client's final ACK, so no invalid connection is created.
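The difference can be made concrete with a toy event-driven model of the server. This is a deliberately simplified sketch (the event trace and helper names are invented for illustration; real TCP also involves sequence numbers and timers):

```python
def server_two_way(events):
    """Two-way handshake: any SYN immediately creates a connection."""
    open_conns = 0
    for ev in events:
        if ev == "SYN":
            open_conns += 1          # resources allocated right away
        elif ev == "FIN":
            open_conns -= 1
    return open_conns

def server_three_way(events):
    """Three-way handshake: a connection opens only after the final ACK."""
    open_conns = 0
    pending = 0                      # SYNs answered with SYN+ACK, awaiting ACK
    for ev in events:
        if ev == "SYN":
            pending += 1
        elif ev == "ACK" and pending:
            pending -= 1
            open_conns += 1          # resources committed only now
        elif ev == "FIN":
            open_conns -= 1
    return open_conns

# Client connects, talks, disconnects; then its stale duplicate SYN arrives.
trace = ["SYN", "ACK", "FIN", "SYN"]   # last SYN is the delayed duplicate
print(server_two_way(list(trace)))     # 1 -> a dead connection wastes resources
print(server_three_way(list(trace)))   # 0 -> stale SYN never completes
```

In the three-way version, the stale SYN only sits in `pending` and is eventually discarded on timeout; in the two-way version it becomes a half-open connection the server must hold forever.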
Four-way wave to release the connection:
Either side can initiate the teardown. Suppose the client initiates it by sending a FIN segment. When the server receives the FIN, it means "the client has no more data to send to you"; but if the server still has unsent data, it need not close its socket yet and can keep sending. So the server first sends an ACK: "I received your request, but I'm not ready yet; please keep waiting for my message." The client then enters the FIN_WAIT state and waits for the server's FIN. Once the server has finished sending its data, it sends its own FIN to the client: "OK, my side is done sending and is ready to close the connection." On receiving the FIN, the client knows the connection can be closed, but it still does not trust the network: afraid that the server will never learn of the close, it sends an ACK and enters the TIME_WAIT state, so that if the server does not receive the ACK it can retransmit. When the server receives the ACK, it knows the connection can be closed. If the client waits 2MSL without hearing anything further, it concludes the server has closed normally and closes its side as well.
The teardown takes four segments because the side that receives the first disconnect request may still have data to send; it can therefore only acknowledge the request, not close immediately. The connection is released only once each side has sent its own FIN (after it has no more data to send) and received the other side's ACK.
The final TIME_WAIT lasting 2MSL (twice the maximum segment lifetime) serves two main purposes:
1. It ensures the closing ACK from the requesting end A reaches the other end B. The last ACK that A sends may be lost, in which case B retransmits FIN+ACK; during the 2MSL wait, A can receive this retransmission, resend the acknowledgment, and restart the 2MSL timer. This guarantees B can close normally. If the wait were shorter than 2MSL and A closed early, B might never receive the ACK and would keep retransmitting FIN+ACK without ever being able to close.
2. It prevents old duplicate connection request segments from interfering. After the last ACK is sent, waiting 2MSL guarantees that all segments produced during this connection have disappeared from the network, ensuring that a new connection is not affected by delayed segments from the previous one (this is explained in detail in UNP).
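The first reason can be captured in a tiny decision model. This is a toy sketch, not real TCP: the function and its parameters are invented to show why skipping the 2MSL wait strands the passive closer.

```python
def passive_closer_can_close(final_ack_lost: bool, active_waits_2msl: bool) -> bool:
    """Return True if the passive closer B ends up properly closed.

    A is the active closer sending the final ACK; B retransmits FIN+ACK
    if that ACK is lost, and the retransmission arrives within 2*MSL.
    """
    if not final_ack_lost:
        return True              # B got the ACK and closes normally
    if active_waits_2msl:
        return True              # A is still in TIME_WAIT and re-ACKs B's FIN
    return False                 # A already vanished; B's retransmitted FIN
                                 # gets no acknowledgment, so B cannot close

print(passive_closer_can_close(final_ack_lost=True, active_waits_2msl=True))   # True
print(passive_closer_can_close(final_ack_lost=True, active_waits_2msl=False))  # False
```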
Addendum:
TCP keepalive timer: suppose a client and a server establish a TCP connection, but the client then crashes instead of closing the connection normally. The server has no way of knowing whether the connection still exists and might wait pointlessly on a dead connection. A mechanism is needed to avoid this: each time the server receives data from the client, it resets a keepalive timer (typically about 2 hours). If the timer expires without further data, the server sends a probe segment, then another every 75 seconds; if 10 consecutive probes get no response from the client, the client is considered to have disconnected unexpectedly.
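These knobs are exposed via socket options. A minimal sketch, assuming a Linux host for the `TCP_KEEP*` options (hence the `hasattr` guard; the numeric values shown mirror the classic defaults and are illustrative, not mandatory):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Turn keepalive on for this socket (portable).
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-specific fine tuning
    # Idle time before the first probe (the "keepalive timer", ~2 hours).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200)
    # Interval between successive probes (~75 seconds).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)
    # Number of unanswered probes before the connection is declared dead.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)

keepalive_on = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
print(keepalive_on)  # True
```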
TCP Congestion Control
Bandwidth in a computer network, and the buffers and processors in its switching nodes, are all network resources. If at some point in time the demand for a resource exceeds what that resource can provide, network performance degrades. This situation is called congestion.
Congestion control prevents too much data from being injected into the network, so that routers and links are not overloaded. Congestion control is a global process; it differs from flow control, which is point-to-point traffic control: flow control makes the sender slow down so that the receiver has time to accept the data.
Congestion Control Method:
1. Slow Start (Slow-start)
2. Congestion avoidance (congestion avoidance)
3. Fast retransmission (Fast retransmit)
4. Fast recovery (Fast recovery)
Main process of TCP congestion control
Slow start phase:
When a new TCP connection is established, the congestion window (cwnd) is initialized to one packet. The source sends up to cwnd worth of data, and each ACK received increases cwnd by one segment, so cwnd grows exponentially with the round-trip time (RTT) and the amount of data the source injects into the network rises sharply. When congestion occurs, the congestion window is halved or reset to 1; because cwnd at most doubles per RTT, slow start ensures that the rate at which the source is sending when congestion is detected is at most about twice the link bandwidth.
Congestion Avoidance phase:
If the TCP source detects a timeout or receives three duplicate ACKs, it assumes the network is congested (packet corruption and loss caused by transmission errors are rare, <<1%). It then enters the congestion avoidance phase: the slow start threshold (ssthresh) is set to half the current congestion window, and on a timeout cwnd is reset to 1. When cwnd > ssthresh, TCP runs the congestion avoidance algorithm: cwnd is increased by 1/cwnd for each ACK received, so cwnd grows by 1 per RTT. In the congestion avoidance phase, cwnd therefore grows linearly, not exponentially.
Fast retransmit and fast recovery phases:
Fast retransmit: when the TCP source receives three duplicate ACKs, i.e. a packet has been lost, it immediately retransmits the lost packet without waiting for the RTO to expire. ssthresh is set to half the current cwnd, and cwnd is cut in half. Fast recovery is based on the "conservation of packets" principle of the pipe model: the number of packets in flight across the network stays constant, so a "new" packet may enter the network only when an "old" packet leaves it. Each duplicate ACK the sender receives implies one packet has left the network, so the congestion window is increased by 1 (one segment length; having received three duplicate ACKs, the window grows by 3).
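The cwnd trajectory described above can be simulated in a few lines. A minimal sketch, counting cwnd in whole segments per RTT; the initial ssthresh and the RTT at which the timeout fires are illustrative parameters:

```python
def simulate(rtts: int, ssthresh: int = 16, timeout_at: int = 8) -> list:
    """Return cwnd (in segments) at the start of each RTT."""
    cwnd, history = 1, []
    for rtt in range(rtts):
        history.append(cwnd)
        if rtt == timeout_at:
            ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
            cwnd = 1                       # timeout: back to slow start
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: double per RTT
        else:
            cwnd += 1                      # congestion avoidance: +1 per RTT
    return history

print(simulate(12))  # [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4]
```

The exponential ramp (1, 2, 4, 8, 16), the linear climb past ssthresh (17, 18, 19, 20), and the collapse to 1 on timeout are all visible in the output.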
TCP sliding window and Go-Back-N
The basic principle of a sliding window protocol is that, at any given moment, the sender maintains a set of consecutive sequence numbers it is allowed to send, called the send window, while the receiver maintains a set of consecutive sequence numbers it is allowed to accept, called the receive window. The upper and lower bounds of the send and receive windows need not coincide, and their sizes may even differ; different sliding window protocols generally use different window sizes. The sequence numbers inside the send window represent frames that have been sent but not yet acknowledged, plus those that may still be sent.
In the Go-Back-N protocol, the sender does not stop and wait for an acknowledgment after each data frame; it sends several frames in a row, and if it receives acknowledgments during this continuous sending it simply keeps going. Frames are acknowledged cumulatively: rather than acknowledging every frame individually, the receiver, after receiving several frames, sends one acknowledgment for the last frame that arrived in order, indicating that all frames up to and including that one have been received correctly. The advantage of cumulative acknowledgment is that it is easy to implement and a lost acknowledgment need not trigger a retransmission by itself. The disadvantage is that it cannot tell the sender exactly which frames were received. For example, suppose the sender transmits 5 consecutive frames and the third is lost while the other four arrive correctly; the receiver can only acknowledge the first two, so the sender does not know the fate of the last three and must retransmit all three. This is "going back N frames": rolling back and retransmitting the N frames that were already sent after the loss. Under poor link quality this protocol becomes very inefficient, causing a large amount of duplicate transmission.
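The retransmission pattern in that example can be reproduced with a toy sender. A heavily simplified sketch: the receiver is modeled only as a cumulative ACK, the window always covers all outstanding frames, and the lost frame is assumed to arrive on its retransmission.

```python
def go_back_n(total: int, lost: set) -> list:
    """Return the transmission log of frame numbers for `total` frames,
    where frames in `lost` are dropped on their first transmission."""
    sent = []                       # log of every (re)transmission
    base = 0                        # oldest unacknowledged frame
    while base < total:
        for seq in range(base, total):
            sent.append(seq)        # send the whole outstanding window
        # Receiver accepts only in-order frames; the cumulative ACK
        # covers everything before the first lost frame.
        for seq in range(base, total):
            if seq in lost:
                lost.discard(seq)   # assume the retransmission will succeed
                break
            base = seq + 1
        else:
            base = total
    return sent

# 5 frames, frame 2 (0-indexed) lost: frames 2, 3, 4 are all resent.
print(go_back_n(5, {2}))  # [0, 1, 2, 3, 4, 2, 3, 4]
```

Compare selective repeat, where only frame 2 would be retransmitted; the gap between the two grows with the window size and the loss rate.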
Summary of "Network" TCP Basics