2018.10.11----2018.10.13 Computer Networks (written over two days)


    • I. The structure and function of the OSI and TCP/IP layers, and their protocols
      • Architecture of the five-layer protocol stack
      • 1 Application Layer
        • Domain Name System
        • HTTP protocol
      • 2 Transport Layer
        • The transport layer mainly uses the following two protocols
        • The main features of UDP
        • The main features of TCP
      • 3 Network Layer
      • 4 Data Link Layer
      • 5 Physical Layer
      • Summary
    • II. TCP three-way handshake and four-way teardown (an interview regular)
      • Why three handshakes are needed
      • Why does the server send back a SYN?
      • Why must an ACK be sent along with the SYN?
      • Why are four waves needed?
    • III. Differences between the TCP and UDP protocols
    • IV. How TCP guarantees reliable transmission
      • Stop-and-wait protocol
      • Automatic Repeat reQuest (ARQ) protocol
      • Continuous ARQ protocol
      • Sliding window
      • Flow control
      • Congestion control
    • V. Entering a URL in the browser ->> the process of displaying the page (an interview regular)
    • VI. Status Codes
    • VII. The relationship between various protocols and the HTTP protocol
    • VIII. HTTP long connections and short connections
    • Closing notes
      • Review of frequently asked computer network questions
      • Suggestions

Compared with the previous version of this computer-network interview knowledge summary, this version adds "How TCP guarantees reliable transmission" (timeout retransmission, the stop-and-wait protocol, sliding windows, flow control, congestion control, and so on), and some of the existing content has been supplemented.

I. The structure and function of the OSI and TCP/IP layers, and the protocols in the five-layer architecture

When learning computer networks we generally take a compromise approach that combines the advantages of OSI and TCP/IP: a five-layer architecture, which is both concise and easy to explain clearly.

Combined with how the Internet actually works, here is a very brief introduction to the role of each layer, from top to bottom.

1 Application Layer

The task of the application layer is to accomplish specific network applications through the interaction between application processes (processes: programs that are running on hosts). The application-layer protocol defines the rules for communication and interaction between application processes. Different network applications require different application-layer protocols. There are many application-layer protocols on the Internet, such as DNS (the Domain Name System), the HTTP protocol that supports the Web, the SMTP protocol that supports e-mail, and so on. The data unit exchanged at the application layer is called a message.

Domain Name System

The Domain Name System (DNS) is a core Internet service. It is a distributed database that maps domain names and IP addresses to each other, making it easier for people to access the Internet without having to remember the IP number strings that only machines can read directly. (Baidu Encyclopedia) For example, a company's web site can be seen as its online portal, and its domain name plays the same role as a street address; usually the domain name uses the company's name or abbreviation: IBM's domain name is www.ibm.com, Oracle's is www.oracle.com, Cisco's is www.cisco.com, and so on.
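
As a minimal illustration of what DNS does, the sketch below uses Python's standard library to resolve one of the example domain names above into IP addresses (the port number 80 is only there to narrow the result to TCP endpoints):

```python
import socket

# Ask the resolver (and ultimately DNS) for the IP addresses behind a domain name.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.ibm.com", 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])   # prints the address family and the resolved IP address
```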

HTTP protocol

The Hypertext Transfer Protocol (HTTP) is one of the most widely used network protocols on the Internet. All WWW documents must comply with this standard. HTTP was originally designed to provide a way to publish and receive HTML pages. (Baidu Encyclopedia)
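
A minimal sketch of an HTTP exchange using Python's standard library; example.com stands in for any web server:

```python
import http.client

# Send a plain HTTP GET request and inspect the response.
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)            # status line, e.g. "200 OK"
print(resp.getheader("Content-Type"))      # one of the response headers
body = resp.read()                         # the HTML document itself
conn.close()
```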

2 Transport Layer

The primary task of the transport layer is to provide a general-purpose data transfer service for communication between processes on two hosts. Application processes use this service to deliver application-layer messages. "General-purpose" means the service is not tied to a particular network application; many applications can use the same transport-layer service. Because a host can run multiple application processes at the same time, the transport layer has the functions of multiplexing and demultiplexing. Multiplexing means that multiple application-layer processes can use the transport-layer service below them at the same time; demultiplexing is the reverse: the transport layer delivers the received data to the corresponding process in the application layer.

The transport layer mainly uses the following two protocols
    1. Transmission Control Protocol, TCP (Transmission Control Protocol) - provides a connection-oriented, reliable data transfer service.
    2. User Datagram Protocol, UDP (User Datagram Protocol) - provides a connectionless, best-effort data transfer service (no guarantee of reliable delivery).
The main features of UDP
    1. UDP is connectionless;
    2. UDP uses best-effort delivery, i.e. it does not guarantee reliable delivery, so hosts do not need to maintain complex connection state (with its many parameters);
    3. UDP is message-oriented (see the sketch after this list);
    4. UDP has no congestion control, so congestion on the network does not reduce the sending rate of the source host (useful for real-time applications such as live streaming and real-time video conferencing);
    5. UDP supports one-to-one, one-to-many, many-to-one, and many-to-many interactive communication;
    6. UDP has a small header overhead of only 8 bytes, shorter than TCP's 20-byte header.
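
A minimal sketch of points 1 and 3, assuming the loopback address and port 9999 are free to use: no connection is set up, and each sendto()/recvfrom() pair carries exactly one message.

```python
import socket

# Receiver: bind a UDP socket to a local port and wait for a datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

# Sender: no handshake, no connection state; just hand the datagram to the network.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))

# Message-oriented: one recvfrom() returns exactly one datagram (or nothing, if it was lost).
data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close()
receiver.close()
```
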
The main features of TCP
    1. TCP is connection-oriented. (Like a phone call: you must dial to establish the connection before talking, and hang up to release the connection afterwards);
    2. Each TCP connection has exactly two endpoints; a TCP connection can only be point-to-point (one-to-one);
    3. TCP provides reliable delivery. Data transmitted over a TCP connection arrives error-free, without loss, without duplication, and in order;
    4. TCP provides full-duplex communication. TCP allows the application processes on both sides of the communication to send data at any time. Both ends of a TCP connection have a send buffer and a receive buffer, used to temporarily hold the data exchanged by the two sides;
    5. TCP is byte-stream oriented. A "stream" in TCP is a sequence of bytes flowing into or out of a process. "Byte-stream oriented" means that although the application hands data to TCP one block at a time (of varying size), TCP treats the application's data simply as an unstructured stream of bytes (see the sketch after this list).
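
A small sketch of point 5, using a loopback connection inside one process (the short sleep is only there to let both segments arrive): two separate writes by the sender can come back as a single read, because TCP preserves the byte order but not the boundaries of individual writes.

```python
import socket
import time

# A listening socket and a client in the same process, over the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

# Two separate sends on the client side...
client.sendall(b"hello, ")
client.sendall(b"world")
time.sleep(0.1)

# ...will typically come back as one read: the byte stream has no message boundaries.
print(conn.recv(1024))               # likely b"hello, world"

client.close()
conn.close()
server.close()
```
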
3 Network Layer

There may be many data links, or even many communication subnets, between two computers communicating in a computer network. The task of the network layer is to select suitable inter-network routes and switching nodes to ensure the timely delivery of data. When sending data, the network layer encapsulates the segments or user datagrams produced by the transport layer into packets for transmission. In the TCP/IP architecture, because the network layer uses the IP protocol, these packets are also called IP datagrams, or simply datagrams.

Note: do not confuse the transport layer's "user datagram (UDP)" with the network layer's "IP datagram". In addition, regardless of the layer, data units can generally be referred to as "packets".

It should be emphasized that the word "network" in "network layer" does not mean a concrete network in the everyday sense; it is the name of the third layer in the computer network architecture model.

The Internet is a large number of heterogeneous networks interconnected by routers. The network-layer protocol used by the Internet is the connectionless Internet Protocol (IP) together with many routing protocols, so the Internet's network layer is also called the internetwork layer or the IP layer.

4 Data Link Layer

The data link layer is usually just called the link layer. Data transfer between two hosts always happens over one link segment at a time, which requires dedicated link-layer protocols. When data is transferred between two adjacent nodes, the data link layer encapsulates the IP datagrams handed down from the network layer into frames and transmits the frames on the link between the two adjacent nodes. Each frame includes the data plus the necessary control information (such as synchronization information, address information, and error control).

When receiving data, the control information lets the receiving side know at which bit a frame starts and at which bit it ends. This way, after receiving a frame, the data link layer can extract the data portion and hand it up to the network layer.

The control information also lets the receiving side detect errors in the received frame. If an error is found, the data link layer simply discards the erroneous frame, to avoid wasting network resources by continuing to forward it. If the errors that occur in link-layer transmission need to be corrected (that is, the data link layer must not only detect errors but also correct them), then a reliable transmission protocol has to be used. This makes the link-layer protocol more complex.

5 Physical Layer

The unit of data transmitted at the physical layer is the bit.
The function of the physical layer is to realize transparent transmission of the bit stream between adjacent computer nodes, shielding the differences between specific transmission media and physical devices as much as possible. The data link layer above it then does not need to consider what the network's specific transmission medium is. "Transparent transmission of the bit stream" means the bit stream is unchanged after passing over the actual circuit; to the transmitted bit stream, the circuit appears invisible.

Of all the protocols used in the Internet, the most important and best known are TCP and IP. What we usually call TCP/IP does not necessarily refer only to those two specific protocols; it often stands for the entire TCP/IP protocol family used by the Internet.

Summary

Above we have taken a first look at the five-layer architecture of computer networks; below is a summary diagram of the seven-layer architecture. Image source: 7064869

II. TCP three-way handshake and four-way teardown (an interview regular)

To deliver data to its destination accurately and without error, TCP employs a three-way handshake strategy.

Cartoon Illustration:

Image source: "Illustrated HTTP"

Put simply (a sketch of the first two handshakes follows this list):

    • Client – sends a packet with the SYN flag set – first handshake – Server
    • Server – sends back a packet with the SYN/ACK flags set – second handshake – Client
    • Client – sends a packet with the ACK flag set – third handshake – Server
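
A sketch of the first two handshakes only, using the third-party scapy library to craft a raw SYN. This needs root privileges, example.com is just a placeholder target, and the kernel will usually answer the unexpected SYN/ACK with a RST, so no real connection is left behind:

```python
# pip install scapy; must be run with raw-socket (root) privileges.
from scapy.all import IP, TCP, sr1

syn = IP(dst="example.com") / TCP(dport=80, flags="S", seq=100)   # first handshake: SYN
reply = sr1(syn, timeout=2)                                       # wait for the server's answer

if reply is not None and reply.haslayer(TCP):
    flags = str(reply[TCP].flags)                  # e.g. "SA" for a SYN/ACK
    if "S" in flags and "A" in flags:
        print("got SYN/ACK, ack =", reply[TCP].ack)   # should be our seq + 1, i.e. 101
```
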
Why three handshakes are needed

The purpose of the three-way handshake is to establish a reliable communication channel. Communication, simply put, is sending and receiving data, and the main purpose of the three handshakes is for both sides to confirm that their own sending and receiving work normally.

First handshake: the Client can confirm nothing yet; the Server confirms that the other side's sending is normal.

Second handshake: the Client confirms that its own sending and receiving are normal and that the other side's sending and receiving are normal; the Server confirms that its own receiving is normal and that the other side's sending is normal.

Third handshake: the Client confirms that its own sending and receiving are normal and that the other side's sending and receiving are normal; the Server confirms that its own sending and receiving are normal and that the other side's sending and receiving are normal.

So three handshakes are what it takes for both sides to confirm that sending and receiving work in both directions; none of them can be omitted.

Why does the server send back a SYN?

The receiving end sends a SYN back to tell the sender: "the message I received really is the one you sent."

SYN is the handshake signal used when establishing a TCP/IP connection. When a normal TCP connection is set up between a client and a server, the client first sends a SYN message, the server answers with SYN-ACK to indicate that the message was received, and the client finally responds with an ACK (acknowledgement: a transmission-control character sent back to the sending station during data communication, indicating that the data sent has been received without error). This establishes a reliable TCP connection between the client and the server, and data can then be exchanged between them.

Why must an ACK be sent along with the SYN?

For the two sides to communicate, both directions must work correctly. The SYN proves that the channel from the sender to the receiver is fine, but the channel from the receiver back to the sender still needs the ACK signal to verify it.

Disconnecting a TCP connection requires the "four waves":

    • Client - sends a FIN to shut down data transfer from the client to the server
    • Server - on receiving this FIN, sends back an ACK whose acknowledgment number is the received sequence number plus 1. Like a SYN, a FIN occupies one sequence number
    • Server - closes the connection to the client and sends a FIN to the client
    • Client - sends back an ACK, with the acknowledgment number set to the received sequence number plus 1
Why are four waves needed?

After its data transfer is complete, either party can send a connection-release notification; after the other side acknowledges it, the connection enters a half-closed state. When the other side also has no more data to send, it sends its own connection-release notification, and once that is acknowledged the TCP connection is completely closed.
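
A minimal sketch of the half-closed state, over a loopback connection: shutdown(SHUT_WR) sends the client's FIN (the first "wave"), but data can still flow in the other direction until the server closes as well.

```python
import socket

# Set up a loopback TCP connection inside one process.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(listener.getsockname())
server_conn, _ = listener.accept()

# The client has no more data to send: shutdown(SHUT_WR) sends a FIN,
# leaving the connection half-closed (the client can still receive).
client.shutdown(socket.SHUT_WR)

print(server_conn.recv(1024))                      # b"" : the server sees end-of-stream
server_conn.sendall(b"last data from the server")  # still allowed in this direction
print(client.recv(1024))                           # the client still receives it

server_conn.close()   # the server's FIN and the final ACK complete the four waves
client.close()
listener.close()
```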

As an example: A and B are on the phone and the call is coming to an end. A says "I have nothing more to say", and B answers "I know". But B may still have things to say, and A cannot ask B to end the call at A's own pace, so B may go on talking for a while; finally B says "I'm done", A answers "OK", and only then does the call end.

The above is only a brief summary; a more detailed article is recommended: 72861891

III. Differences between the TCP and UDP protocols

UDP does not need to establish a connection before transmitting data, and the remote host does not need to send any acknowledgment after receiving a UDP message. Although UDP does not provide reliable delivery, in some situations it is the most efficient way of working (usually for real-time, instant communication), for example QQ voice, QQ video, live streaming, and so on.

TCP provides a connection-oriented service. A connection must be established before data is transferred, and released after the transfer is complete. TCP does not provide broadcast or multicast service. Because TCP provides a reliable, connection-oriented transport service (its reliability shows in the three-way handshake before data transfer; the acknowledgments, windows, retransmission, and congestion-control mechanisms during transfer; and the disconnection afterwards to free system resources), it inevitably adds a lot of overhead: acknowledgments, flow control, timers, and connection management. This not only makes the protocol data unit's header much larger but also consumes many processor resources. TCP is typically used in scenarios such as file transfer, sending and receiving mail, and remote login.

IV. How TCP guarantees reliable transmission
    1. Application data is split into the data blocks that TCP considers most suitable for sending.
    2. TCP numbers every packet it sends; the receiver sorts the packets and passes the data to the application layer in order.
    3. Checksum: TCP keeps a checksum over its header and data. This is an end-to-end checksum, intended to detect any change to the data in transit. If a received segment's checksum is wrong, TCP discards the segment and does not acknowledge it (a sketch of this style of checksum follows this list).
    4. The TCP receiver discards duplicate data.
    5. Flow control: each side of a TCP connection has a fixed-size buffer, and the TCP receiver only lets the sender send as much data as the receiver's buffer can hold. When the receiver cannot keep up with the sender's data, it can prompt the sender to lower its sending rate and so prevent packet loss. The flow-control protocol TCP uses is a variable-size sliding-window protocol. (TCP uses sliding windows for flow control.)
    6. Congestion control: reduce the amount of data sent when the network is congested.
    7. Stop-and-wait is also a way to achieve reliable transmission; its basic principle is to stop after sending each packet and wait for the other side's acknowledgment, sending the next packet only after the acknowledgment arrives. Timeout retransmission: when TCP sends a segment it starts a timer and waits for the destination to acknowledge the segment. If no acknowledgment arrives in time, the segment is retransmitted.
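
As an illustration of point 3, a minimal sketch of the 16-bit one's-complement checksum used by TCP/UDP/IP (simplified: the real TCP checksum also covers a pseudo-header containing the IP addresses):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum, as used in TCP/UDP/IP headers."""
    if len(data) % 2:                                  # pad to an even number of bytes
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]          # add the next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)       # fold the carry back in
    return ~total & 0xFFFF                             # one's complement of the sum

segment = b"example segment bytes"
print(hex(internet_checksum(segment)))
# The receiver recomputes the checksum over the received bytes (checksum field included);
# an undamaged segment yields 0, anything else is discarded without acknowledgment.
```
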
Stop-and-wait protocol
    • The stop-and-wait protocol is a way to achieve reliable transmission; its basic principle is to stop after sending each packet and wait for the other side's acknowledgment, and to send the next packet only after the acknowledgment is received;
    • In the stop-and-wait protocol, if the receiver receives a duplicate packet, it discards the packet but still sends an acknowledgment;

1) Error-free case:

The sender sends a packet, the receiver receives it within the specified time and replies with an acknowledgment, and the sender then sends the next packet.

2) Error case (timeout retransmission):

Timeout retransmission in the stop-and-wait protocol means that a packet is retransmitted whenever no acknowledgment for it has arrived after a certain period of time (for example, because the packet just sent was lost). Therefore a timeout timer must be set for every packet sent, and its retransmission time should be somewhat longer than the average round-trip time of a packet. This method of automatic retransmission is called Automatic Repeat reQuest (ARQ).

3) Lost acknowledgment and late acknowledgment

    • Lost acknowledgment: the acknowledgment message is lost in transit

      A sends message M1 and B receives it; B sends an acknowledgment for M1 back to A, but it is lost on the way. A does not know this, so after the timeout period A retransmits M1. When B receives the retransmitted message, it takes the following two actions:

      1. Discard this duplicate M1 and do not deliver it to the upper layer.
      2. Send the acknowledgment to A again. (B cannot assume the earlier acknowledgment arrived; the fact that A retransmitted proves that B's acknowledgment was lost.)
    • Late acknowledgment: the acknowledgment message arrives late

      A sends message M1, B receives it and sends an acknowledgment. A receives no acknowledgment within the timeout period and retransmits M1; B receives it again and again sends an acknowledgment (B has now received M1 twice). A then receives the second acknowledgment B sent and goes on to send further data. Some time later, A receives the first acknowledgment B sent for M1 (A has now received two acknowledgments). The handling is as follows:
      1. When A receives the duplicate acknowledgment, it simply discards it.
      2. When B receives the duplicate M1, it also simply discards it.
Automatic Repeat reQuest (ARQ) protocol

Timeout retransmission in the stop-and-wait protocol means that a packet is retransmitted whenever no acknowledgment for it has arrived after a certain period of time (for example, because the packet just sent was lost). Therefore a timeout timer must be set for every packet sent, and its retransmission time should be somewhat longer than the average round-trip time of a packet. This method of automatic retransmission is called Automatic Repeat reQuest (ARQ).

Advantages: Simple

Disadvantages: low channel utilization
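
A sketch of the sender side of stop-and-wait ARQ layered on top of UDP. The peer address, the 1-byte sequence-number framing, and the assumption that the receiver echoes the sequence number back as its acknowledgment are all illustrative:

```python
import socket

PEER = ("127.0.0.1", 9999)   # hypothetical receiver that echoes back the sequence number

def send_stop_and_wait(sock: socket.socket, packets: list[bytes], timeout: float = 0.5) -> None:
    """Send each packet, then stop and wait; retransmit on timeout."""
    sock.settimeout(timeout)
    seq = 0
    for payload in packets:
        frame = bytes([seq]) + payload
        while True:
            sock.sendto(frame, PEER)              # send one packet...
            try:
                ack, _ = sock.recvfrom(16)        # ...then stop and wait for its acknowledgment
                if ack and ack[0] == seq:
                    break                         # acknowledged: move on to the next packet
            except socket.timeout:
                pass                              # no ACK in time: retransmit the same frame
        seq ^= 1                                  # stop-and-wait needs only a 1-bit sequence number
```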

Continuous ARQ protocol

The continuous ARQ protocol can improve channel utilization. The sender maintains a send window, and the packets inside the send window can be sent one after another without waiting for the other side's acknowledgments. The receiver generally uses cumulative acknowledgment: it sends an acknowledgment for the last packet that arrived in order, meaning that all packets up to and including that one have been received correctly.

Advantages: high channel utilization, easy to implement, and even if an acknowledgment is lost, retransmission is not necessarily required.

Disadvantage: it cannot tell the sender exactly which packets the receiver has received correctly. For example: the sender sends 5 messages and the third one is lost in the middle; the receiver can then only acknowledge the first two. The sender has no way of knowing what happened to the last three packets and has to retransmit all three of them. This is also called Go-Back-N: the sender falls back and retransmits the N messages that had already been sent.
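
A tiny sketch of the cumulative-acknowledgment logic in the example above (the packet numbering and the helper name are just for illustration):

```python
def go_back_n(sent_in_order: list[int], received: set[int]) -> tuple[int, list[int]]:
    """Return the cumulative ACK and the packets a Go-Back-N sender must retransmit."""
    acked = 0
    for n in sent_in_order:
        if n in received:
            acked = n                 # cumulative ACK: the last packet received in order
        else:
            break                     # first gap: nothing beyond it can be acknowledged
    resend = [n for n in sent_in_order if n > acked]
    return acked, resend

# 5 messages sent, the 3rd one lost: only the first two can be acknowledged,
# and packets 3, 4 and 5 all have to be retransmitted.
print(go_back_n([1, 2, 3, 4, 5], {1, 2, 4, 5}))   # (2, [3, 4, 5])
```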

Sliding window
    • TCP uses sliding windows to implement its flow-control mechanism.
    • The sliding window is a flow-control technique. In early network communication, the two sides simply sent data without considering how congested the network was. Because nobody knew the network's congestion state and everyone kept sending at the same time, intermediate nodes became congested and dropped packets, and no one could get data through; the sliding-window mechanism was introduced to solve this problem.
    • In TCP, the sliding window is used for transmission control. The size of the sliding window indicates how much buffer space the receiver still has available for receiving data, and the sender can use the window size to decide how many bytes of data it may send. When the sliding window is 0, the sender generally can no longer send datagrams, with two exceptions: urgent data may still be sent, for example to let the user terminate a process running on the remote machine; and the sender may send a 1-byte datagram to ask the receiver to re-announce the next byte it expects and its current window size.
Flow control
    • TCP uses sliding windows to achieve flow control.
    • Flow control means controlling the sender's sending rate so that the receiver has time to receive.
    • The Window field in the acknowledgments sent by the receiver can be used to control the size of the sender's window, and thus the sender's sending rate. When the Window field is set to 0, the sender may not send data (a small sketch of this accounting follows this list).
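
A small sketch of the accounting behind the receiver-advertised window (the variable names are illustrative): the sender may have at most rwnd unacknowledged bytes in flight.

```python
def sendable_bytes(last_byte_sent: int, last_byte_acked: int, rwnd: int) -> int:
    """How many more bytes the sender may transmit, given the receiver's
    advertised window (the Window field of the most recent acknowledgment)."""
    in_flight = last_byte_sent - last_byte_acked      # sent but not yet acknowledged
    return max(rwnd - in_flight, 0)

print(sendable_bytes(last_byte_sent=5000, last_byte_acked=3000, rwnd=4000))  # 2000
print(sendable_bytes(last_byte_sent=5000, last_byte_acked=3000, rwnd=0))     # 0: sender must wait
```
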
Congestion control

If at some moment the demand for a resource in the network exceeds the share of that resource that is available, the network's performance deteriorates. This situation is called congestion. Congestion control is designed to prevent too much data from being injected into the network, so that the routers and links in the network are not overloaded. Congestion control rests on one premise: that the network is able to carry the existing load. Congestion control is a global process involving all hosts, all routers, and every factor related to degraded network transmission performance. Flow control, by contrast, controls point-to-point traffic and is an end-to-end problem: it suppresses the sender's data rate so that the receiver has time to receive.

For congestion control, the TCP sender maintains a state variable called the congestion window (cwnd). The size of the congestion window depends on how congested the network is and changes dynamically. The sender sets its own send window to the smaller of the congestion window and the receiver's advertised window.

TCP congestion control uses four algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. At the network layer, routers can also apply suitable packet-discard policies (such as active queue management, AQM) to reduce network congestion.

    • Slow start: the idea behind the slow-start algorithm is that when a host begins to send data, injecting a large amount of data into the network straight away may cause congestion, because the network's load condition is not yet known. Experience shows that it is better to probe first, i.e. to increase the send window gradually from small to large, in other words to increase the congestion window value gradually. cwnd starts with an initial value of 1 and doubles after each transmission round.
    • Congestion avoidance: the idea of the congestion-avoidance algorithm is to let the congestion window cwnd grow slowly, i.e. cwnd is increased by 1 after each round-trip time (RTT). (A small sketch of how cwnd grows under these two algorithms follows this list.)
    • Fast retransmit and fast recovery:
      In TCP/IP, fast retransmit and recovery (FRR) is a congestion-control algorithm that recovers quickly from lost packets. Without FRR, if a packet is lost, TCP uses a timer and pauses transmission; during this pause no new or duplicate packets are sent. With FRR, if the receiver receives an out-of-order segment, it immediately sends a duplicate acknowledgment to the sender. If the sender receives three duplicate acknowledgments, it assumes that the segment indicated by the acknowledgments has been lost and retransmits the missing segment immediately. With FRR there is no delay waiting for a retransmission timeout. Fast retransmit and recovery (FRR) works most efficiently when individual packets are lost; it does not work as efficiently when many packets are lost within a short period of time.
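
A simplified sketch of the cwnd evolution described in the first two bullets (it ignores timeouts, fast retransmit, and fast recovery; ssthresh marks where slow start hands over to congestion avoidance):

```python
def cwnd_trace(rounds: int, ssthresh: int) -> list[int]:
    """Congestion window (in segments) per transmission round:
    doubling during slow start, +1 per round during congestion avoidance."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

# With ssthresh = 16: exponential growth up to 16, then linear growth.
print(cwnd_trace(rounds=10, ssthresh=16))   # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]
```
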
V. Entering a URL in the browser ->> the process of displaying the page (an interview regular)

Baidu seems to like asking this question the most.

Opening a web page: which protocols are used over the whole process? In brief: the browser resolves the domain name via DNS, establishes a TCP connection with the server (three-way handshake), sends an HTTP request, the server processes it and returns an HTTP response, and the browser parses the HTML and renders the page, fetching any further resources it references.

Image source: "Illustrated HTTP"

VI. Status Codes

VII. The relationship between various protocols and the HTTP protocol

Interviewers generally use this kind of question to examine your grasp of the computer networking knowledge system as a whole.

Image source: "Illustrated HTTP"

VIII. HTTP long connections and short connections

HTTP/1.0 uses short connections by default. That is, every time the client and the server perform one HTTP operation, a connection is established and then torn down when the task finishes. When a client browser accesses an HTML page or another type of web page that references other web resources (such as JavaScript files, image files, CSS files, and so on), the browser re-establishes an HTTP session each time it encounters such a resource.

From HTTP/1.1 onwards, long connections are used by default to keep the connection alive. For the HTTP protocol with long connections, this line is added to the response headers:

Connection: keep-alive

With a long connection, once a web page has been opened, the TCP connection used between the client and the server to transfer HTTP data is not closed; when the client accesses the server again, it continues to use this already-established connection. Keep-Alive does not hold the connection open forever; it has a hold time that can be configured in the server software (for example Apache). Implementing long connections requires both the client and the server to support them.
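
A small sketch of connection reuse with Python's standard library (example.com is only a stand-in, and the server may still decide to close the connection): two requests travel over the same TCP connection, so no second handshake is needed.

```python
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=5)

conn.request("GET", "/", headers={"Connection": "keep-alive"})
resp1 = conn.getresponse()
resp1.read()                       # the body must be fully read before reusing the connection
print(resp1.status, resp1.getheader("Connection"))

conn.request("GET", "/")           # reuses the same underlying TCP connection
resp2 = conn.getresponse()
resp2.read()
print(resp2.status)

conn.close()                       # finally releases the TCP connection
```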

The long and short connections of the HTTP protocol are, in essence, long and short connections of the underlying TCP protocol.

- "What exactly are HTTP long connections and short connections?"

Closing notes

Review of frequently asked computer network questions
    • ① TCP three-way handshake and four-way teardown
    • ② Entering a URL in the browser ->> the process of displaying the page
    • ③ The differences between HTTP and HTTPS
    • ④ The differences between the TCP and UDP protocols
    • ⑤ Common status codes
Suggestions

It is highly recommended that you read the book "Illustrated HTTP". It does not have many pages, but its content is very substantial; whether for building a systematic picture of networking knowledge or simply for preparing for interviews, it is a great help. The articles listed below are for reference only. In my sophomore year the textbook for this course was "Computer Networks (7th Edition)" (Xie Xiren). I do not recommend that everyone read that textbook: it is very thick and heavy on theory, and I am not sure everyone can read it calmly.

Reference:

52718250

60965466

73743641

