TCP/IP Illustrated Learning Notes - The Data Link Layer (1)


I. Overview

In the TCP/IP protocol suite, the role of the data link layer is to send and receive IP-layer data. It also carries data for other protocols, usually auxiliary protocols that support the IP layer, such as ARP. There are many kinds of data link layers, the most common being Ethernet and Wi-Fi. The way I think of it, the data link layer is the local area network: communication between nodes on the same LAN does not need to go through higher-layer protocols such as IP. However, because this layer comes in so many varieties, it also involves quite a few protocols.
II. Ethernet and the IEEE 802 LAN/MAN Standards

First, let us get an intuitive feel for Ethernet. Classically, an Ethernet is a network of devices attached to a common cable, all of which can communicate with one another.
When should a device send data, when should it receive, and how is that process controlled? This requires a specific algorithm running on each interface, in other words a protocol. A popular one is CSMA/CD (Carrier Sense, Multiple Access with Collision Detection). It works as follows: before sending data, a station listens to check whether the channel is idle; if it is idle, it transmits immediately. If the channel is busy, it waits until the channel becomes idle before transmitting. If two or more nodes transmit at the same time, a collision occurs; each node stops transmitting immediately, waits a random amount of time, and then tries again. At any given moment, only one machine may be transmitting on the channel. The protocol is simple, technically easy to implement, and needs no centralized control, so it was widely used; but as network load increases, the time needed to send grows and transmission efficiency drops sharply. A protocol like CSMA/CD that controls access to the medium is called a MAC (Media Access Control) protocol. There are many kinds of MAC protocols: some, like CSMA/CD, are contention-based, while others are coordination-based, for example scheduling in advance the time each device may occupy the channel. Early Ethernet ran at about 10 Mb/s, and contention-based protocols such as CSMA/CD were popular. As speeds rose to 100 Mb/s, 1000 Mb/s and even 10 Gb/s, contention-based, shared-channel Ethernet gradually became less common and was replaced by a star topology built around switches.
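To make the listen/back-off behavior described above more concrete, here is a minimal Python sketch. It is not a real 802.3 implementation (real carrier sensing and bit-level collision detection happen in hardware); the FakeChannel class, the timing constant, and the 10% collision probability are all invented purely for illustration.

    import random
    import time

    SLOT_TIME = 0.0512   # seconds; illustrative stand-in for the 512-bit-time slot of classic Ethernet

    class FakeChannel:
        """Stand-in for the shared medium; a real NIC does this in hardware."""
        def busy(self):
            return False                       # pretend the channel is always idle
        def transmit(self, frame):
            return random.random() < 0.1       # pretend 10% of transmissions collide

    def csma_cd_send(channel, frame, max_attempts=16):
        """Listen before talking, transmit, and back off randomly after a collision."""
        for attempt in range(1, max_attempts + 1):
            while channel.busy():              # carrier sense: wait until the channel is idle
                time.sleep(SLOT_TIME)
            if not channel.transmit(frame):    # no collision detected: frame sent
                return True
            k = min(attempt, 10)               # truncated binary exponential backoff
            time.sleep(random.randint(0, 2 ** k - 1) * SLOT_TIME)
        return False                           # give up after too many attempts

    print(csma_cd_send(FakeChannel(), b"some frame"))

The key design point the sketch tries to show is that the random, exponentially growing back-off is what keeps repeated collisions from happening forever when two stations start at the same moment.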
As shown in the figure, each device is connected by a cable to a switch port, forming a star topology. A device communicates directly with the switch, sending and receiving data (in full or half duplex). In addition, a very popular access network today is Wi-Fi (Wireless Fidelity); although it differs from wired networks, its frame format and general interface are borrowed from the wired (802.3) standard. Before going into details, let us look at the overall framework of the data link layer standards.

1. The IEEE 802 LAN/MAN standards


These standards can be roughly divided into three parts. The first part is 802.2 and 802.1, which define the LLC (Logical Link Control) and MAC (Medium Access Control) sublayers. The LLC handles error control and flow control for a single link between devices, while the MAC sublayer solves the problem of multiple devices contending for the medium, that is, how use of the channel is allocated; the CSMA/CD scheme described earlier is one MAC protocol, and the LLC is described in more detail later. The second part is 802.3, the standard Ethernet protocol. The third part is 802.11, which covers Wi-Fi; we will set it aside for the moment.
2. The Ethernet frame format

The unit of data transmitted at the data link layer is called a frame; correspondingly, the IP-layer data unit is called a packet, and the transport-layer unit is called a segment. All Ethernet (802.3) frames are based on the same format, as shown in the figure.
Note that, for historical reasons, there is more than one Ethernet framing format; what is described here is the IEEE 802.3 standard format. Since the author is also a beginner and is not sure which format is more common today, for related questions please refer to http://support.huawei.com/ecommunity/bbs/10154435.html
Continuing with the frame format: the frame begins with 7 bytes called the preamble, used for synchronization. The next byte is the start frame delimiter (SFD), which has the fixed value 0xAB. The next 12 bytes are the destination address (DST) and the source address (SRC); these are what we usually call MAC addresses, physical addresses, or hardware addresses. The destination address can refer to the MAC address of a single device, or to multiple devices, in which case the frame is a broadcast or multicast. Immediately after the source address field comes the length or type field. Some common values are 0x0800 for IPv4, 0x86DD for IPv6, and 0x0806 for ARP; a value of 0x8100 indicates a virtual LAN (802.1Q) tag. The basic Ethernet frame length is 1518 bytes, although the latest standard extends this to 2000 bytes. Then comes the data, or payload, section, which typically carries the protocol data unit (PDU) of the network layer above, such as an IP packet. The payload can be up to 1500 bytes; if it is shorter than the required minimum, zero bytes are appended as padding. The four bytes after the payload are the frame check sequence (FCS), whose purpose is to let the receiving network card or interface decide whether the frame was corrupted in transit. The process works as follows: when the sender assembles a frame, it performs a CRC calculation over all the bytes (typically from the destination address through the payload) and places the result in the FCS field, which is transmitted as part of the frame. When the destination port receives the frame, it performs the same CRC calculation and compares the result with the FCS field; if they match, the data is assumed correct, otherwise the frame is in error and may be discarded. The size of the frame also deserves explanation. Ethernet frames have both a maximum and a minimum size. The minimum is 64 bytes, which requires at least 48 bytes of payload; any shortfall is padded with zeros. The maximum is typically 1518 bytes, which includes the 4-byte FCS and the 14-byte header. What is the benefit of limiting the size? If an error occurs, for example an FCS check fails, at most 1518 bytes are discarded, so the loss is small. The downside is that sending a large amount of data requires many frames, each carrying its own header and FCS, which lowers transmission efficiency; there is a trade-off here.
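As a rough illustration of the header layout and the FCS check described above, here is a minimal Python sketch. It builds an untagged frame in memory only; the preamble and SFD are omitted because they are normally added by the hardware, the 46-byte minimum data field used for padding is an assumption of this sketch, and it further assumes that the FCS is the standard CRC-32 as computed by Python's zlib.crc32, appended least-significant byte first. Field values and the sample addresses are made up for the example.

    import struct
    import zlib

    def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
        """Assemble DST + SRC + type + payload (zero-padded) + FCS, without preamble/SFD."""
        if len(payload) < 46:                        # pad short payloads with zero bytes
            payload = payload + b"\x00" * (46 - len(payload))
        header = dst_mac + src_mac + struct.pack("!H", ethertype)
        fcs = zlib.crc32(header + payload) & 0xFFFFFFFF
        return header + payload + struct.pack("<I", fcs)    # FCS appended low byte first

    def fcs_ok(frame: bytes) -> bool:
        """Recompute the CRC over everything except the trailing FCS and compare."""
        body, received = frame[:-4], struct.unpack("<I", frame[-4:])[0]
        return (zlib.crc32(body) & 0xFFFFFFFF) == received

    # Example: an IPv4 (type 0x0800) frame with a tiny dummy payload
    frame = build_ethernet_frame(bytes.fromhex("ffffffffffff"),
                                 bytes.fromhex("020000000001"),
                                 0x0800, b"hello")
    print(len(frame), fcs_ok(frame))                 # 64 True

The receiver-side check in fcs_ok mirrors the description in the text: same calculation, compare with the transmitted FCS, discard on mismatch.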
That is roughly the content of the 802.3 standard Ethernet frame format. Some of it may look strange; you may barely understand it, still feel a bit vague, and soon forget it. A good way to make it stick is to look at the bytes of an actual Ethernet frame as it is transmitted; the recommended tool for this is Wireshark. Its usage is not covered in detail here; you can Google it.
3. Virtual LANs

As mentioned earlier, LANs today increasingly use switches connected in a star topology. The advantage is that any device can communicate directly with any other device on the network, and broadcast and multicast are convenient. The disadvantage is the same thing: any device can broadcast or multicast at will, which brings security risks such as broadcast storms. To solve this problem (though not only for this reason), virtual LANs appeared. A virtual LAN is a logical group of devices and users that is not restricted by physical location. Different virtual LANs do not communicate with each other the way hosts on the same LAN do, even if they are physically on the same LAN. So, if a large LAN is divided into several virtual LANs, the likelihood of broadcast storms can be greatly reduced. Conversely, devices that are not on the same physical LAN can be placed in the same virtual LAN; this is why we say a virtual LAN is a logical concept, a LAN in the logical sense. There are several ways to partition VLANs:

(1) By port. For example, ports 1, 2, 3 and 4 of a switch are assigned to virtual LAN A, and ports 5, 6, 7 and 8 to virtual LAN B. This approach is the most common and very straightforward; its drawback is that it is confined to a single switch.

(2) By MAC address, that is, membership is determined by the host's MAC address. The advantage is that when a user physically moves, say from one port to another or even from one switch to another, no reconfiguration is needed. The disadvantages are that initial configuration is inefficient (configuring hundreds of devices is a lot of work) and that, for example, replacing a laptop's network card requires reconfiguration.

(3) By IP address, and so on; these are not elaborated here.

The basic VLAN mechanism is defined by the 802.1Q standard: a VLAN header is added to the Ethernet frame, and users are divided into smaller workgroups by VLAN ID. Each workgroup (same VLAN ID) is one virtual LAN, and access between different workgroups is restricted.
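As a small illustration of what the 802.1Q header adds to a frame: after the source address, a tagged frame carries the value 0x8100 followed by two bytes containing a 3-bit priority, a 1-bit flag, and the 12-bit VLAN ID, and only then the original type field. The sketch below inserts and reads such a tag; the function names are invented for this example, and recomputing the FCS after tagging is deliberately omitted.

    import struct

    TPID = 0x8100   # EtherType value that marks an 802.1Q-tagged frame

    def add_vlan_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
        """Insert a 4-byte 802.1Q tag after the 12 address bytes of an untagged frame."""
        tci = ((priority & 0x7) << 13) | (vlan_id & 0x0FFF)   # priority(3) | DEI(1)=0 | VLAN ID(12)
        return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

    def read_vlan_id(frame: bytes):
        """Return the VLAN ID if the frame is tagged, otherwise None."""
        if struct.unpack("!H", frame[12:14])[0] == TPID:
            return struct.unpack("!H", frame[14:16])[0] & 0x0FFF
        return None

    untagged = bytes(12) + b"\x08\x00" + b"payload"   # dummy addresses + IPv4 type + data
    tagged = add_vlan_tag(untagged, vlan_id=10)
    print(read_vlan_id(tagged), read_vlan_id(untagged))   # 10 None

A switch that enforces VLANs simply refuses to forward a frame to ports whose configured VLAN ID differs from the one read out of this tag.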
4. Link aggregation

Link aggregation means bundling multiple physical ports together so that they behave as a single logical port. For example, a system with only two low-speed ports can use link aggregation to bundle them, giving the system one "high-speed" port whose speed is greater than that of either low-speed port but less than the sum of the two. In practice this high-speed port is only logical: it is shared by the two member ports, which together appear as one port. Of course, deciding which member port to use for a given transfer, and knowing which ports may be aggregated in the first place, requires protocols to control it.
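One common design choice for "which member port carries this frame" (an illustration, not something mandated by the text above) is to hash the frame's addresses, so that all frames of a given conversation stay on the same link and arrive in order. A minimal sketch with invented names:

    import zlib

    def choose_member_link(src_mac: bytes, dst_mac: bytes, n_links: int) -> int:
        """Map a src/dst MAC pair to one member link of an aggregated group."""
        return zlib.crc32(src_mac + dst_mac) % n_links

    # All frames between the same pair of hosts hash to the same member link,
    # which preserves frame ordering within that conversation.
    print(choose_member_link(bytes.fromhex("020000000001"),
                             bytes.fromhex("020000000002"), n_links=2))

The trade-off is that a single conversation can never exceed the speed of one member link, which matches the remark above that the aggregate is slower than the sum of the two ports.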
III. Full Duplex, Power Saving, Autonegotiation, and Flow Control

Early Ethernet, transferring data over a shared cable, worked only in half-duplex mode: data could flow in only one direction at a time. As Ethernet developed and switches appeared, the cable was no longer shared, and devices could exchange data with each other simultaneously rather than one direction at a time. In particular, with full-duplex transmission there is no longer any need to worry about collision detection (CSMA/CD). There is a problem, however: a new device may be full-duplex and high-speed, while an old device may support only half duplex at a lower rate, so the two ports that need to communicate must be configured to a compatible operating mode before they can communicate properly. Doing this by hand is troublesome; fortunately, many interfaces have an autonegotiation mechanism, which lets each side advertise its own speed, duplex capability and other information, receive the other side's information, and automatically negotiate the best common operating mode.
1. Duplex mismatch

When the two ends of a link use different duplex modes, or when one side supports autonegotiation and the other does not, some undesirable behavior appears; this is called a duplex mismatch. It does not make communication fail outright, but it degrades performance. For example, when a large amount of data must be transferred in both directions, the half-duplex interface will detect incoming data while it is sending (CSMA/CD), stop its transmission and discard what it was receiving; the lost data then has to be retransmitted by a higher layer such as IP or the transport layer, and performance suffers. The phenomenon is not easy to detect, because under light load a half-duplex interface rarely sees a collision (when little data is sent, the chance of sending and receiving at the same moment is small).
2. Wake-on-LAN, power saving, and magic packets

Both Linux and Windows systems support automatic wake-up, which means switching the computer from a low-power (sleep) mode to full-power mode. This can be configured so that receiving certain kinds of data triggers the wake-up. Commonly used triggers include: physical-layer activity (p), a unicast frame addressed to the device (u), a broadcast frame (b), a multicast frame (m), an ARP frame (a), and a magic packet frame (g). The interface can be set so that receiving one of these frame types wakes the device. Note that the magic packet is usually carried in UDP with a specific format that contains the destination device's MAC address, and perhaps a password. If a device is configured to wake on magic packets, its network card inspects received magic packets and checks whether the MAC address and password match its own; if so, it wakes the machine. As I understand it, the point of the magic packet is that only this specific frame can wake the device, rather than having arbitrary unicast, broadcast, multicast or ARP frames wake it up, perhaps unintentionally.
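For concreteness, here is a short sketch of building and sending such a magic packet. The conventional payload is six 0xFF bytes followed by the target MAC address repeated 16 times (an optional password may follow); it is typically sent as a UDP broadcast, and port 9 is the common choice assumed here. The MAC address in the example is hypothetical.

    import socket

    def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        payload = b"\xff" * 6 + mac_bytes * 16          # an optional password could be appended here
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    # Example (hypothetical MAC address; uncomment to actually send):
    # send_magic_packet("00:11:22:33:44:55")

Because the sleeping card only pattern-matches this easily recognizable byte sequence, ordinary broadcast or ARP traffic on the LAN will not wake the machine by accident, which is exactly the point made above.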
3. Data link layer flow control

When multiple devices send data to the same destination device, or when data arrives faster than it can be forwarded, the switch must buffer the data before passing it on; if the buffering goes on too long, data is lost. Flow control is therefore needed. The principle is simple: when the situation above (impending data loss) occurs, the switch sends a pause signal to the sender, telling it to stop sending or slow down, which relieves the congestion. The pause signal is carried in the Ethernet frame format described above. However, data link layer flow control is not widely used, because it has an obvious drawback: when a switch is overloaded it sends pause signals to every device transmitting to it, yet the overload may be caused by just one of those devices, so pause signals are mistakenly sent to the other devices as well.
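As a hedged illustration of what such a pause signal looks like on the wire: the IEEE 802.3 PAUSE mechanism, as I understand it, uses a MAC control frame with EtherType 0x8808, opcode 0x0001, and a 16-bit pause time measured in units of 512 bit times, sent to the reserved multicast address 01:80:C2:00:00:01. The sketch below only assembles the bytes (preamble, SFD and FCS are omitted, as in the earlier frame example), and the helper names are invented.

    import struct

    PAUSE_DST = bytes.fromhex("0180C2000001")   # reserved multicast address for MAC control frames
    MAC_CONTROL_ETHERTYPE = 0x8808
    PAUSE_OPCODE = 0x0001

    def build_pause_payload(pause_quanta: int) -> bytes:
        """MAC control payload: opcode 0x0001 plus a 16-bit pause time, zero-padded to 46 bytes."""
        body = struct.pack("!HH", PAUSE_OPCODE, pause_quanta & 0xFFFF)
        return body + b"\x00" * (46 - len(body))

    def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
        """Header + control payload, without preamble/SFD/FCS."""
        header = PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE)
        return header + build_pause_payload(pause_quanta)

    frame = build_pause_frame(bytes.fromhex("020000000001"), pause_quanta=0xFFFF)
    print(len(frame))   # 60, i.e. a minimum-size frame once the 4-byte FCS is added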
IV. Bridges and Switches

A bridge, or switch, is a device that connects multiple data-link-layer networks (Ethernets) or multiple devices into a single network. For example, two switches can join two LANs into one larger LAN.
Each switch has its own physical address, such as A and B in the figure. After the network is connected, and after a period of learning, each switch knows which physical addresses are reachable through each of its ports. The switch stores this information in a table called the filtering database. The tables stored in the two switches look roughly like the figure below.
When a switch starts up for the first time, the table is empty: the switch knows its own MAC address but no others, nor which devices each port leads to. If it receives a frame whose destination address is not its own MAC address, it copies the frame and sends a copy out of every port except the one the frame arrived on, because at this point it does not know where the destination is or which port to use, so it has to flood all the other ports. If the switch never learned this table, every frame would have to be forwarded this way, which is obviously very inefficient. Learning which MAC addresses correspond to each port is therefore a very important function of a switch. Today's computer systems can almost all be configured to act as a switch.
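The learn-and-forward behavior described above can be sketched in a few lines. This is only an illustration of the idea (a real switch also ages out entries and treats broadcast and multicast addresses specially), and all names and addresses here are invented.

    class LearningSwitch:
        """Toy model of a switch's filtering database: MAC address -> port."""

        def __init__(self, ports):
            self.ports = list(ports)
            self.table = {}                      # filtering database, empty at startup

        def handle_frame(self, src_mac, dst_mac, in_port):
            """Learn the sender's port, then forward to a known port or flood."""
            self.table[src_mac] = in_port        # learning: remember where src_mac lives
            if dst_mac in self.table:
                return [self.table[dst_mac]]     # known destination: forward out one port
            return [p for p in self.ports if p != in_port]   # unknown: flood all other ports

    sw = LearningSwitch(ports=[1, 2, 3, 4])
    print(sw.handle_frame("aa", "bb", in_port=1))   # "bb" not learned yet -> flood ports 2, 3, 4
    print(sw.handle_frame("bb", "aa", in_port=2))   # "aa" was learned      -> forward to port 1

Notice that the second frame is no longer flooded: one observed frame from "aa" was enough to fill in the table entry, which is exactly why learning makes the switch efficient.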
1. The Spanning Tree Protocol (STP)
Consider the situation above: what happens when the four switches A, B, C and D have just started and station S sends a frame? Switch B floods copies of the frame out of ports 7, 8 and 9; A receives it on port 1 and sends it out of ports 2 and 3; C and D do the same. This generates a large amount of redundant traffic and congests the network. To avoid this, the Spanning Tree Protocol (STP) was invented. The basic idea of STP is simple: as we know, trees growing in nature contain no loops, so if the network can be made to grow like a tree, there will be no loops either. The purpose of STP is therefore to construct a natural tree that prunes away the redundant loops, while also providing link backup and path optimization. The algorithm used to construct the tree is called the spanning tree algorithm. To implement this, the switches must exchange some information, known as configuration messages or BPDUs (Bridge Protocol Data Units). A BPDU is carried much like ordinary Ethernet frame data; its destination MAC address is a multicast address, and every switch that supports STP receives and processes the BPDUs it hears. Roughly, the Spanning Tree Protocol works as follows. First, a root bridge is elected; the election is based on the bridge ID, which combines the switch's priority with its MAC address, and the switch (bridge) with the smallest value becomes the root. On every other switch, the port leading toward the root bridge becomes that switch's root port, and the redundant ports are blocked, which yields a tree. When the topology changes, however, the new configuration messages need a certain delay, called the forward delay, to propagate through the whole network, so convergence often takes a long time; for this reason many improved versions of STP have been built on top of the basic protocol. They are not described further here.
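As a small illustration of just the first step described above, the root bridge election by smallest bridge ID (priority first, then MAC address), here is a tiny sketch. The priorities and addresses are invented, and real STP of course goes on to compute path costs, root ports and port states from the exchanged BPDUs.

    # Each bridge ID is (priority, MAC address); the numerically smallest ID wins.
    bridges = [
        (32768, "00:00:00:00:00:0b"),   # switch B, default priority
        (32768, "00:00:00:00:00:0a"),   # switch A, default priority
        (4096,  "00:00:00:00:00:0d"),   # switch D, manually given a low priority
    ]

    def elect_root(bridge_ids):
        """The lowest (priority, MAC) tuple becomes the root bridge."""
        return min(bridge_ids)

    print(elect_root(bridges))   # (4096, '00:00:00:00:00:0d')

Because the MAC address breaks ties, leaving every switch at the default priority effectively makes the oldest or lowest-addressed switch the root, which is why administrators often lower the priority of the switch they actually want at the top of the tree.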

Copyright notice: this is the blogger's original article; please do not reproduce it without the blogger's permission.
