Network Layer Concept Learning (basic concepts, routers, and routing algorithms)

Source: Internet
Author: User

The network layer is built on top of the link layer. Its main function is to enable communication between hosts in the network. In the Internet, the IP layer is the core protocol of the TCP/IP protocol family and one of the most complex layers.

I. Overview

1. Forwarding and routing

The network layer moves packets from one host to another so that hosts can communicate. To do this, it provides two functions:

Forwarding: a router (or layer-3 switch) moves a packet arriving on an input link to the appropriate output link. Forwarding is a local action of a single router.

Routing: as a packet flows from one host to another, the network layer must determine the path the packet takes. The algorithm that computes this path is the routing algorithm. Routing is a network-wide action that determines the end-to-end path from source to destination.

The router is the central device of the network layer, and every router maintains a forwarding table. A router examines the value of a field in an arriving packet's header and looks that value up in its forwarding table to determine how to forward the packet; the lookup result is the router's link interface onto which the packet will be forwarded. The routing algorithm determines the values in the forwarding table. Routing algorithms may be centralized (executed at a central site) or distributed (running on the routers themselves); in either case, the routers must receive routing protocol messages in order to configure their forwarding tables.

Connection setup: in some computer networks, connection establishment is also an important network-layer function. For example, ATM requires a handshake along the chosen path from source to destination to establish state in the routers before data flows.
2. Network service models

Services that a network layer could offer include (but are not necessarily provided):

Guaranteed delivery: the packet is guaranteed to eventually arrive at its destination.
Guaranteed delivery with bounded delay: delivery is guaranteed within a specified delay bound.
In-order packet delivery: packets arrive at the destination in the order they were sent.
Guaranteed minimum bandwidth: as long as the sending host transmits at a rate below a specified bit rate, no packets are lost.
Bounded jitter: the interval between two consecutive packets at the receiver differs from the interval at which the sender sent them by no more than a specified amount.
Security services: the communication can be read only by the sender and the receiver.

The Internet's network layer, however, provides a connectionless, unreliable, best-effort service:

Unreliable: the IP layer does not guarantee that an IP datagram successfully reaches its destination. Reliable delivery must be provided by other protocols, such as TCP.
Connectionless: IP maintains no state about successive datagrams; each datagram is handled independently of the others. As a result, multiple datagrams between the same two hosts may take different paths and arrive out of order.

3. Packet forwarding in the Internet

When a host sends a packet, it places the destination host's address in the packet and sends the packet into the network. On its way to the destination, the packet passes through a series of routers, each of which uses the packet's destination address to forward it. Each router has a forwarding table that maps destination addresses to link interfaces.
When a packet arrives, the router uses the packet's destination address to look up the appropriate output link interface in its forwarding table, then sends the packet out through that interface. In the Internet, a router's forwarding table can be updated by routing algorithms or by an administrator. Because forwarding-table changes can occur at any time, packets between two hosts may take different network paths at different times and may arrive out of order.

1. The longest-prefix-matching rule

A network prefix is the leading run of bits of a network address. For example, for the address 11101111 11011110 10000000 00000001, the 8-bit prefix is 11101111 and the 16-bit prefix is 11101111 11011110. Under this rule, the router's forwarding table records the correspondence between network prefixes and output link interfaces. A lookup still matches on the destination address, but several table entries may match; in that case, the entry that matches the most bits wins, and the packet is forwarded accordingly.

II. Routers

The network layer's main function, moving packets from one host to another, is implemented primarily by routers. A typical router has four components:

Input ports: connect to the incoming physical links, terminate the physical and link layers for those links, and perform the lookup-and-forwarding function that directs an incoming packet to the appropriate output link interface. Control packets are passed to the routing processor.

Switching fabric: connects the router's input ports to its output ports.

Output ports: store the packets handed to them by the switching fabric and transmit them; they perform the link-layer and physical-layer functions that mirror those of the input ports.

Routing processor: executes the routing protocols and maintains the routing information and the forwarding table.
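The longest-prefix-matching rule described above can be sketched in a few lines. This is a minimal illustration over a hypothetical forwarding table (real routers use the optimized structures discussed below, such as bit trees and CAMs):

```python
def longest_prefix_match(dest_bits, table):
    """Return the output link interface whose prefix matches the most bits.

    dest_bits: destination address as a bit string.
    table: maps prefix bit strings to output link interface numbers.
    """
    best_iface, best_len = None, -1
    for prefix, iface in table.items():
        if dest_bits.startswith(prefix) and len(prefix) > best_len:
            best_iface, best_len = iface, len(prefix)
    return best_iface

# Hypothetical table containing the 8-bit and 16-bit prefixes from the text.
table = {"11101111": 1, "1110111111011110": 2}
dest = "11101111" "11011110" "10000000" "00000001"
print(longest_prefix_match(dest, table))  # the 16-bit prefix wins: interface 2
```

Both entries match this destination address, but the 16-bit prefix matches more bits, so its interface is chosen.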
1. Input ports

The line-termination and data-link-processing functions of an input port implement the physical layer and data link layer for the link attached to the router. The input port's lookup/forwarding function is central to the router's forwarding operation: in many routers, this is where the output port to which an arriving packet will be forwarded through the switching fabric is determined. The choice of output port depends on the information in the forwarding table. Although the forwarding table is computed by the routing processor, each input port usually keeps a shadow copy that is updated as needed. Because each input port holds a local copy, forwarding decisions can be made locally, without invoking the central routing processor for every packet; this design avoids creating a forwarding bottleneck at a single point inside the router. In routers whose input ports have limited processing power, the input port instead forwards the packet to the central routing processor, which performs the forwarding-table lookup and forwards the packet to the appropriate output port.

Once a forwarding table exists, the forwarding decision itself is simple, just a table lookup; but the forwarding tables of backbone routers are large, and we want input-port processing to keep up with line speed, so the faster the lookup the better. The organization of the forwarding table and the lookup procedure must therefore be optimized. Common approaches include:

Store the forwarding table in a tree structure. Each level of the tree corresponds to one bit of the destination address: if the address bit is 0, search the left subtree; otherwise, search the right subtree. With this structure, the forwarding-table entry for an N-bit destination address is found in N steps. Even so, a tree lookup is too slow for backbone routers.

Use content-addressable memory (CAM).
A CAM allows an IP address to be presented to the memory, which then returns the matching forwarding-table entry in constant time.

Cache recently used forwarding-table entries.

Once a packet's output port has been determined through the lookup, the packet can enter the switching fabric. At that moment the packet may be blocked, because packets from other input ports may currently be using the fabric; blocked packets must be queued at the input port.

2. The switching fabric

Through the switching fabric, a packet is switched from an input port to an output port. Three switching techniques are common:

Switching via memory: switching between input and output ports is done under the direct control of the routing processor (CPU). When a packet arrives, the input port notifies the processor via an interrupt, and the packet is copied from the input port into processor memory. The processor then extracts the packet's destination address, looks it up in the forwarding table to find the output port, and copies the packet into the output port's buffer. In this design, forwarding throughput is limited by the memory bandwidth. PCs acting as routers generally use this method. Some modern routers also switch via memory, but unlike a PC, the lookup and the copy into the appropriate memory location are performed by processors on the input line cards.

Switching via a bus: the input port transfers the packet directly to the output port over a shared bus, without intervention by the routing processor. In this design, the router's switching bandwidth is limited by the bus bandwidth.

Switching via an interconnection network: high-end routers generally use this method, which overcomes the bandwidth limit of a single shared bus. A crossbar switch is an interconnection network consisting of 2n buses that connect n input ports to n output ports.
In a crossbar interconnection network, any input/output port pair has its own dedicated bus, which overcomes the bandwidth limit of a single shared bus. With such a network, variable-length IP packets are often segmented into fixed-size cells, switched through the interconnection network with tags, and reassembled into the original packets at the output port. This greatly simplifies and accelerates switching through the fabric.

3. Output ports

An output port takes the packets stored in its memory and transmits them onto the output link. It implements data-link-layer and physical-layer functions that mirror those of the input ports.

4. Queuing

Queues can form at both the input ports and the output ports. As these queues grow, the router's buffer space can be exhausted, resulting in packet loss. For a router with N input ports and N output ports, define the switching-fabric speed as the rate at which the fabric can move packets from the input ports to the output ports. If the fabric speed is at least N times the input line rate, no queuing occurs at the input ports: even if all N ports are receiving packets simultaneously, the fabric can move them all to the output ports in time. At the output ports, however, even with a fabric N times faster than the line rate, in the worst case the packets arriving at all N input ports are destined for the same output port; while that port transmits a single packet it may receive N new ones, so a queue forms, and if the queue keeps growing, memory is exhausted and packets are lost.

Because queuing occurs, buffer sizing is critical. The rule of thumb is: buffer size = average round-trip time x link capacity.
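As a quick worked example of this rule of thumb (the RTT and capacity values here are assumed for illustration, not taken from the text):

```python
# Buffer sizing rule of thumb: buffer = average RTT x link capacity.
# Assumed example values: 250 ms average round-trip time, 10 Gbit/s link.
rtt_seconds = 0.250
capacity_bps = 10e9          # link capacity in bits per second

buffer_bits = rtt_seconds * capacity_bps
buffer_bytes = buffer_bits / 8
print(buffer_bits, buffer_bytes)  # 2.5e9 bits, i.e. 312.5 MB of buffering
```

Even this modest RTT and link speed already call for hundreds of megabytes of buffer memory, which is why buffer sizing matters in router design.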
1. Packet scheduling

When packets queue at an output port, an important question is the order in which the queued packets are transmitted. Possible disciplines include first-come-first-served (FCFS) and weighted fair queuing (WFQ), which shares the output link fairly among the different end-to-end connections whose packets are awaiting transmission.

2. Queue management

Another question in queue management is which packet to drop when there is not enough buffer space: the newly arriving packet, or an already-queued packet (to free space for the new one). The relevant policies are generally active queue management (AQM) algorithms, of which Random Early Detection (RED) is a common example. The idea is to maintain a weighted average of the output queue length: if the average queue length is below a minimum threshold min, an arriving packet is simply enqueued; if the average queue length is above a maximum threshold max, the arriving packet is marked or dropped; otherwise the packet is marked or dropped with some probability, generally a function of the average queue length, min, and max.

3. Head-of-line (HOL) blocking

If the switching fabric is not fast enough to carry all arriving packets through it without delay, packet queues also form at the input ports. Assume that all links have the same speed, that the fabric can move one packet from any input port to a given output port in the time it takes an input link to receive one packet, and that input queues operate FCFS. Suppose each number shown at an input port denotes a packet whose value identifies the output port it must be forwarded to, with input and output ports numbered from 1, top to bottom. In the scenario shown in the figure, if the fabric decides to move the packet at input port 1 to its output port, the packet at input port 3 must wait.
Furthermore, because the input queue operates FCFS, the packet at input port 3 destined for output port 2 must also wait, even though no other input port is competing for output port 2 at that moment. This is called head-of-line (HOL) blocking. Studies show that, because of HOL blocking, the input queues grow without bound (under certain assumptions), resulting in packet loss, once the packet arrival rate on the input links reaches roughly 58% of their capacity.

III. Routing algorithms

A host is usually attached directly to one router, its default router, also called the host's first-hop router. When a host sends a packet, the packet is first sent to the host's default router; at the receiving end, the default router of the destination host delivers packets arriving from other hosts to the destination. Routing from a source host to a destination host therefore reduces to routing from the source's default router (also called the source router) to the destination's default router (the destination router).

The purpose of a routing algorithm is simple: given a set of routers and the links connecting them, find a "good" path from the source router to the destination router, where a good path usually means a least-cost path. Because the topology formed by the routers in a network is naturally a graph, routing can be studied using graphs. In the network graph, nodes represent routers and edges represent the links between them. Each edge is assigned a value representing its cost (the cost may reflect link speed, monetary cost, line length, and so on); the goal of the routing algorithm is then to find the least-cost path between two given nodes in the graph.

1. Classification of routing algorithms

Routing algorithms can first be classified by whether they use global or decentralized information.
Global routing algorithm: computes the least-cost path from source to destination using complete, global knowledge of the network. Such algorithms are commonly called link-state (LS) algorithms, because they must know the cost of every link in the network.

Decentralized routing algorithm: computes the least-cost path in an iterative, distributed manner. No node has complete information about the costs of all links; each node begins knowing only the costs of its directly attached links and then, through iterative computation and exchange of information with its neighbors, gradually computes the least-cost path to a destination node or set of destinations.

Algorithms can also be classified as static or dynamic:

Static routing algorithm: routes change very slowly over time, typically through human intervention.

Dynamic routing algorithm: routing paths change as the network traffic load or topology changes. A dynamic algorithm can run periodically or in direct response to topology or link-cost changes.

A third classification is load-sensitive versus load-insensitive:

Load-sensitive algorithm: link costs vary dynamically to reflect the current level of congestion in the underlying links. Today's Internet routing algorithms are load-insensitive.

2. The link-state (LS) routing algorithm

The link-state algorithm takes the network topology and all link costs as input. In practice this is accomplished by having each node broadcast link-state packets to all other routers in the network, with each link-state packet containing the identities and costs of the links attached to the node. The result of the broadcast is that all nodes have an identical, complete view of the network, so every node can run the LS algorithm and compute the same set of least-cost paths.
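The per-node computation in the LS algorithm is a shortest-path computation, described next. As a sketch, here is a minimal priority-queue Dijkstra over a small hypothetical topology (node names and link costs are made up for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source to every node.

    graph: dict mapping node -> {neighbor: link_cost}.
    Returns (dist, prev): least costs and the predecessor on each path.
    """
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]                      # (cost so far, node)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        for v, c in graph[u].items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u                 # u is v's predecessor on the path
                heapq.heappush(pq, (nd, v))
    return dist, prev

# Hypothetical 4-node topology.
graph = {
    "u": {"v": 2, "x": 1},
    "v": {"u": 2, "x": 3, "w": 1},
    "x": {"u": 1, "v": 3, "w": 5},
    "w": {"v": 1, "x": 5},
}
dist, prev = dijkstra(graph, "u")
print(dist)  # least costs from u: {'u': 0, 'v': 2, 'x': 1, 'w': 3}
```

The `prev` map is exactly what a router needs to derive its forwarding table: following predecessors back from each destination yields the first hop on the least-cost path.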
Based on graph theory, Dijkstra's algorithm can be used to compute the shortest paths. It computes the least-cost paths from one node to all other nodes in the network. It is an iterative algorithm: after k iterations, the least-cost paths from the source node to k destination nodes are known. (Prim's algorithm, another classic graph algorithm, computes a minimum spanning tree rather than shortest paths; the two should not be confused.) The complexity of the algorithm is O(n^2).

The LS algorithm can cause routing oscillations. For example, suppose the cost of each link equals the load carried on that link, and consider four routers W, X, Y, and Z connected in a ring, with W the destination. Initially, X and Z each send one unit of traffic to W over their direct links, and Y sends traffic of amount e to W along the path through X. The nonzero link costs are then: c(X,W) = 1+e, c(Y,X) = e, c(Z,W) = 1. When the LS algorithm next runs, Y finds that the path Y->Z->W has cost 1, while its current path through X costs 1+e, so it switches to the path through Z; X likewise finds the counterclockwise direction cheaper and routes to W via Y and Z; Z's route is unchanged. All the traffic now flows counterclockwise, so when the LS algorithm runs again, every node discovers that the clockwise direction carries no load and therefore has lower cost: X routes directly to W, Y routes via X, and Z selects the path Z->Y->X->W.
The nonzero link costs are now c(Z,Y) = e, c(Y,X) = 1+e, c(X,W) = 2+e, and when the LS algorithm runs yet again the routes flip back; the oscillation continues indefinitely. One solution is to ensure that not all routers run the LS algorithm at the same time, for example by having each router randomize the time at which it sends its link advertisements.

3. The distance-vector (DV) routing algorithm

The distance-vector (DV) algorithm is an iterative, asynchronous, and distributed algorithm:

Distributed: each node receives some information from its directly attached neighbors, performs a computation, and distributes the result back to its neighbors.

Iterative: the process continues until no more information is exchanged between neighbors.

Asynchronous: the nodes do not need to operate in lockstep with each other.

The DV algorithm uses the Bellman-Ford equation: dx(y) = min_v { c(x,v) + dv(y) }, where dx(y) is the cost of the least-cost path from node x to node y, c(x,v) is the cost from node x to its neighbor v, and the minimum is taken over all of x's neighbors v. The equation says that the cost of the least-cost path from x to y equals the smallest value of c(x,v) + dv(y) among all neighbors v. An important practical consequence of the Bellman-Ford equation is that the neighbor v achieving the minimum is the next-hop node used when x forwards traffic toward y: to forward a packet toward y, x need only send it to that node v.

The idea of the algorithm: for the nodes of a network N, Dx = [Dx(y): y in N] is the distance vector of node x, the vector of x's cost estimates to every other node y in N. Each node x maintains the following data: for each neighbor v, the cost c(x,v) from x to its directly attached neighbor v;
the distance vector of node x itself, containing x's estimated cost to every destination in N; and the distance vector of each of x's neighbors, that is, Dv = [Dv(y): y in N] for each neighbor v of x. In the DV algorithm, each node sends a copy of its distance vector to each of its neighbors from time to time. When node x receives a new distance vector from a neighbor v, it stores v's distance vector and updates its own according to the Bellman-Ford equation. If node x's distance vector has changed as a result of this update, x sends its updated distance vector to each of its neighbors. This process of receiving updated distance vectors from neighbors, recomputing routing-table entries, and notifying neighbors when the least cost to a destination has changed continues until no update messages remain to be sent. DV algorithms are used by RIP and BGP in the Internet.

1. The DV algorithm: link-cost changes and link failure

When a node running the DV algorithm detects that the cost of a link to one of its neighbors has changed, it updates its distance vector and, if a least-cost path has changed, notifies its neighbors of its new distance vector. When a link cost decreases, the DV algorithm quickly converges to the new least costs. When a link cost increases, however, problems arise. Consider the topology in which, before the change, Dy(x) = 4, Dy(z) = 1, Dz(y) = 1, Dz(x) = 5. At time t0, y detects the link-cost change (the cost of the y-x link increases from 4 to 40). y updates its least-cost path; according to the Bellman-Ford equation, the computed value is 6, which is obviously incorrect if you inspect the topology. The cause of this phenomenon is that when y updates its distance vector, it uses the distance vector that z previously advertised to it, in which Dz(x) = 5; but Dz(x) itself depends on Dy(x),
and after Dy(x) changes, that advertised value is stale and incorrect. Worse, a routing loop now exists: to reach x, y routes through z while z routes through y, so packets destined for x can never arrive. Having computed a new least cost to x, y notifies z of its new distance vector at time t1; after receiving y's new distance vector, z computes a new Dz(x) = 7. This process continues until the correct values Dy(x) = 40 and Dz(x) = 41 are finally reached. The lesson is that with the DV algorithm, bad news travels slowly. Moreover, the scenario above increased the cost of only one link; if the costs of several links increase, the so-called count-to-infinity problem can occur.

2. The DV algorithm: adding poisoned reverse

Poisoned reverse can break the loop in the topology above. The idea: if z routes through y to reach x, then z advertises to y that its distance to x is infinite. Poisoned reverse solves only this particular kind of loop; loops involving three or more nodes are beyond its reach.

Comparing the LS and DV algorithms: in the DV algorithm, each node exchanges information only with its directly attached neighbors, but it provides them with its least-cost estimates to every other node in the network (everything it knows). In the LS algorithm, each node exchanges information with all other nodes (by broadcast), but it tells them only the costs of its directly attached links.

Message complexity: the LS algorithm requires each node to know the cost of every link in the network, which requires O(|N|·|E|) messages to be sent; moreover, whenever a link cost changes, the new cost must be sent to all nodes. The DV algorithm requires messages to be exchanged only between directly attached neighbors at each iteration.
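Before continuing the comparison, the count-to-infinity dynamics from the link-cost-change example above can be reproduced in a short simulation. This is a sketch: it assumes a direct z-x link of cost 50, a value the text does not give but which is consistent with the final costs Dy(x) = 40 and Dz(x) = 41:

```python
def count_to_infinity():
    """Simulate the y-z example: c(y,x) jumps from 4 to 40.

    Assumed costs: c(y,z) = 1 and a direct z-x link of cost 50 (assumed).
    Returns the converged estimates and the number of exchange rounds.
    """
    c_yx, c_yz, c_zx = 40, 1, 50      # costs after the increase
    dy_x, dz_x = 4, 5                 # stale estimates from before the change
    steps = 0
    while True:
        new_dy = min(c_yx, c_yz + dz_x)    # y recomputes via Bellman-Ford
        new_dz = min(c_zx, c_yz + new_dy)  # z reacts to y's advertisement
        steps += 1
        if (new_dy, new_dz) == (dy_x, dz_x):
            break                          # no change: converged
        dy_x, dz_x = new_dy, new_dz
    return dy_x, dz_x, steps

dy, dz, steps = count_to_infinity()
print(dy, dz, steps)  # reaches 40 and 41 only after many exchange rounds
```

The first round reproduces the text's Dy(x) = 6 and Dz(x) = 7, and the estimates then creep upward round by round: bad news travels slowly.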
With DV, when a link cost changes, the new cost is propagated only if it changes the least cost of a node attached to that link.

Speed of convergence: LS is an O(n^2) algorithm, whereas DV can converge slowly and can suffer routing loops while converging.

Robustness: because each node in the LS algorithm computes its own routes independently, LS offers a degree of robustness; in the DV algorithm, an incorrect calculation at one node can spread through the entire network.

IV. Hierarchical routing

Reasons for hierarchy:

Scale: the Internet consists of hundreds of millions of hosts. Without hierarchical routing, every router would need to store routing information for all of them, which would require enormous memory. Moreover, with the LS algorithm the network would be drowned in LS broadcasts and could not function, and with the DV algorithm the computation could be expected never to converge.

Administrative autonomy: organizations prefer to run and manage their networks as they see fit, and to hide aspects of their internal network layout from the outside.

To solve both problems, routers are organized into autonomous systems (ASs). Each AS consists of a group of routers under the same administrative control; routers within the same AS run the same routing algorithm and have full information about one another. The routing algorithm running inside an autonomous system is called an intra-AS routing protocol. One or more routers in an AS have the additional task of forwarding packets to destinations outside the AS; these routers are called gateway routers. When an AS has only one gateway router, the routers inside the AS can easily forward packets destined outside the AS to the gateway router (because every router in the AS knows its least-cost path to the gateway router), and the gateway router then forwards the traffic out of the AS.
However, when an AS has multiple gateway routers, the AS must learn which destinations are reachable through the neighboring ASs connected to each of its gateway routers, and it must propagate this reachability information to all routers inside the AS. These two tasks are handled by the inter-AS routing protocol. All ASs in the Internet run the same inter-AS routing protocol. Each router receives information from an intra-AS routing protocol and from the inter-AS routing protocol, and uses both to configure its forwarding table.

When an AS learns through the inter-AS routing protocol that some destination is reachable via one of its own gateway routers, how does a router inside the AS install a route to that destination? A common strategy is hot-potato routing: get the packet out of the AS as quickly (that is, as cheaply) as possible. The router selects the gateway router with the following properties, and installs a route along its own path to that gateway: the gateway router can reach the destination, and among such gateways it has the least-cost path from this router.

When an AS learns of a destination from one neighboring AS, it may advertise this routing information to some of its other neighboring ASs; whether to advertise, and what to advertise, is a policy matter decided by the AS's administrators.

In summary: within an AS, all routers run the same intra-AS routing protocol; across ASs, all ASs run the same inter-AS routing protocol; a router inside an AS needs to know only about the routers inside its own AS; and each AS's administrators may run whatever intra-AS routing protocol they choose.
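Hot-potato routing as described above reduces to a simple minimization over the candidate gateways. A sketch (the gateway names and intra-AS costs are hypothetical):

```python
def hot_potato_gateway(gateways_reaching_dest, intra_as_cost):
    """Among gateways that can reach the destination, pick the one with the
    least intra-AS path cost from this router (hot-potato routing)."""
    return min(gateways_reaching_dest, key=lambda g: intra_as_cost[g])

# Hypothetical intra-AS costs from this router to each gateway router.
intra_as_cost = {"gw1": 5, "gw2": 2, "gw3": 7}
# Suppose only gw1 and gw3 advertise reachability to the destination.
print(hot_potato_gateway(["gw1", "gw3"], intra_as_cost))  # gw1 (cost 5 < 7)
```

Note that gw2 is the cheapest gateway overall, but it is ignored because it does not reach the destination; hot potato minimizes only over the gateways that do.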
