Ethernet access technology is well worth studying; here we focus mainly on its practical application. QoS service assurance means meeting the requirements of various carrier-grade network performance indicators. QoS mechanisms are mature within the LAN, but when applications are extended to public telecommunication networks, stricter end-to-end QoS and SLA (Service Level Agreement) guarantees are required.
However, traditional Ethernet has no mechanism to guarantee end-to-end jitter, latency, and packet-loss performance; it can provide neither the network-wide QoS specifications required by real-time services nor the user authentication and billing statistics required when nodes and networks are shared by multiple users.
Early Ethernet access technology mainly carried data services in the LAN. Data services are not sensitive to latency, and TCP retransmission tolerates the loss of a small number of packets over Ethernet, so differentiated service-quality assurance was not required. Carrier-grade Ethernet access technology, however, must carry integrated services, and a best-effort service that does not distinguish traffic types can hardly guarantee their quality. The ISCOM carrier-grade Ethernet switch implements QoS through Diff-Serv (the Differentiated Services architecture). The implementation consists of flow classification, marking, congestion control, and queue scheduling.
(1) Flow classification: service flows can be differentiated by MAC address, VLAN ID, IP address, TCP/UDP port number, or even the first 128 bytes of the user packet (a combined classification-and-marking sketch follows this list).
(2) Marking: based on configured policies, the QoS parameters of a data flow are mapped to the IP ToS field, the MPLS CoS field, or the 802.1p field. Traffic is usually divided into EF (Expedited Forwarding, for services with strict real-time requirements), several AF classes (Assured Forwarding, for services that are sensitive to packet loss but less demanding on delay), and BE (best effort, for ordinary IP services);
(3) Congestion control: different congestion-control algorithms are applied to data flows according to their service requirements, so that when a network node becomes congested, packets of different classes are discarded with different treatment rather than indiscriminately (a WRED-style sketch follows this list);
(4) Queue scheduling: to bound latency and latency jitter, scheduling algorithms such as Strict Priority (SP) and Weighted Round Robin (WRR) are required. SP serves services with strict latency requirements first, while WRR allocates bandwidth among several services according to configured weights (a scheduler sketch follows this list).
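To make steps (1) and (2) concrete, the following minimal Python sketch classifies a packet by a few header fields and then marks it with a DSCP value and an 802.1p priority. The rule list, field names, and code points are illustrative assumptions, not taken from any particular switch configuration.

# Flow classification and marking (illustrative sketch).
# Each rule: (match-fields, traffic class); the first matching rule wins.
RULES = [
    ({"udp_dport": 5060}, "EF"),   # assumed example: VoIP traffic -> Expedited Forwarding
    ({"vlan_id": 100},    "AF"),   # assumed example: video VLAN -> Assured Forwarding
    ({},                  "BE"),   # default: best effort
]

# Class -> (DSCP value for the IP ToS/DS field, 802.1p priority)
MARKING = {"EF": (46, 5), "AF": (26, 4), "BE": (0, 0)}

def classify(packet):
    """Return the traffic class of a packet (a dict of header fields)."""
    for match, traffic_class in RULES:
        if all(packet.get(k) == v for k, v in match.items()):
            return traffic_class
    return "BE"

def mark(packet):
    """Write DSCP and 802.1p values into the packet according to its class."""
    dscp, pcp = MARKING[classify(packet)]
    packet["dscp"] = dscp
    packet["802.1p"] = pcp
    return packet

print(mark({"src_mac": "00:11:22:33:44:55", "vlan_id": 100, "udp_dport": 4000}))
# -> matched by the VLAN rule, so marked with DSCP 26 / 802.1p 4 (AF)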
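One common way to realize step (3) is a WRED-style drop policy: each class gets its own queue-depth thresholds, so best-effort traffic is dropped early while real-time traffic is protected until the queue is nearly full. The sketch below assumes per-class profiles with invented threshold values; it only shows the drop decision, not queue management itself.

import random

# (min_threshold, max_threshold, max_drop_probability) per traffic class.
PROFILES = {
    "EF": (40, 50, 0.05),   # drop EF only when the queue is nearly full
    "AF": (25, 45, 0.20),
    "BE": (10, 40, 0.50),   # start dropping best-effort traffic early
}

def should_drop(traffic_class, queue_depth):
    """Return True if an arriving packet of this class should be discarded."""
    lo, hi, p_max = PROFILES[traffic_class]
    if queue_depth < lo:
        return False                      # no congestion: accept
    if queue_depth >= hi:
        return True                       # severe congestion: tail drop
    # Between thresholds: drop probability grows linearly up to p_max.
    p = p_max * (queue_depth - lo) / (hi - lo)
    return random.random() < p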
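Finally, step (4) can be pictured as one strict-priority queue served ahead of a set of weighted queues. The toy scheduler below assumes EF maps to the SP queue and AF/BE to WRR queues with weights 3:1; the class names and weights are illustrative only.

from collections import deque

class SpWrrScheduler:
    def __init__(self, weights):
        self.sp_queue = deque()                      # EF: always served first
        self.wrr_queues = {name: deque() for name in weights}
        self.weights = weights                       # e.g. {"AF": 3, "BE": 1}
        self.credits = dict(weights)                 # remaining sends this WRR round

    def enqueue(self, traffic_class, packet):
        if traffic_class == "EF":
            self.sp_queue.append(packet)
        else:
            self.wrr_queues[traffic_class].append(packet)

    def dequeue(self):
        """Pick the next packet to transmit, or None if all queues are empty."""
        if self.sp_queue:                            # strict priority wins
            return self.sp_queue.popleft()
        for _ in range(2):                           # at most one credit refresh
            for name, q in self.wrr_queues.items():
                if q and self.credits[name] > 0:
                    self.credits[name] -= 1
                    return q.popleft()
            self.credits = dict(self.weights)        # start a new WRR round
        return None

sched = SpWrrScheduler({"AF": 3, "BE": 1})
sched.enqueue("BE", "be-1"); sched.enqueue("AF", "af-1"); sched.enqueue("EF", "ef-1")
print(sched.dequeue(), sched.dequeue(), sched.dequeue())   # ef-1 af-1 be-1

Because SP always wins, a misbehaving high-priority flow can starve the weighted queues; in practice SP is therefore reserved for rate-limited real-time traffic, while WRR shares the remaining bandwidth in proportion to the configured weights.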