QoS learning notes


Marking with the MQC:

1. Define the class map: class-map [match-all | match-any] {MAP-NAME} -- match-all is the default.
2. Define the match conditions:
   match access-group {NUMBER}
   match input-interface {INTERFACE}
   match class-map {MAP-NAME} -- nested class map
   match source-address {MAC-ADDRESS} -- source MAC address
   match destination-address {MAC-ADDRESS} -- destination MAC address
   match vlan {VLAN-ID}
   match ip dscp {DSCP}
   match ip precedence {PRECEDENCE}
   match protocol {PROTOCOL} -- based on NBAR

   Router(config)# class-map FOO
   Router(config-cmap)# match ?
     access-group         Access group
     any                  Any packets
     class-map            Class map
     cos                  IEEE 802.1Q/ISL class of service/user priority values
     destination-address  Destination address
     input-interface      Select an input interface to match
     ip                   IP specific values
     mpls                 Multi Protocol Label Switching specific values
     not                  Negate this match result
     protocol             Protocol
     qos-group            Qos-group
     source-address       Source address

3. Define the policy map: policy-map {POLICY-NAME}
4. Call the class map inside it: class {MAP-NAME}
5. Define the per-class actions:
   set ip dscp {DSCP}
   set ip precedence {PRECEDENCE}
   set cos {COS}
   priority {KBPS | percent PERCENT} [Bc] -- defines the bandwidth of priority (LLQ) traffic and its burst size
   bandwidth {KBPS | percent PERCENT} -- defines the reserved bandwidth
   random-detect -- enables WRED
   police {CIR BC BE} conform-action {ACTION} exceed-action {ACTION} [violate-action {ACTION}] -- token-bucket rate limiting
   queue-limit {PACKETS} -- defines the maximum number of packets in the queue
   service-policy {POLICY-NAME} -- calls another policy (nesting)
   shape {average | peak} {CIR [BC] [BE]} -- shaping
   drop
6. Apply the policy map with the service-policy {input | output} {POLICY-NAME} command in interface mode.

Verification:
   show policy-map [POLICY-NAME]
   show policy-map interface [INTERFACE]
   show class-map [CLASS-NAME]
   show ip nbar pdlm
   show ip nbar port-map -- displays the ports NBAR uses for each protocol

NBAR restrictions -- NBAR is not supported on:
1. Fast EtherChannel
2. tunnel or encrypted interfaces
3. SVIs (switched virtual interfaces)
4. dialer interfaces
5. Multilink PPP (MLP)
Before using NBAR, run: ip cef

NBAR usage:
   ip nbar pdlm flash://bittorrent.pdlm -- loads bittorrent.pdlm into the router from flash (copy the PDLM to flash beforehand)
   match protocol http url "*.jpeg|*.jpg" -- matches URLs containing jpeg or jpg
   match protocol http url "*.gif" -- matches URLs containing gif

Congestion management

WFQ features:
1. Classification is flow-based (the 5-tuple); the number of queues N is configurable.
2. Bandwidth at dequeue is allocated according to IP precedence: the lower the precedence, the less bandwidth.
3. When other queues are idle, a queue can borrow their bandwidth, returning it when their traffic resumes.
4. WFQ is the default on serial interfaces at or below E1 speed (2.048 Mbps).
Configuration command (interface mode): fair-queue
Verification: show queueing fair; show queue [INTERFACE]

PQ (priority queueing)
Disadvantages: 1. only static configuration is possible, so it cannot adapt to network topology changes; 2. lower-priority queues can be starved; 3. every packet must be classified, so it is slower than FIFO.
Configuration:
1. Define the priority queues.
   By protocol: priority-list {LIST-NUMBER} protocol {PROTOCOL-NAME} {high | medium | normal | low}
   By inbound interface: priority-list {LIST-NUMBER} interface {INTERFACE} {high | medium | normal | low}
2. Define the default priority queue; unclassified packets are sent here. The default level is normal.
   priority-list {LIST-NUMBER} default {high | medium | normal | low}
3. Define the depth of each queue, from high to low. The defaults are 20, 40, 60, 80.
   priority-list {LIST-NUMBER} queue-limit {HIGH-LIMIT MEDIUM-LIMIT NORMAL-LIMIT LOW-LIMIT}
4. Apply the list with the priority-group {LIST-NUMBER} command on the interface.
Verification: show queue [INTERFACE]; show queueing priority

RTP priority
UDP packets in an even-numbered port range can be given priority, limited so they do not exceed the specified bandwidth:
   ip rtp priority {STARTING-RTP-PORT-NUMBER} {PORT-RANGE} {BANDWIDTH}
   max-reserved-bandwidth {PERCENT}
Verification: show queue [INTERFACE]; debug priority

CRTP (Compressed Real-Time Protocol) compresses IP/UDP/RTP headers. There are three compression methods: link compression, payload compression, and header compression. A frame consists of: layer-2 header | TCP/IP header | payload.
1. Link compression: the entire frame is compressed, header and payload included. Protocol-independent. Applicable only to point-to-point links. Two algorithms: Predictor and STAC.
2. Payload compression: only the data portion is compressed; the header is unaffected. Applicable to Frame Relay or ATM networks.
3. TCP/IP header compression: protocol-specific; worthwhile for small packets carrying only a few bytes of data.
Configuration:
Enable compression on HDLC, LAPB, or PPP links: compress [predictor | stac | mppc]
On a Frame Relay point-to-point interface or subinterface: frame-relay payload-compress
Enabling CRTP:
1. ip rtp header-compression [passive] -- without passive, all outgoing RTP flows are compressed; with passive, outgoing RTP packets on an interface are compressed only if the RTP packets arriving on that interface are compressed.
2. ip rtp compression-connections {NUMBER} -- changes the number of CRTP compression sessions; the default is 16.
3. Enable TCP header compression: ip tcp header-compression [passive] -- if passive is not specified, compression is applied to all flows.
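As a concrete sketch, the header-compression commands above might be combined on a point-to-point serial link like this (the interface name and session count are illustrative, not from the notes):

```
! Hypothetical PPP serial link; names and numbers are illustrative.
interface Serial0/0
 encapsulation ppp
 compress stac                      ! STAC compression on the PPP link
 ip rtp header-compression          ! compress all outgoing RTP flows (no "passive")
 ip rtp compression-connections 32  ! raise the CRTP session limit from the default 16
 ip tcp header-compression passive  ! compress TCP headers only if the peer does
```

The result can be checked with show ip rtp header-compression Serial0/0 detail.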
   ip tcp compression-connections {NUMBER} -- number of TCP header-compression sessions

CRTP on Frame Relay: enable CRTP on the physical interface, and its subinterfaces inherit the feature:
   frame-relay ip rtp header-compression [passive]
   frame-relay ip rtp compression-connections {NUMBER}
To enable CRTP only for a specific PVC:
   frame-relay map ip {IP-ADDRESS} {DLCI} [broadcast] rtp header-compression [active | passive] [connections NUMBER]
Verification:
   show ip rtp header-compression [INTERFACE] [detail]
   show frame-relay ip rtp header-compression [INTERFACE]

WRR (weighted round-robin)
WRR is a queue-scheduling mechanism that allocates bandwidth according to the weight of each outbound queue; bandwidth is proportional to weight. WRR takes effect only during congestion; when there is no congestion, no bandwidth allocation is done.
Steps:
1. Enable mls qos.
2. Enter interface mode.
3. Distribute packets to the queues by CoS: wrr-queue cos-map {QUEUE-ID} {THRESHOLD} {COS1 ... COSn} configures the mapping between CoS values and the outbound queues. Queues are numbered from the lowest priority up to the strict-priority queue; the strict-priority queue has its own mapping (priority-queue cos-map).
4. Configure the WRR queue weights: wrr-queue bandwidth {WEIGHT1 WEIGHT2 WEIGHT3 WEIGHT4}. Each queue's share is its weight divided by the total weight. Example: wrr-queue bandwidth 1 2 3 4 gives queue 1 a share of 1/10.
5. Define the transmit-queue length ratios (values 1-100%): wrr-queue queue-limit {LOW-PRIORITY-QUEUE-WEIGHT MEDIUM-PRIORITY-QUEUE-WEIGHT HIGH-PRIORITY-QUEUE-WEIGHT}. Because low-latency traffic consists of small packets, it does not need a large buffer; most of the buffers are usually given to the low-priority queues.

Congestion avoidance
1. Tail drop treats all flows equally.
This causes TCP global synchronization.
   wrr-queue threshold {QUEUE-ID} {THR1% THR2%} -- each threshold is the fill level of the output queue at which traffic is dropped; the last threshold is 100%, where tail drop occurs.
2. WRED. The difference between WRED and RED is that WRED introduces the IP precedence or DSCP value to differentiate the drop policy: you can set different queue lengths, thresholds, and drop probabilities per IP precedence or DSCP value. RED and WRED act only on TCP traffic, and the drop decision is based on the average queue depth, which prevents bursty traffic from being treated unfairly. WRED conflicts with LLQ; it is often used together with WRR. WRED can be configured on an interface or in a policy, keyed on either IP precedence or DSCP -- you can only choose one of the two.
(1) DSCP-based:
   random-detect dscp-based
   random-detect dscp {DSCP} {MIN-THRESHOLD MAX-THRESHOLD MARK-PROBABILITY}
(2) IP precedence-based:
   random-detect
   random-detect precedence {PRECEDENCE} {MIN-THRESHOLD MAX-THRESHOLD MARK-PROBABILITY}
WRED with WRR:
   wrr-queue random-detect min-threshold {QUEUE-ID} {THR1% [THR2% [THR3% ...]]}
   wrr-queue random-detect max-threshold {QUEUE-ID} {THR1% [THR2% [THR3% ...]]}
min-threshold is the fill level at which some packets begin to be dropped; max-threshold is the fill level at which all packets are dropped.
Example:
   int G1/1
    wrr-queue bandwidth 50 75
    wrr-queue queue-limit 100 50
    wrr-queue random-detect min-threshold 1 50 70
    wrr-queue random-detect max-threshold 1 75 100
    wrr-queue cos-map 1 1 0
    wrr-queue cos-map 1 2 2
    wrr-queue cos-map 2 1 4
    wrr-queue cos-map 2 2 6
    priority-queue cos-map 1 1 5 7
    rcv-queue cos-map 1 1 0
    switchport
Explanation: there are two WRR queues in total.
When queue 1 is 50% and 70% full, the switch performs WRED (that is, begins dropping) on packets mapped to threshold 1 and threshold 2 respectively; when queue 1 is 75% and 100% full, the switch drops all packets mapped to threshold 1 and threshold 2. Note: queue 2 does not use WRED.

Flow-based WRED (used in combination with WFQ) drops packets from small flows with low probability and from large flows with high probability, protecting the small flows. Commands:
1. Enable flow-based WRED: random-detect flow
2. Set the average depth factor. The value must be a power of 2; the default is 4. Optional: random-detect flow average-depth-factor {SCALING-FACTOR}. This changes the multiplicative scaling factor applied to the per-flow average depth; changing it effectively changes the permitted queue length.
3. Set the number of flows tracked by flow-based WRED. The default is 256: random-detect flow count {NUMBER}

The QoS processing flow for a traffic policy:
1. flow-based or class-based classification;
2. policing or shaping with a token bucket (CAR or GTS);
3. congestion avoidance (tail drop or WRED);
4. congestion management (the various queueing mechanisms);
5. dequeue and transmit.
Where does marking happen? Marking can be performed during the CAR step (CAR can mark or re-mark).

CAR (Committed Access Rate) uses a token bucket for traffic control. After classification, traffic that does not need rate limiting is sent directly; traffic that must be limited goes through the token bucket. Traffic passes only when the bucket contains enough tokens; if there are not enough tokens, the traffic is either dropped directly or (when shaping) buffered and sent later once enough tokens accumulate. CAR can also be used to mark or re-mark (that is, to set or reset the IP precedence). CAR can apply different traffic and marking characteristics to different classes of packets -- you can configure a separate CAR statement for each class.
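As a sketch of a single per-class CAR statement (the ACL number, interface, and rates are illustrative; the rate-limit syntax itself is covered in these notes):

```
! Hypothetical example: police inbound web traffic to 8 Mbit/s,
! re-marking conforming packets to IP precedence 5 and dropping the excess.
access-list 101 permit tcp any any eq www
interface FastEthernet0/0
 rate-limit input access-group 101 8000000 16000 24000 conform-action set-prec-transmit 5 exceed-action drop
```

Here 8000000 is the CIR in bits per second, while 16000 and 24000 are the normal (Bc) and extended (Be) bursts in bytes.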
CAR policies can also be cascaded: for example, the total traffic is limited first, and then narrower rate limits are imposed on each category. CAR is generally used on a border router. You can configure multiple CAR policies on one interface; packets are matched against the CAR statements in order, and packets that match none are forwarded by default.
CAR has the following restrictions:
1. only IP traffic can be rate-limited;
2. Fast EtherChannel is not supported;
3. tunnel interfaces are not supported;
4. ISDN PRI interfaces are not supported.
Command:
   rate-limit {input | output} {CIR BC BE} conform-action {ACTION} exceed-action {ACTION}
Note: CIR is in bits per second; Bc and Be are in bytes.
conform-action applies when the data to be sent fits within the normal burst (Bc). exceed-action applies when the data to be sent exceeds the normal burst but fits within the excess burst (Be).
The action options are:
   continue -- evaluate the next CAR statement
   drop -- discard the packet
   transmit -- forward the packet
   set-prec-continue {PRECEDENCE} -- set the IP precedence and evaluate the next CAR statement
   set-prec-transmit {PRECEDENCE} -- set the IP precedence and forward the packet
   set-dscp-continue {DSCP} -- set the DSCP value and evaluate the next CAR statement
   set-dscp-transmit {DSCP} -- set the DSCP value and forward the packet
Beyond rate-limiting all traffic, extended CAR can target specific traffic by DSCP value, IP precedence, ACL, or MAC address:
1. CAR on a DSCP value:
   rate-limit {input | output} dscp {DSCP} {CIR BC BE} conform-action {ACTION} exceed-action {ACTION}
2. CAR on an ACL:
   rate-limit {input | output} access-group {ACL-NUMBER} {CIR BC BE} conform-action {ACTION} exceed-action {ACTION}
3. CAR on a rate-limit ACL:
   rate-limit {input | output} access-group rate-limit {ACL-NUMBER} {CIR BC BE} conform-action {ACTION} exceed-action {ACTION}
The rate-limit ACL itself only defines what to match:
   access-list rate-limit {ACL-NUMBER} {PRECEDENCE | MAC-ADDRESS} -- matches on IP precedence or MAC address.
Verification:
1. view the rate-limit ACLs: show access-lists rate-limit [ACL]
2. view the rate-limit information on an interface: show interfaces [INTERFACE] rate-limit
In a policy map, the equivalent of CAR is police {CIR BC BE} conform-action {ACTION} exceed-action {ACTION} violate-action {ACTION}; the action options are the same as for CAR.

Traffic shaping is usually implemented with a buffer and a token bucket. When packets arrive too fast, they are buffered first, and the buffered packets are then sent out evenly under the control of the token bucket. The technology used is GTS (Generic Traffic Shaping). The main difference between GTS and CAR is that when CAR polices traffic, packets that do not conform to the traffic profile are dropped, whereas GTS buffers the non-conforming packets; this reduces packet loss while still enforcing the traffic profile. Packets that do not require GTS are sent directly without passing through the token bucket. When packets have been queued for lack of tokens, GTS periodically takes packets from the queue and sends them, comparing each transmission against the number of tokens in the bucket, until the tokens are too few to send the next packet or the queue is empty. In general, shaping is performed at the egress of a router and policing at the ingress.
Commands:
1. basic GTS: traffic-shape rate {CIR [BC [BE]]}
2. ACL-based GTS: traffic-shape group {ACL} {CIR [BC [BE]]}
Verification:
1. view the GTS configuration: show traffic-shape [INTERFACE]
2. view GTS statistics: show traffic-shape statistics [INTERFACE]

GTS on Frame Relay:
1. enable GTS on the interface: traffic-shape rate {CIR [BC [BE]]}
2. define the minimum traffic rate to fall back to when the interface receives a Backward Explicit Congestion Notification (BECN): traffic-shape adaptive {CIR}
3. reflect a received Forward Explicit Congestion Notification (FECN) back as a BECN. Optional: traffic-shape fecn-adapt

GTS can also be configured in a policy map, and CBWFQ can be used when configuring GTS. Steps for class-based traffic shaping:
1. define the CIR, Bc, and Be: (config-pmap-c)# shape {average | peak} {CIR [BC] [BE]} -- average shapes to the average rate, peak to the peak rate.
2. define the buffer limit; the default is 1000. Optional: (config-pmap-c)# shape max-buffers {NUMBER-OF-BUFFERS}
3. apply the CBWFQ policy: (config-if)# service-policy output {POLICY-NAME}
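Putting the class-based shaping steps together, a minimal sketch might look like this (the class and policy names, DSCP value, interface, and rates are illustrative):

```
! Hypothetical class-based shaping combined with CBWFQ
class-map match-all BULK
 match ip dscp af11                ! classify bulk traffic by DSCP (illustrative value)
policy-map SHAPE-BULK
 class BULK
  shape average 256000 8000 8000   ! CIR 256 kbit/s, Bc and Be of 8000 bits
interface Serial0/1
 service-policy output SHAPE-BULK
```

The shaping counters can then be inspected with show policy-map interface Serial0/1.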
