TRAFFIC-CONTROL: TC Traffic Management Introduction -- turbolinux Knowledge Base


I. Introduction to TC

TC (traffic control), as the name implies, is Linux's tool for controlling network traffic. With TC you can control the rate at which a network interface sends data.
Each network interface (for example: eth0, ppp0) has a queue that manages and schedules the data waiting to be sent. TC works by attaching queues of different types
to a network interface, which changes the rate and priority of packet transmission and thereby achieves traffic control.
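Before changing anything, you can inspect the queue currently attached to an interface. A minimal check (the device name eth0 is just an example):

# tc qdisc show dev eth0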

II. Enabling TC

If you want to use TC, make sure your kernel configuration includes "IP: advanced router" and "IP: policy routing",
along with the relevant sub-options. Once the kernel has been recompiled, the kernel's network queues can be configured with the tc command.
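If you are unsure whether your running kernel was built with these options, you can usually inspect its config file. A sketch, assuming the config is installed under /boot (the path and the exact option states vary by distribution):

# grep -E 'CONFIG_IP_ADVANCED_ROUTER|CONFIG_NET_SCHED|CONFIG_NET_SCH_(TBF|SFQ|HTB)' /boot/config-$(uname -r)
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_TBF=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_HTB=m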

III. Queue types in TC

TC controls the traffic of a network interface by attaching queues with different types and properties.
Queue types supported by the Linux kernel include:
TBF (Token Bucket Filter), pfifo_fast (a three-band first-in, first-out queue),
SFQ (Stochastic Fairness Queueing), HTB (Hierarchical Token Bucket), and others.

Each queue type has different uses and parameters:

1. pfifo_fast (first-in, first-out queue)


This queue has three so-called "bands" (channels). FIFO rules apply within each band. In addition, as long as there are packets waiting in band 0, packets in band 1 are not processed; the same relationship holds between band 1 and band 2.
pfifo_fast only schedules packets; it does not control data traffic rates.

# ip link list
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:40:ca:66:3d:d2 brd ff:ff:ff:ff:ff:ff
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
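Which band a packet goes into is decided from its TOS bits through the qdisc's priority map, which tc can display. Illustrative output (a sketch; the priomap shown is the kernel's usual default, and the handle may differ):

# tc qdisc ls dev eth0
qdisc pfifo_fast 0: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1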


2. Token bucket filter (TBF)

The Token Bucket Filter (TBF) is a simple queueing discipline that passes packets at a predetermined rate, while allowing brief bursts of traffic
above the set rate.
TBF is very precise and puts very little load on the network and the processor. It is implemented as a buffer (the bucket), constantly filled with virtual units of data called "tokens"
at a specific rate (the token rate). The most important parameter of the bucket is its size, i.e. the number of tokens it can store.
Each arriving token lets one packet leave the data queue and is then removed from the bucket. This algorithm links two flows, the token flow and the data flow,
so we get three possible scenarios:

The data flow arrives at TBF at a rate equal to the token rate. In this case each incoming packet matches one token and passes through the queue without delay.

The data flow arrives at TBF at a rate lower than the token rate. Packets passing through the queue consume only part of the tokens, and the leftover tokens accumulate in the bucket until it is full.
The saved tokens can then be spent when traffic needs to be sent faster than the token rate, in which case a short burst of transmission occurs.

The data flow arrives at TBF at a rate greater than the token rate. This means the tokens in the bucket are soon exhausted, causing TBF to throttle the flow for a while (a so-called "overlimit" situation).
If packets keep arriving, they begin to be dropped.

The last scenario is the important one, because it is what allows the filter to shape the rate of the data passing through it.
Accumulated tokens allow short bursts of overlimit data to pass without loss, but a sustained overload makes packets wait ever longer and eventually be dropped.
<TBF parameter description>

limit / latency
limit determines how many bytes can wait in the queue for available tokens. Alternatively, you can set the latency parameter,
which specifies the maximum time a packet may wait for transmission inside TBF. The latter computes the queue limit from the bucket size, the rate, and (if set) the peak rate.
The two parameters are mutually exclusive: specify one or the other.

burst / buffer / maxburst
The size of the bucket, in bytes. This specifies the maximum amount of token credit that can be available at once. Typically, the larger the bandwidth you are shaping, the larger the buffer needs to be.
On Intel hardware, shaping at 10 Mbit/s requires a buffer of at least 10 kbytes to sustain that rate.
If your buffer is too small, arriving tokens have nowhere to go (the bucket is already full), which can lead to dropped packets.
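The 10 kbyte figure follows from the kernel's timer resolution: the bucket must hold at least one timer tick's worth of tokens. A rough calculation, assuming a timer frequency of HZ = 100 (typical for kernels of that era):

rate          = 10 Mbit/s = 1,250,000 bytes/s
HZ            = 100 ticks/s
minimum burst = rate / HZ = 12,500 bytes per tick, i.e. on the order of 10 kbytes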

mpu
A zero-length packet does not use zero bandwidth; on Ethernet, for example, no data frame is smaller than 64 bytes. The mpu (Minimum Packet Unit)
determines the minimum token consumption per packet. The default value is 0.

rate
The speed knob. See the discussion of limit above.
If you also want to set a peak rate, the following parameters apply:

peakrate
The maximum rate at which the bucket may be drained; this caps how fast bursts are sent.

mtu / minburst
When peakrate is set, this should be set to the MTU of the interface to improve accuracy. If you need a peakrate but can tolerate larger bursts of data,
this value can be raised: a minburst of 3000 bytes allows a peakrate of about 3 Mbit/s.
The token bucket filter is suitable for networks that need a precisely set rate; it only shapes traffic and does not schedule packets.
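Putting these parameters together, a sketch of a TBF that caps bursts with a peak rate (the device name and rates are illustrative; minburst is set to one MTU-sized Ethernet frame):

# tc qdisc add dev eth0 root tbf rate 1mbit burst 10kb latency 70ms peakrate 2mbit minburst 1540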

3. SFQ (Stochastic Fairness Queueing)

SFQ (Stochastic Fairness Queueing) is a simple implementation from the fair queueing family of algorithms. It is less precise than others,
but it achieves a high degree of fairness while requiring very little computation. The key concept in SFQ is the "session" (or "flow"), which mostly corresponds to a TCP session or a UDP
stream. Traffic is divided into a fairly large number of FIFO queues, one per session. Data is sent in a simple round-robin fashion, giving each session its turn to send.
This is a fairly even-handed approach that ensures no single session can drown out the others. SFQ is called "stochastic" because it does not really allocate a queue for every
session; instead, it uses a hashing algorithm to map all sessions onto a limited number of queues.
Because of the hashing, multiple sessions may be mapped to the same queue, which means they share that queue's turn, i.e. its bandwidth. To keep this effect from becoming noticeable,
SFQ changes its hashing algorithm frequently, so that any two colliding sessions only do so for a few seconds.
SFQ only takes effect when your outgoing interface is truly saturated! Otherwise no queue builds up on your Linux machine and SFQ has no effect.
SFQ does not reshape traffic rates; it only schedules packets.

<SFQ parameter description>
perturb
The number of seconds after which the hashing algorithm is reconfigured. If unset, the hash is never reconfigured (not recommended). 10 seconds is probably a good value.
quantum
The number of bytes a stream may dequeue before the turn passes to the next queue. Defaults to the length of one maximum-size packet (the MTU). Do not set this value below the MTU!
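As a sketch, both parameters can be combined on one line (values are illustrative; 1514 bytes corresponds to one full Ethernet frame):

# tc qdisc add dev eth0 root sfq perturb 10 quantum 1514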

IV. TC configuration examples

1. Configuring a single queue

(1). Attach a TBF (token bucket filter) queue to eth0, with a rate of 200 kbit/s, a latency of 50 ms, and a 1540-byte buffer.

# tc qdisc add dev eth0 root tbf rate 200kbit latency 50ms burst 1540

View the queue settings on the eth0 interface:

# ip link list
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc tbf qlen 1000
    link/ether 00:40:ca:66:3d:d2 brd ff:ff:ff:ff:ff:ff
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

# tc qdisc ls dev eth0    (view the queue rules on eth0)
qdisc tbf 8001: rate 200Kbit burst 1539b lat 48.8ms

At this point, another machine downloading files from this server gets only about 20 KB/s.
Delete this queue rule (the root qdisc can be removed without repeating its parameters):

# tc qdisc del dev eth0 root
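After deletion the interface falls back to its default pfifo_fast queue, which can be verified:

# tc qdisc ls dev eth0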

(2). Attach an SFQ (stochastic fairness queue) to eth0, with the hashing algorithm reconfigured every 10 seconds.

# tc qdisc add dev eth0 root sfq perturb 10

View the queue settings on the eth0 interface:

# ip link list
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc sfq qlen 1000
    link/ether 00:40:ca:66:3d:d2 brd ff:ff:ff:ff:ff:ff
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0

# tc qdisc ls dev eth0
qdisc sfq 8002: limit 128p quantum 1514b perturb 10sec

This time, the rate at which another machine downloads files from this server does not change, because SFQ only schedules packets; it does not control the rate.


This article is from the "Professor" blog, please be sure to keep this source http://professor.blog.51cto.com/996189/1571015
