QoS using IFB and ingress qdisc

IFB, the Intermediate Functional Block device, is the successor to the IMQ iptables module that was never integrated. Its advantages over the current IMQ: it is cleaner, in particular in SMP, with a lot less code. The old dummy device functionality is preserved, while the new functionality only kicks in if you use actions.

To use an IFB, you must have IFB support in your kernel (configuration option CONFIG_IFB). Assuming you have a modular kernel, the module is named 'ifb' and may be loaded with modprobe ifb (if you have modprobe installed) or insmod /path/to/module/ifb. By default, two IFB devices (ifb0 and ifb1) are created when the module loads; bring them up with:

ip link set ifb0 up
ip link set ifb1 up
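If two devices are not enough, the module's numifbs parameter sets how many are created at load time. A minimal sketch (numifbs is a standard ifb module parameter; the count of 4 is just an example):

# Create four IFB devices (ifb0..ifb3) instead of the default two
modprobe ifb numifbs=4
ip link set ifb2 up
ip link set ifb3 up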

IFB usage

As far as I know, the reasons listed below are why people use IMQ. It would be nice to know of anything else that I missed.

  • Qdiscs/policies that are per device as opposed to system wide. IMQ allows for sharing.
  • Allows for queueing incoming traffic for shaping instead of dropping. I am not aware of any study that shows policing is worse than shaping in achieving the end goal of rate control. I would be interested if anyone is experimenting. (Re shaping vs policing: the desire for shaping comes more from the need to have complex rules, as with HTB.)
  • A very interesting use: if you are serving P2P, you may want to give preference to your own locally originated traffic (when responses come back) versus someone using your system to do BitTorrent, so QoS based on state comes in as the solution. What people did to achieve this was to stick the IMQ somewhere at a pre-local hook. I think this is a pretty neat feature to have in Linux in general (i.e. not just for IMQ).

But I won't go back to putting netfilter hooks in the device to satisfy this. I also don't think it's worth hacking IFB some more to make it aware of, say, L3 info and play ip rule tricks to achieve this.

Instead, the plan is to have a conntrack-related action. This action will selectively query or create conntrack state on incoming packets. Packets could then be redirected to IFB based on what happens; e.g. on incoming packets, if we find they have known state we could send them to a different queue than one which didn't have existing state. All of this, however, depends on whatever rules the admin enters.

At the moment this functionality does not exist yet. I have decided that instead of sitting on the patch I will release it, and then if there's pressure I will add this feature.

What you can do with IFB currently with actions


Let's say you are policing packets from alias 192.168.200.200/32. You don't want those to exceed 100 kbps going out.

tc filter add dev eth0 parent 1: protocol ip prio 10 u32 match ip src 192.168.200.200/32 flowid 1:2 action police rate 100kbit burst 90k drop

If you run tcpdump on eth0, you will see all packets going out with src 192.168.200.200/32, dropped or not. Extend the rule a little to see only the ones that made it out:

tc filter add dev eth0 parent 1: protocol ip prio 10 u32 match ip src 192.168.200.200/32 flowid 1:2 action police rate 10kbit burst 90k drop action mirred egress mirror dev ifb0

Now fire up tcpdump on ifb0 to see only those packets:

tcpdump -n -i ifb0 -x -e -t

Essentially a good debugging/logging interface.

If you replace mirror with redirect, those packets will be blackholed and will never make it out. This redirect behavior changes with the new patch (but not the mirror behavior).
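For concreteness, the redirect variant of the rule above would look like this (a sketch; as noted above, whether the redirected packets are blackholed or sent back out depends on the patch level):

# Same policer as before, but conforming packets are stolen and
# handed to ifb0 instead of being mirrored
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
   match ip src 192.168.200.200/32 flowid 1:2 \
   action police rate 10kbit burst 90k drop \
   action mirred egress redirect dev ifb0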

IFB example

Note: readers have found this page to be unhelpful in terms of expressing how IFB is useful and how it should be used.

These examples are taken from a posting by Jamal at http://www.mail-archive.com/netdev@vger.kernel.org/msg04900.html

What this script will demonstrate is the following sequence:

1) Any packet going out on eth0 to 10.0.0.229 is classified as class 1:10 and redirected to ifb0.
2) a) On reaching ifb0, the packet is classified as class 1:2.
   b) It is subjected to token bucket shaping at a rate of 20 kbit/s.
   c) It is sent back to eth0.
3) On coming back to eth0, the classification 1:10 is still valid, and the packet is put through an HTB class which limits the rate to 256 kbit/s.

export TC="/sbin/tc"

$TC qdisc del dev ifb0 root handle 1: prio
$TC qdisc add dev ifb0 root handle 1: prio
$TC qdisc add dev ifb0 parent 1:1 handle 10: sfq
$TC qdisc add dev ifb0 parent 1:2 handle 20: tbf \
   rate 20kbit buffer 1600 limit 3000
$TC qdisc add dev ifb0 parent 1:3 handle 30: sfq
$TC filter add dev ifb0 parent 1: protocol ip prio 1 u32 \
   match ip dst 11.0.0.0/24 flowid 1:1
$TC filter add dev ifb0 parent 1: protocol ip prio 2 u32 \
   match ip dst 10.0.0.0/24 flowid 1:2

ifconfig ifb0 up

$TC qdisc del dev eth0 root handle 1: htb default 2
$TC qdisc add dev eth0 root handle 1: htb default 2
$TC class add dev eth0 parent 1: classid 1:1 htb rate 800kbit
$TC class add dev eth0 parent 1: classid 1:2 htb rate 800kbit
$TC class add dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 384kbit
$TC class add dev eth0 parent 1:1 classid 1:20 htb rate 512kbit ceil 648kbit
$TC filter add dev eth0 parent 1: protocol ip prio 1 u32 \
   match ip dst 10.0.0.229/32 flowid 1:10 \
   action mirred egress redirect dev ifb0

A little test (be careful if you are ssh'ed in and are classifying on that IP; the counters may not be easy to follow):
-----

A ping...

mambo:~# ping -c 2 10.0.0.229

// First, look at ifb0.
// Observe that the second filter was hit twice, both times successfully.

mambo:~# $TC -s filter show dev ifb0 parent 1:
filter protocol ip pref 1 u32
filter protocol ip pref 1 u32 fh 800: ht divisor 1
filter protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:1  (rule hit 2 success 0)
  match 0b000000/ffffff00 at 16 (success 0)
filter protocol ip pref 2 u32
filter protocol ip pref 2 u32 fh 801: ht divisor 1
filter protocol ip pref 2 u32 fh 801::800 order 2048 key ht 801 bkt 0 flowid 1:2  (rule hit 2 success 2)
  match 0a000000/ffffff00 at 16 (success 2)

// Next, the qdisc numbers.
// Observe that tbf 20: has seen the 2 packets.

mambo:~# $TC -s qdisc show dev ifb0
qdisc prio 1: bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 10: parent 1:1 limit 128p quantum 1514b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc tbf 20: parent 1:2 rate 20000bit burst 1599b lat 546.9ms
 Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 30: parent 1:3 limit 128p quantum 1514b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0

// Next, look at eth0.
// Observe class 1:10, which is where the pings went through after
// they came back from the ifb0 device.

mambo:~# $TC -s class show dev eth0
class htb 1:1 root rate 800000bit ceil 800000bit burst 1699b cburst 1699b
 Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 16425 ctokens: 16425

class htb 1:10 parent 1:1 prio 0 rate 256000bit ceil 384000bit burst 1631b cburst 1647b
 Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 2 borrowed: 0 giants: 0
 tokens: 49152 ctokens: 33110

class htb 1:2 root prio 0 rate 800000bit ceil 800000bit burst 1699b cburst 1699b
 Sent 47714 bytes 321 pkt (dropped 0, overlimits 0 requeues 0)
 rate 3920bit 3pps backlog 0b 0p requeues 0
 lended: 321 borrowed: 0 giants: 0
 tokens: 16262 ctokens: 16262

class htb 1:20 parent 1:1 prio 0 rate 512000bit ceil 648000bit burst 1663b cburst 1680b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 rate 0bit 0pps backlog 0b 0p requeues 0
 lended: 0 borrowed: 0 giants: 0
 tokens: 26624 ctokens: 21251

-----
mambo:~# $TC -s filter show dev eth0 parent 1:
filter protocol ip pref 1 u32
filter protocol ip pref 1 u32 fh 800: ht divisor 1
filter protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:10  (rule hit 235 success 4)
  match 0a0000e5/ffffffff at 16 (success 4)
  action order 1: mirred (Egress Redirect to device ifb0) stolen
  index 2 ref 1 bind 1 installed 114 sec used 100 sec
  Action statistics:
  Sent 196 bytes 2 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
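To undo the whole example afterwards, deleting the root qdiscs is enough, since the classes and filters go with them (a sketch; this teardown is not part of the original posting):

$TC qdisc del dev eth0 root
$TC qdisc del dev ifb0 root
ifconfig ifb0 down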

IFB requirements

In order to use IFB you need:

  • IFB support in the kernel (2.6.20 works OK).
    Menu option: Device Drivers -> Network device support -> Intermediate Functional Block support.
    Module name: ifb.
  • tc from iproute2 with support for "actions" (2.6.20-20070313 works OK; the package from Debian etch is outdated). You can download it from: http://developer.osdl.org/dev/iproute2/download/
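A quick way to check both requirements on a running system (a sketch; the kernel config path varies by distribution):

# Is IFB built in (=y) or available as a module (=m)?
grep CONFIG_IFB "/boot/config-$(uname -r)"
# Does the installed tc understand actions? This lists loaded mirred
# actions and fails if iproute2 was built without action support.
tc actions ls action mirred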

Ingress qdisc


All qdiscs discussed so far are egress qdiscs. Each interface, however, can also have an ingress qdisc, which is not used to send packets out to the network adaptor. Instead, it allows you to apply tc filters to packets coming in over the interface, regardless of whether they have a local destination or are to be forwarded.

As the tc filters contain a full token bucket filter implementation, and are also able to match on the kernel flow estimator, there is a lot of functionality available. This effectively allows you to police incoming traffic before it even enters the IP stack.

Parameters & usage

The ingress qdisc itself does not require any parameters. It differs from other qdiscs in that it does not occupy the root of a device. Attach it like this:

# Delete original
tc qdisc del dev eth0 ingress
tc qdisc del dev eth0 root
# Add new qdisc and filter
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
   match ip src 0.0.0.0/0 police rate 2048kbps burst 1m drop flowid :1
tc qdisc add dev eth0 root tbf rate 2048kbps latency 50ms burst 1m


I played a bit with the ingress qdisc after seeing Patrick and Stef talking about it, and came up with a few notes and a few questions.

: The ingress qdisc itself has no parameters. The only thing you can do
: is use the schedulers. I have a link with a patch to extend this:
: http://www.cyberus.ca/~hadi/patches/action/ Maybe this can help.
:
: I have some more info about ingress in my mail files, but I have to
: sort it out and put it somewhere on docum.org. But I still haven't
: found the time to do so.

Regarding schedulers and the ingress qdisc: I have never used them before today, but I have the following understanding.

About the ingress qdisc:

- The ingress qdisc (known as "ffff:") can't have any child classes (hence the existence of IMQ).
- The only thing you can do with the ingress qdisc is attach filters.

About filtering on the ingress qdisc:

- Since there are no classes to which to direct the packets, the only reasonable option (reasonable, indeed!) is to drop the packets.
- With clever use of filtering, you can limit particular traffic signatures to particular uses of your bandwidth; the sketch below shows one way.
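For example, a minimal sketch that polices one such traffic signature, inbound HTTP, on the ingress qdisc attached above (the port and rate here are illustrative assumptions, not from the original notes):

# Police inbound TCP traffic with source port 80 to 1mbit;
# non-conforming packets are dropped before they enter the IP stack
tc filter add dev eth0 parent ffff: protocol ip prio 20 u32 \
   match ip protocol 6 0xff \
   match ip sport 80 0xffff \
   police rate 1mbit burst 100k drop flowid :20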

QoS using IFB and ingress qdisc

Add some qdisc/class/filter to eth0/ifb0/ifb1:

tc qdisc add dev eth0 ingress 2>/dev/null

# Ingress filter: catch all inbound traffic and redirect it through ifb0
tc filter add dev eth0 parent ffff: protocol ip prio 10 u32 \
   match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0
# Egress filter: catch all outbound traffic and redirect it through ifb1
tc filter add dev eth0 parent 1: protocol ip prio 10 u32 \
   match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb1
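The redirect filters above only move packets; any actual shaping still has to be configured on the IFB devices themselves. A minimal sketch of what that might look like (the HTB rates and class IDs are illustrative assumptions, not from the original article):

# Bring the IFB devices up
ip link set ifb0 up
ip link set ifb1 up
# ifb0 shapes inbound traffic redirected from eth0's ingress
tc qdisc add dev ifb0 root handle 1: htb default 1
tc class add dev ifb0 parent 1: classid 1:1 htb rate 2mbit
# ifb1 shapes outbound traffic redirected from eth0's egress
tc qdisc add dev ifb1 root handle 1: htb default 1
tc class add dev ifb1 parent 1: classid 1:1 htb rate 512kbit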
