The ifb principle of Linux TC and ingress traffic control
First, let us paste the comment from the ifb.c file header in the Linux kernel:
The purpose of this driver is to provide a device that allows for sharing of resources:

1) qdiscs/policies that are per device as opposed to system wide. ifb allows for a device which can be redirected to, thus providing an impartial view of the traffic and allowing for queueing incoming traffic for shaping instead of dropping.

2) Allows for queueing incoming traffic for shaping, which is otherwise not really possible in an SMP world.

Authors: Jamal Hadi Salim (2005)
Like tun, ifb is a virtual NIC, and like tun, it does its work at the boundary where packets enter and leave the device. For tun, a packet handed to the driver's xmit routine is delivered to a character device, while a packet written into that character device is injected back into the kernel as if it had been received (rx) on the tun NIC. For ifb the situation is similar.
The ifb driver simulates a virtual NIC that can be thought of as a NIC that only does TC filtering and queueing. It is "only" that because it does not change the direction of a packet: an egress packet redirected to ifb, after passing through ifb's TC rules, is still transmitted by the NIC from which it was redirected; likewise, an ingress packet redirected to ifb, after passing through ifb's TC rules, continues to be received by the original NIC. In other words, whether the traffic comes from a NIC's egress or its ingress, once it is redirected to ifb it must go through a dev_queue_xmit operation on the ifb virtual NIC. With that said, the picture looks like the following figure:
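To make that redirect-and-return flow concrete, here is a minimal sketch of the common ingress-shaping setup with tc; the interface name eth0, the device ifb0, and the rates are illustrative assumptions, not values from this article:

# Load the ifb driver and bring up one ifb device
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Attach the ingress qdisc to eth0 and redirect its incoming IP traffic to ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# Redirected packets are transmitted through ifb0 (dev_queue_xmit on ifb0),
# so a normal egress qdisc on ifb0 effectively shapes eth0's incoming traffic
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 100mbit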
Linux TC at the ingress point is only a classification and policing framework, not a queueing one, but that is a consequence of where the ingress hook sits, not a limit of TC itself. In fact, you could implement a queueing mechanism at the ingress point yourself; saying that TC "only polices" at ingress is merely because the current Linux TC implementation does not implement an ingress queue.
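For reference, this is all the stock ingress hook offers today: classification plus policing, with no queue, so non-conforming packets are simply dropped rather than delayed. A hedged sketch (interface name and rate are illustrative assumptions):

tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 1mbit burst 100k drop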
Besides queueing at the ingress, letting multiple NICs share one root qdisc is another intent of the ifb implementation, as can be seen from the file header comment. If you have 10 NICs and want to apply the same traffic-control policy on all of them, do you really need to configure it 10 times? Factor out the common part: create one ifb virtual NIC and redirect the traffic of all 10 NICs to it; then you only need to configure a qdisc once, on that virtual NIC.
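A minimal sketch of that sharing idea, assuming just two NICs eth0 and eth1 and the ifb0 device brought up as above (names and rates are illustrative):

# One qdisc tree, configured once, on the shared ifb device
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 50mbit

# Per physical NIC, only a small redirect rule is needed instead of
# duplicating the whole qdisc/class/filter tree on every device
for dev in eth0 eth1; do
    tc qdisc add dev $dev root handle 1: prio
    tc filter add dev $dev parent 1: protocol ip u32 match u32 0 0 \
        action mirred egress redirect dev ifb0
done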
One might worry that redirecting traffic from multiple NICs to a single ifb NIC causes performance problems: doesn't it force packets that were being processed by different CPUs, in the separate queues of different NICs, to be funneled into one queue of the ifb virtual NIC and handled by a single CPU? In fact, this worry is unnecessary.