Ingress traffic control in Linux using a virtual NIC


The Linux kernel implements a packet queuing mechanism that, combined with a variety of queuing disciplines, provides full-featured traffic control and traffic shaping (hereinafter, traffic control). Traffic control can be applied at two points: egress and ingress. Egress is the action point just before a packet is transmitted, while ingress is the action point just after a packet is received. Linux traffic control is asymmetric between these two points: there is no queuing mechanism at ingress, so you can hardly implement real throttling there.

Although iptables can be used to simulate throttling, if you want throttling backed by a real queue, you have to find another way. Perhaps it is like the core idea of e-mail: you can control sending perfectly, but you have no control over receiving. Once you absorb that idea, you can appreciate the difficulty of queue-based ingress traffic control. It is, however, still possible.

At ingress, Linux implements only a simple, queueless form of flow control. Setting aside the disadvantages of a queueless mechanism compared with a queued one, the position of the ingress hook alone shows how little control we have there: it sits before the packet enters the IP layer, so no IP-layer hook can be attached, Netfilter's PREROUTING cannot be brought to bear, not even IPMARK is visible, let alone any association with a socket. It is therefore hard to configure a meaningful queuing policy; all you can see is IP address and port information.
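For reference, that stock queueless mechanism is the ingress qdisc, which can only police (drop), never queue. A minimal sketch, assuming the physical interface is eth0 and a 1 mbit ceiling (both arbitrary choices for illustration):

```shell
# Attach the special ingress qdisc; it has no queue, only policing
tc qdisc add dev eth0 handle ffff: ingress
# Match every IP packet ("match u32 0 0" matches all) and drop
# whatever exceeds 1 mbit - excess traffic is discarded, not queued
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match u32 0 0 \
    police rate 1mbit burst 100k drop flowid :1
```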

A realistic requirement for ingress traffic control is throttling clients that upload data to a local service, for example large file uploads to a server. On one hand, pressure can be relieved at the lowest layer by discarding, in advance, data beyond what the CPU can process; on the other hand, the I/O of the user-mode service can be made smoother, depending on policy.

Since the demand exists, we must try to meet it. What we know so far is that throttling can only be done on egress, yet the data must not actually leave the machine. We also need rich policies, going well beyond what the 5-tuple of addresses, protocol, and ports can express. An obvious solution is a virtual network card, as shown in the figure below:

The schematic above is very simple, but the implementation has several subtleties, the most important being routing. We know that even with policy routing, every lookup unconditionally starts with the local table. If the target address is a local address and you want packets to follow the flow above, you must delete that address from the local table. Once it is deleted, however, the machine no longer answers ARP requests for it. Several workarounds are available:

1. Use static ARP entries or ebtables to rewrite ARP, or use arping to actively broadcast ARP announcements;

2. Use a non-local address, modify the virtual NIC's xmit function, and perform the DNAT inside it, bypassing the local table altogether.
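For workaround 1, the ARP side can be sketched as follows, assuming the serviced address is 192.168.1.100 reachable via eth0 (both hypothetical):

```shell
# Publish a static ARP entry so the box keeps answering ARP for the
# address even after it has been removed from the local routing table
arp -i eth0 -Ds 192.168.1.100 eth0 pub
# Alternatively, announce the address with a gratuitous ARP broadcast
arping -U -I eth0 192.168.1.100
```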

Details aside, once packets travel the conventional path you can do a great deal in PREROUTING, such as associating packets with their sockets via the socket match, or using IPMARK.
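As an illustration of what that conventional path makes possible, here is a hedged sketch; the mark value 0x1 and the class 1:10 on the ingress_tc device are arbitrary assumptions:

```shell
# In mangle/PREROUTING, match packets that belong to an existing local
# socket and tag them with a firewall mark
iptables -t mangle -A PREROUTING -m socket -j MARK --set-mark 0x1
# Let a tc "fw" filter on the virtual NIC classify on that mark,
# steering marked traffic into an (assumed) shaping class 1:10
tc filter add dev ingress_tc parent 1:0 protocol ip handle 0x1 fw flowid 1:10
```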

With the figure above as a guide, we can build something usable. The first step is the virtual network card itself, modeled on the loopback interface. Create the virtual NIC device for ingress traffic control:

dev = alloc_netdev(0, "ingress_tc", tc_setup);

Then initialize its key fields:

static const struct net_device_ops tc_ops = {
	.ndo_init	= tc_dev_init,
	.ndo_start_xmit	= tc_xmit,
};

static void tc_setup(struct net_device *dev)
{
	ether_setup(dev);
	dev->mtu		= (16 * 1024) + 20 + 20 + 12;
	dev->hard_header_len	= ETH_HLEN;		/* 14 */
	dev->addr_len		= ETH_ALEN;		/* 6 */
	dev->tx_queue_len	= 0;
	dev->type		= ARPHRD_LOOPBACK;	/* 0x0001 */
	dev->flags		= IFF_LOOPBACK;
	dev->priv_flags		&= ~IFF_XMIT_DST_RELEASE;
	dev->features		= NETIF_F_SG | NETIF_F_FRAGLIST
				| NETIF_F_TSO
				| NETIF_F_NO_CSUM
				| NETIF_F_HIGHDMA
				| NETIF_F_LLTX
				| NETIF_F_NETNS_LOCAL;
	dev->ethtool_ops	= &tc_ethtool_ops;
	dev->netdev_ops		= &tc_ops;
	dev->destructor		= tc_dev_free;
}

Then implement its xmit function:

static netdev_tx_t tc_xmit(struct sk_buff *skb,
			   struct net_device *dev)
{
	skb_orphan(skb);
	/* Skip layer 2 entirely */
	skb->protocol = eth_type_trans(skb, dev);
	skb_reset_network_header(skb);
	skb_reset_transport_header(skb);
	skb->mac_len = skb->network_header - skb->mac_header;
	/* Deliver locally */
	ip_local_deliver(skb);

	return NETDEV_TX_OK;
}

Next, consider how to steer packets into the virtual NIC. There are three options:

Option 1: If you do not want to deal with ARP at all, you have to modify the kernel. Here I introduced a routing flag, RT_F_INGRESS_TC: every route carrying this flag steers its packets into the virtual NIC we built. To keep things policy-driven, I did not hard-code this in the lookup; instead I changed the search order for RT_F_INGRESS_TC routes, consulting the policy routing tables first and the local table second. That way, an ordinary policy route can direct packets into the virtual NIC.

Option 2: Register a Netfilter hook whose target NF_QUEUEs the packets to be throttled; in the queue handler, set skb->dev to the virtual NIC and call dev_queue_xmit(skb). The virtual NIC then no longer matches the figure above; the new scheme is simpler, since you only need to reinject the packet from the virtual NIC's hard_xmit. (As a matter of fact, I later learned that IMQ is implemented in exactly this way. Fortunately, the effort was not wasted.)

Option 3: A quick-test scheme, and my original idea: delete the target IP address from the local table and then run arping manually. My tests are based on this option, with good results.
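One way the quick test in option 3 could be wired up is sketched below. The address 192.168.1.100, the interface eth0, and the policy table number 100 are all assumptions for illustration; the exact route layout on a real box may differ:

```shell
# Remove the address from the local table so lookups fall through
ip route del table local local 192.168.1.100 dev eth0
# Send the address's traffic to the virtual NIC via a policy table
ip rule add to 192.168.1.100 lookup 100
ip route add 192.168.1.100 dev ingress_tc table 100
# The box no longer answers ARP for the address, so announce it by hand
arping -U -I eth0 192.168.1.100
```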

However the options above vary, the effect is the same: since throttling cannot be done at the NIC's ingress, it is done on an egress instead, not the physical NIC's but that of a virtual NIC, which can be customized to meet any need. This shows how powerful virtual network cards are. tun, lo, nvi, tc... all of them, and all of their subtlety lives in their respective xmit functions.
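With received packets now flowing out through the virtual NIC, ordinary egress queuing disciplines apply to what is, from the host's point of view, inbound traffic. A minimal sketch; the HTB discipline and the 10 mbit rate are arbitrary choices, not part of the original setup:

```shell
# A real queueing discipline on the virtual NIC's egress shapes
# the host's inbound traffic - exactly what plain ingress cannot do
tc qdisc add dev ingress_tc root handle 1: htb default 10
tc class add dev ingress_tc parent 1: classid 1:10 htb rate 10mbit ceil 10mbit
```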

