Linux High-Speed Network Card Driver Design Essay

The networking system is one of the most complex and powerful subsystems in Linux. This article does not try to cover the whole orthodox Linux network stack; it focuses only on the link layer, that is, the network card (NIC) driver.

In the TCP/IP protocol family, each layer has its own function and is designed for different protocols and application scenarios. L2 is responsible for putting the network layer's packets onto the physical medium, and the NIC driver is the main component of L2.

The Linux kernel itself provides a standard framework (NetOps) for developing network device drivers, and it already integrates drivers from most mainstream NIC vendors. Drivers built on this standard framework fit the design of the Linux network subsystem well, but in some application scenarios they fall short. The core problem is performance.

In the author's industry there are many data-plane / forwarding-plane designs. In such systems the demands on the protocol stack are modest: a UDP + IP combination is basically enough, or even something simpler. The demands on NIC forwarding performance, however, are very high; line rate is required. Today 10G NICs are already standard in the industry, and 40G or even 100G is not rare. Long-term test data show that traditionally kernel-driven network devices have difficulty meeting these performance requirements. Let us look at the bottlenecks of the kernel-mode driver; rather than a lengthy theoretical analysis, the conclusions are stated directly below (a minimal sketch of the traditional receive path follows the list).

1) Memory copy. There is one memory copy from the kernel-space buffer (mbuf) to user space.

2) Dynamic buffer management. Memory buffers are dynamically allocated and freed.

3) Context-switch latency when switching between kernel mode and user mode.

4) Interrupt-driven packet reception, which further increases latency.

5) Large and complex kernel protocol stack processing.
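
To make these costs concrete, here is a minimal sketch (not from the original article) of the traditional kernel-stack receive path over a UDP socket: every packet costs one recvfrom() system call, i.e. one kernel/user context switch, plus one copy from the kernel socket buffer into the user buffer. The port number is hypothetical and error handling is trimmed.

/* Sketch only: traditional per-packet receive through the kernel stack. */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* the kernel stack owns this socket */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);               /* hypothetical port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[2048];
    for (;;) {
        /* One system call per packet (user/kernel switch) and one copy
         * from the kernel socket buffer into 'buf'. */
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0)
            break;
        /* ... process the packet ... */
    }
    close(fd);
    return 0;
}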


To address these problems, the author summarizes two categories of solutions and sketches the design outline of each.

The first category is the user-space (user-state) driver. Its design outline is as follows:

1) Map the peripheral's I/O space into user space so that a user-space program can access the device directly. This avoids the kernel protocol stack entirely, as well as the memory copies between kernel space and user space.

2) Replace interrupt-driven reception with polling. Alternatively, a hybrid interrupt-plus-polling scheme can be used to strike a balance between performance and CPU utilization.

3) Pre-allocate buffers. A buffer pool is maintained in user space and used to refill the hardware. If the hardware can release buffers back by itself, this part of the design becomes even simpler.

4) Bind threads to specific cores to avoid the latency and overhead of thread scheduling. A minimal sketch combining these ideas is given after this list.
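
The following is a minimal, hedged sketch of what such a user-space driver skeleton can look like. It assumes a UIO-style device exposed as /dev/uio0 whose first mapping region contains the NIC registers; the map size, the RX_READY_REG offset, the core number and the commented-out rx_process() helper are purely hypothetical and only illustrate the structure: map the device, pin the thread, and busy-poll instead of waiting for interrupts.

/* Sketch only: user-space driver skeleton on top of the UIO interface.
 * /dev/uio0, REG_MAP_SIZE, RX_READY_REG and rx_process() are assumptions. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define REG_MAP_SIZE  0x10000      /* size of the register region (assumed) */
#define RX_READY_REG  0x0040       /* "packet ready" register offset (assumed) */

int main(void)
{
    /* Pin this thread to core 2 to avoid scheduler-induced jitter. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);
    sched_setaffinity(0, sizeof(set), &set);

    /* Map the device registers straight into user space (UIO map 0). */
    int fd = open("/dev/uio0", O_RDWR);
    volatile uint8_t *regs = mmap(NULL, REG_MAP_SIZE,
                                  PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (fd < 0 || regs == MAP_FAILED) {
        perror("uio map");
        return 1;
    }

    /* Busy-poll the device instead of taking interrupts. */
    for (;;) {
        uint32_t ready = *(volatile uint32_t *)(regs + RX_READY_REG);
        if (!ready)
            continue;              /* nothing yet, keep polling */
        /* rx_process(regs); -- would walk the RX ring and refill it
         * from a pre-allocated user-space buffer pool (not shown). */
    }

    munmap((void *)regs, REG_MAP_SIZE);
    close(fd);
    return 0;
}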

A user-space driver improves performance considerably over a kernel-space driver and can basically meet the requirements of 10G or even higher rates. But such schemes have some drawbacks:

1) User space must implement at least a simple protocol stack, which increases the workload to some extent.

2) System robustness and security are reduced. User programs can access and control the peripheral directly, so the cost of intruding into the system is low, which makes this approach unsuitable for some open application scenarios.


The second category is an optimized rework of the kernel driver; typical examples are the pf_ring and Netmap schemes. The details of these two open source projects are not expanded here; only the design ideas are summarized, taking Netmap as an example (a minimal RX sketch follows the list):

1) Statically reserve buffers to avoid dynamic allocation and freeing of buffer memory.

2) Poll-style reception: user space initiates the query, and the kernel cooperates to complete it.

3) Batch processing: one query handles multiple packets, so the system call cost is amortized across the batch. For example, if a system call costs 100 µs and 10 packets are retrieved per call, the amortized cost per packet is only 10 µs.

4) Share memory between user space and kernel space and use it as packet buffers. Then only a pointer or offset needs to be passed between kernel space and user space, with no memory copy.
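
As an illustration (not taken from the original article), here is a minimal receive loop written against the netmap_user.h convenience API; the interface name is an assumption and details may differ between Netmap versions, so treat it as a sketch. The rings and buffers live in memory shared with the kernel, a single poll() triggers a batched kernel-side sync, and nm_nextpkt() walks the received packets without copying them.

/* Sketch only: Netmap-style receive loop using the netmap_user.h helpers.
 * "netmap:eth0" is an assumed interface; check your Netmap version's API. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    /* Put eth0 into netmap mode; rings and buffers are in shared memory. */
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL) {
        perror("nm_open");
        return 1;
    }

    struct pollfd pfd = { .fd = d->fd, .events = POLLIN };
    for (;;) {
        /* One poll() system call syncs the RX rings and may deliver
         * many packets, amortizing the syscall cost. */
        poll(&pfd, 1, -1);

        struct nm_pkthdr hdr;
        unsigned char *buf;
        /* Walk every packet made available by this single call.
         * 'buf' points into the shared buffer pool: no copy is made. */
        while ((buf = nm_nextpkt(d, &hdr)) != NULL) {
            /* ... process hdr.len bytes at buf ... */
        }
    }

    nm_close(d);
    return 0;
}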

The thread-binding technique used in the user-space driver should be applied here as well. The advantages of such schemes over user-space drivers are:

1) User space cannot access the peripheral directly, which improves security.

2) Performance is much higher than a traditional kernel driver and only slightly lower than a user-space driver, and it can still meet wire-speed requirements.

3) The kernel protocol stack can sometimes be borrowed, so one can switch more flexibly between performance and functional completeness.

The implementation of Netmap deserves a dedicated write-up another time; it is not expanded on here.
