The only way: slab and the buddy system in multi-core Linux kernel path optimization


As the first article in this series, let me describe the slab system first. I have discussed this topic with colleagues and friends in recent days and feel it is fairly representative, so it makes a good opener. In fact, by operating-system theory, process management should matter more; by my own interests, I/O management and the TCP/IP stack carry more weight. I will cover all of these in due course.

The slab allocator of the Linux kernel comes from a very simple idea: prepare, in advance, some of the data structures that are frequently allocated and freed. However, the standard slab implementation is too complex and its maintenance cost huge, so the smaller slub was split off from it. This article therefore discusses slub, and every later mention of slab refers to slub. In addition, because this article is about kernel optimization rather than an introduction to basic principles, readers who want the details of slab and its implementation should look them up or read the source.
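To make the idea concrete, here is a minimal sketch of how a kernel module typically uses the slab interface. kmem_cache_create / kmem_cache_alloc / kmem_cache_free are the real kernel API; struct foo and the surrounding names are hypothetical examples:

#include <linux/slab.h>
#include <linux/init.h>
#include <linux/errno.h>

/* A frequently allocated structure; "foo" is a made-up example. */
struct foo {
        int id;
        char payload[56];
};

static struct kmem_cache *foo_cache;

static int __init foo_cache_init(void)
{
        /* Create a dedicated cache of struct foo objects. */
        foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
                                      0, SLAB_HWCACHE_ALIGN, NULL);
        return foo_cache ? 0 : -ENOMEM;
}

static void foo_demo(void)
{
        /* Usually just pops an object off a prebuilt freelist. */
        struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);

        if (f)
                kmem_cache_free(foo_cache, f);  /* pushes it back */
}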

Simple slab on a single CPU

The sequence of operations when the slab on a single CPU allocates and frees objects is shown below:


[Figure Slab1.jpg: allocation and free sequence of the slab on a single CPU (http://s3.51cto.com/wyfs02/M02/73/41/wKiom1X4iTnjVDKEAAIPTmVYh4I132.jpg)]


As you can see, it is very simple and fully achieves slab's original design goal.
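The behavior in the figure can be modeled in a few lines. Below is a user-space toy of my own (not kernel code): one preallocated "page" carved into fixed-size objects chained into a freelist, so both allocation and free are O(1) pointer operations:

#include <stddef.h>

#define OBJ_SIZE  64
#define NR_OBJS   64

static void *freelist;                  /* head of the free-object list */
static char  page[OBJ_SIZE * NR_OBJS]; /* the backing "page(s)" */

static void slab_init(void)
{
        for (int i = 0; i < NR_OBJS; i++) {
                void *obj = page + i * OBJ_SIZE;
                *(void **)obj = freelist;   /* link via the first word */
                freelist = obj;
        }
}

static void *slab_alloc(void)
{
        void *obj = freelist;
        if (obj)
                freelist = *(void **)obj;   /* pop the head */
        return obj;
}

static void slab_free(void *obj)
{
        *(void **)obj = freelist;           /* push back as new head */
        freelist = obj;
}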

Scaling to multi-core CPUs

Now let us naively extend the above model to a multi-core CPU; the analogous allocation sequence is shown below:


[Figure Slab2.jpg: multiple CPUs contending for a single locked slab, from T0 to T4 (http://s3.51cto.com/wyfs02/M00/73/3E/wKioL1X4i4Xh20s0AAHyN6dlfPY532.jpg)]


We can see that with only a single slab, if multiple CPUs allocate objects at the same time, conflict is unavoidable, and the only way to resolve the conflict is to lock the queue. But that greatly increases latency: the whole request for a single object starts at T0 and does not finish until T4, which is far too long.
The direct route to lock-free parallel operation on multiple CPUs is to replicate an identical set of data structures to each CPU.
The only way forward is to introduce "per-CPU variables". For slab, the model can be expanded to look like this:


[Figure Slab3.jpg: per-CPU slab caches (http://s3.51cto.com/wyfs02/M00/73/41/wKiom1X4iV2xRF9WAAGuLWLD-H4129.jpg)]
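In kernel terms this replication is done with per-CPU variables. The sketch below is illustrative (struct obj_cache is made up; DEFINE_PER_CPU, this_cpu_ptr and the preempt guards are the real API) and shows why the fast path needs no lock: each CPU touches only its own copy, and disabling preemption keeps the task from migrating mid-operation:

#include <linux/percpu.h>
#include <linux/preempt.h>

struct obj_cache {
        void *freelist;         /* this CPU's private free objects */
};

static DEFINE_PER_CPU(struct obj_cache, obj_caches);

static void *obj_alloc(void)
{
        struct obj_cache *c;
        void *obj;

        preempt_disable();              /* stay on this CPU */
        c = this_cpu_ptr(&obj_caches);
        obj = c->freelist;
        if (obj)
                c->freelist = *(void **)obj;   /* pop, no lock needed */
        preempt_enable();
        return obj;
}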


If you think this is the end of the story, then this article would be pointless.

Problems

First, let's look at a simple problem: what if one CPU's slab cache has no objects left to allocate while the other CPUs' slab caches still hold large numbers of free objects, as shown below:


[Figure Slab4.jpg: one CPU's slab cache exhausted while other CPUs hold many free objects (http://s3.51cto.com/wyfs02/M01/73/3E/wKioL1X4i6OgORu8AAKEp6DrUEI705.jpg)]


This can happen because the demand on an individual slab is closely tied to the processes/threads executing on that CPU. For example, if CPU0 only handles networking, it will have a large demand for data structures such as skb. As for the question itself: if we choose to allocate a new page (or pages, depending on the object size and the order of the slab cache) from the buddy system, then over time slabs become unevenly distributed between CPUs and are likely to consume much more physical memory, which is not what we want.
Before proceeding, it is important to be clear that we need to balance slabs between CPUs, and this must be done by slab's own internal mechanisms. It is completely different from load balancing processes between CPUs: for processes, the kernel holds the core scheduling levers, such as time slices or the advance rate of a virtual clock, whereas for slab everything is up to the user. As long as an object is still in use, the user cannot be deprived of the right to keep using it, unless the user frees it voluntarily. Therefore, slab load balancing must be designed to be cooperative rather than preemptive.
All right. Now we know that grabbing a new page from the buddy system is not a good idea; it should be the final decision, and before executing it we should first try another route.
Now let's raise a second problem, as shown below:


[Figure Slab5.jpg: an object allocated on one CPU but freed on another (http://s3.51cto.com/wyfs02/M01/73/41/wKiom1X4iYCTbG71AAFH3u_4cI0689.jpg)]


No one can guarantee that the CPU that allocates a slab object and the CPU that frees it are the same CPU; no one can guarantee that a CPU does not allocate new page(s) during the lifetime of a slab object; and the complex operations in between follow no fixed pattern. How should these problems be solved? In fact, once you understand how they are solved, you have thoroughly understood the slab framework.

Problem resolution: the tiered slab cache

A smooth, continuously variable transition, rather than an abrupt jump, is what everyone longs for.
If one CPU's slab cache is exhausted, reaching directly into a sibling CPU's slab cache is considered a reckless and immoral practice. So why not set up another level of slab cache? Getting an object from it is not as simple and direct as from the CPU's own slab cache, but it is not much harder, just slightly more expensive. Isn't that exactly the scenario the CPU's L1, L2 and L3 caches were designed for? This has in fact become the only way to design a cache hierarchy. The same idea works for slab, and it is exactly the slub implementation of the Linux kernel.

Now I can give a few concepts and their definitions.


Linux kernel slab cache: an object cache model divided into three levels (a simplified struct sketch follows the definitions below).
Level 1 slab cache: a list of free objects, one per CPU; objects are allocated from and freed to it without any locking.
Level 2 slab cache: a list of free objects on the page (or pages) shared per CPU; allocating or freeing an object here only requires locking that page(s). Its list and the Level 1 lists are mutually exclusive: an object sits on one or the other, never both.
Level 3 slab cache: a list of page(s) shared by all CPUs of a NUMA node, managed in units of page(s). A page(s) obtained here is promoted into the requesting CPU's Level 1 slab cache, and the same page(s) lives on as a Level 2 shared page(s).
Shared page(s): a page(s) occupied by one or more CPUs. Each of those CPUs may hold its own free-object list on this page(s), and these lists do not interfere with one another; the page(s) also has a unique Level 2 slab-cache free list, which does not conflict with any of the Level 1 lists. Multiple CPUs must contend for this Level 2 list, and the CPU that wins it can promote it to its own Level 1 slab cache.
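Here is the promised sketch of the three levels in C. It is modeled on slub's real structures (kmem_cache_cpu, the freelist kept in struct page, and kmem_cache_node), but the fields are trimmed and renamed for illustration:

#include <linux/spinlock.h>
#include <linux/list.h>

struct page;                            /* the kernel's page descriptor */

struct level1_cache {                   /* per CPU, lock-free fast path */
        void **freelist;                /* this CPU's private free objects */
        struct page *page;              /* the page(s) it currently owns */
};

/* Level 2 lives inside struct page itself: every page(s) carries its
 * own free-object list, taken under the page's lock when CPUs contend. */

struct level3_cache {                   /* per NUMA node, shared by CPUs */
        spinlock_t list_lock;           /* protects the partial list */
        unsigned long nr_partial;
        struct list_head partial;       /* partially used page(s) */
};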


The slab cache is illustrated as follows:


[Figure Slab6.jpg: structure of the three-level slab cache (http://s3.51cto.com/wyfs02/M02/73/3E/wKioL1X4i9KjzA5bAASCD4kRx00881.jpg)]


Its behavior is as follows:


[Figure Slab7.jpg: behavior of the three-level slab cache (http://s3.51cto.com/wyfs02/M00/73/3E/wKioL1X4i-Szr5yXAAemBqbtA58768.jpg)]


Two scenarios

For the general object allocation process, the details are shown below:


[Figure Slab8.jpg: the general object allocation flow (http://s3.51cto.com/wyfs02/M02/73/41/wKiom1X4ib-SWvJjAAf4weYxyEU261.jpg)]
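In code form, the flow in the figure reads roughly as follows. This is simplified slub-style logic written for this article, not the verbatim kernel code; the four helpers are hypothetical stand-ins for the real internals:

struct kmem_cache;

/* Hypothetical helpers standing in for slub internals. */
extern void *pop_cpu_freelist(struct kmem_cache *s);         /* Level 1 */
extern void *pop_shared_page_freelist(struct kmem_cache *s); /* Level 2 */
extern void *pop_node_partial(struct kmem_cache *s);         /* Level 3 */
extern void *alloc_from_buddy(struct kmem_cache *s);         /* last resort */

static void *slab_alloc_sketch(struct kmem_cache *s)
{
        void *obj;

        /* Level 1: this CPU's own freelist, no lock at all. */
        if ((obj = pop_cpu_freelist(s)))
                return obj;

        /* Level 2: the shared page(s) freelist, page lock only;
         * winning it promotes the page(s) to this CPU's Level 1. */
        if ((obj = pop_shared_page_freelist(s)))
                return obj;

        /* Level 3: a partial page(s) from this NUMA node, node lock. */
        if ((obj = pop_node_partial(s)))
                return obj;

        /* Everything is empty: fall through to the buddy system. */
        return alloc_from_buddy(s);
}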


In fact, for multiple CPUs sharing one page(s), there is another way to play it, as shown below:


[Figure Slab9.jpg: multiple CPUs sharing one page(s) (http://s3.51cto.com/wyfs02/M00/73/3E/wKioL1X4jAaD6zkOAATtV6QRfW0598.jpg)]
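This "other way to play" matters most on the free side, where the freeing CPU may not own the object's page(s). A sketch of that path under my reading of the figure, again with hypothetical helpers:

struct page;

/* Hypothetical helpers for illustration. */
extern struct page *page_of(void *obj);
extern int page_owned_by_this_cpu(struct page *page);
extern void push_cpu_freelist(void *obj);
extern void lock_page_freelist(struct page *page);
extern void unlock_page_freelist(struct page *page);
extern void push_page_freelist(struct page *page, void *obj);

static void slab_free_sketch(void *obj)
{
        struct page *page = page_of(obj);

        if (page_owned_by_this_cpu(page)) {
                /* Fast path: straight back onto our Level 1 list. */
                push_cpu_freelist(obj);
                return;
        }

        /* Slow path: the object returns to the page's own Level 2
         * freelist under the page lock; whichever CPU later wins
         * that list can promote it into its Level 1 cache. */
        lock_page_freelist(page);
        push_page_freelist(page, obj);
        unlock_page_freelist(page);
}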


The buddy system

We have now walked briefly through the Linux kernel's slab design; not at great length, since great length is hard to digest. But finally, if Level 3 cannot provide page(s), the request ultimately falls through to the ultimate backend: the buddy system.
The buddy system is designed to prevent memory allocation from fragmenting, so it tries its best to do two things:

1). Satisfy every memory allocation request with the smallest block that fits, splitting a larger block only when necessary.
2). Merge contiguous free blocks of memory (buddies) back into larger blocks whenever possible.

We can use the following diagram to understand the above principles:


[Figure Slab10.jpg: splitting and merging in the buddy system (http://s3.51cto.com/wyfs02/M00/73/41/wKiom1X4ieTC39RgAAWN9sx0r2g463.jpg)]
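The arithmetic behind the split/merge in the diagram is what makes the buddy system cheap: for a block of order k (that is, 2^k pages) starting at page frame number pfn, its buddy is found by flipping bit k of pfn. A small user-space illustration:

#include <stdio.h>

/* The buddy of the order-k block starting at pfn: flip bit k. */
static unsigned long find_buddy_pfn(unsigned long pfn, unsigned int order)
{
        return pfn ^ (1UL << order);
}

int main(void)
{
        /* Freeing an order-0 page at pfn 12: its buddy is pfn 13.
         * If 13 is also free, they merge into an order-1 block at 12,
         * whose order-1 buddy is the block at pfn 14, and so on up. */
        printf("%lu\n", find_buddy_pfn(12, 0));  /* prints 13 */
        printf("%lu\n", find_buddy_pfn(12, 1));  /* prints 14 */
        return 0;
}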


Note that this article is about optimization, not a primer on the buddy system, so I assume everyone already understands it.
Since most slab-cache objects are small structures of no more than one page (and not only in slab: requests for more than one page are far rarer than requests for one page or less), there is a significant demand for single-page allocations. By the buddy system's allocation principle, continuously allocating large numbers of single pages gradually splits many blocks of order greater than 0 into single pages. On a single-core CPU this is not a problem, but on a multi-core CPU every CPU performs such allocations, and the buddy system's split and merge operations involve heavy linked-list manipulation, so the lock overhead is huge and must be optimized!
The Linux kernel's answer is a "per-CPU single page cache" placed in front of the buddy system to absorb its single-page allocation requests!
Each CPU has a single-page cache pool; when a single page is required, it can be taken from the current CPU's pool without locking. When there are not enough pages in the pool, the system pulls a batch of pages from the buddy system into the pool; conversely, a freed single page goes back into the per-CPU single page cache.
To keep the number of pages in the per-CPU single page cache from becoming too large or too small (too many starves the buddy system, too few fails the CPU's demand), the system maintains two watermarks: when the number of cached pages drops below the low watermark, pages are fetched in bulk from the buddy system into the pool; when it rises above the high watermark, some pages are released back to the buddy system.
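A sketch of that per-CPU single page cache. The structure is modeled on the kernel's per_cpu_pages (older kernels kept both a low and a high watermark, matching the description above); the helpers are hypothetical:

#include <linux/list.h>

struct page;

struct pcp_pages {
        int count;                 /* pages currently in this CPU's pool */
        int low;                   /* refill from the buddy system below this */
        int high;                  /* drain back to the buddy system above this */
        int batch;                 /* pages moved per refill/drain */
        struct list_head list;     /* the cached single pages */
};

/* Hypothetical helpers: each refill/drain touches the buddy system
 * once, under its lock, moving a whole batch of pages at a time. */
extern void refill_from_buddy(struct pcp_pages *pcp, int batch);
extern void drain_to_buddy(struct pcp_pages *pcp, int batch);
extern struct page *take_page(struct pcp_pages *pcp);   /* lock-free */
extern void put_page_back(struct pcp_pages *pcp, struct page *page);

static struct page *pcp_alloc(struct pcp_pages *pcp)
{
        if (pcp->count <= pcp->low)
                refill_from_buddy(pcp, pcp->batch);
        return take_page(pcp);
}

static void pcp_free(struct pcp_pages *pcp, struct page *page)
{
        put_page_back(pcp, page);
        if (pcp->count >= pcp->high)
                drain_to_buddy(pcp, pcp->batch);
}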

Summary

In a multi-CPU operating-system kernel, the key overhead is the cost of locks. I think this was baked in at the start of the design, because at the outset multi-core CPUs did not exist: on a single-core CPU, almost all protection of shared data could be implemented simply by disabling interrupts and disabling preemption. When the multi-core era arrived, operating systems were translated onto the new platform almost as-is, and synchronization was bolted on afterwards on top of the single-core foundation. In short, the current mainstream operating systems were created in the single-core era, born for single-core environments, and for multi-core environments their original designs may simply not fit.
In any case, the only way to optimize is to forbid, or at least minimize, lock operations. The common idea is to create a "per-CPU cache" for each shared key data structure, and these caches fall into two types:

1). Data-path caches.

Data structures such as the routing table can be protected with RCU; of course, creating a local routing-table cache for each CPU also works well. The question is when to update them: since all the caches are peers, a batch synchronization mechanism is necessary.
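As an illustration of the RCU option: readers on the data path never take a lock, while the updater swaps the pointer and waits for readers to drain. rcu_read_lock / rcu_dereference / rcu_assign_pointer / synchronize_rcu are the real API; the routing-table structure and names here are made up:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct route_table { /* ... routes ... */ int version; };

static struct route_table __rcu *active_table;

static int route_lookup(void)
{
        struct route_table *t;
        int v;

        rcu_read_lock();                 /* lock-free read side */
        t = rcu_dereference(active_table);
        v = t ? t->version : -1;
        rcu_read_unlock();
        return v;
}

static void route_update(struct route_table *newt)
{
        struct route_table *old;

        old = rcu_dereference_protected(active_table, 1);
        rcu_assign_pointer(active_table, newt);  /* publish the new table */
        synchronize_rcu();                       /* wait out all readers */
        kfree(old);
}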

2). Management-mechanism caches.

For example, the slab object cache, whose lifecycle depends entirely on the user: it has no synchronization problem, but it does have a management problem. The idea of a hierarchical cache works well here. It closely resembles the CPU's L1/L2/L3 caches, a mechanism whose cost rises smoothly while its capacity grows step by step, and combined with well-designed promotion and demotion (swap-in/swap-out) algorithms, the effect is remarkable.


