NUMA Trade-offs

A modern machine has multiple CPUs and multiple blocks of memory. We used to treat memory as one shared pool, with every CPU paying the same cost to access it; this is the classic SMP model. As the number of processors grows, however, contention on that shared memory increases, and once memory access becomes the bottleneck, performance stops scaling. NUMA (Non-Uniform Memory Access) was introduced for exactly this situation. For example, suppose a machine has 2 processors and 4 blocks of memory. We group 1 processor with 2 memory blocks and call that a NUMA node, so this machine has two NUMA nodes. Physically, the processor and memory blocks inside a NUMA node are closer to each other, so access between them is faster. In this machine the two processors (CPU1 and CPU2) each sit next to their own memory blocks (memory1.1 and memory1.2 for node 1, memory2.1 and memory2.2 for node 2), so CPU1 in NUMA node 1 accesses memory1.1 and memory1.2 more quickly than it accesses memory2.1 and memory2.2. Under NUMA, therefore, the highest efficiency is achieved when each CPU accesses only the memory blocks within its own node.
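To make the node layout concrete, here is a minimal sketch (not from the original article) that uses Linux's libnuma to print the number of nodes, the memory attached to each node, and the relative access distance between nodes; it assumes a Linux machine with libnuma installed and is built with `gcc numa_topo.c -lnuma` (file name illustrative).

```c
/* Minimal NUMA topology probe with libnuma (illustrative, not from the article). */
#include <stdio.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) {                 /* -1 means the kernel has no NUMA support */
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int max_node = numa_max_node();             /* highest node id, e.g. 1 on a 2-node box */
    printf("NUMA nodes: %d\n", max_node + 1);

    for (int n = 0; n <= max_node; n++) {
        long long free_bytes;
        long long total = numa_node_size64(n, &free_bytes);   /* memory attached to node n */
        printf("node %d: %lld MB total, %lld MB free\n",
               n, total >> 20, free_bytes >> 20);
    }

    /* numa_distance() reports relative access cost: 10 means local; a remote node is
     * typically 20 or more, which is the slower cross-node access described above. */
    if (max_node >= 1)
        printf("distance 0->0: %d, 0->1: %d\n", numa_distance(0, 0), numa_distance(0, 1));
    return 0;
}
```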

Running a program under numactl with --membind (-m) and --physcpubind lets you specify which CPUs and which node's memory the program may use. A CPU-topology comparison shows the gap between running with only one node's resources and running across multiple nodes (roughly 28s versus 38s). So it can make sense to confine a program to a single NUMA node.
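The same binding can also be done from inside the program rather than on the numactl command line. The sketch below is a hedged illustration using libnuma: it pins the calling thread to node 0's CPUs and restricts allocations to node 0's memory; node 0 and the 64 MB buffer size are assumptions chosen only for illustration.

```c
/* Illustrative in-process equivalent of `numactl --cpunodebind=0 --membind=0`. */
#include <string.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) return 1;

    numa_run_on_node(0);                         /* run this thread only on node 0's CPUs */

    struct bitmask *nodes = numa_allocate_nodemask();
    numa_bitmask_setbit(nodes, 0);
    numa_set_membind(nodes);                     /* allocate memory only from node 0 */
    numa_free_nodemask(nodes);

    /* An explicit node-local allocation (64 MB is an arbitrary example size). */
    size_t sz = (size_t)64 << 20;
    void *buf = numa_alloc_onnode(sz, 0);
    if (buf) {
        memset(buf, 0, sz);                      /* touch the pages so they are actually placed */
        numa_free(buf, sz);
    }
    return 0;
}
```

From the shell, the equivalent would be something like numactl --cpunodebind=0 --membind=0 ./your_program (the program name is hypothetical).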

But is it always a good idea to bind a program this way? This is the NUMA trap. An article on the "crime and punishment" of swap raises the problem: the server still has free memory, yet it has already started to use swap, and the machine may even stall. This is often caused by the NUMA restriction: if a process is limited to the memory of its own NUMA node, then once that node's memory is exhausted it will not borrow memory from other nodes but will start swapping instead, or, worse, if the machine has no swap configured, it may simply crash. In that case you can run the process with numactl --interleave=all to remove the per-node restriction.
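As a hedged sketch of what --interleave=all corresponds to inside a program, the following uses libnuma to spread allocations round-robin across all nodes so that no single node fills up first; the 256 MB size is illustrative.

```c
/* Illustrative in-process equivalent of `numactl --interleave=all`. */
#include <string.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) return 1;

    /* Make every subsequent allocation interleave across all nodes... */
    numa_set_interleave_mask(numa_all_nodes_ptr);

    /* ...or interleave a single explicit allocation (256 MB is an example size). */
    size_t sz = (size_t)256 << 20;
    void *buf = numa_alloc_interleaved(sz);      /* pages striped across all nodes */
    if (buf) {
        memset(buf, 0, sz);                      /* committing the pages realizes the striping */
        numa_free(buf, sz);
    }
    return 0;
}
```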

In summary, whether to use NUMA bindings should be decided according to the specific workload.

If your program uses a large amount of memory, you should usually turn off the NUMA node restriction, because it is likely to run into the NUMA trap.

Conversely, if your program does not use much memory but needs the fastest possible run time, you should usually restrict it to a single NUMA node.
