NUMA issues with MongoDB
Question: the following warning is displayed in the log when logging in to MongoDB:
[[Email protected]_180 ~]$ mongo -u root -p xxxxx --authenticationDatabase admin
MongoDB shell version: 2.6.4
connecting to: test
Server has startup warnings:
2015-07-16T04:35:34.694+0800 [initandlisten]
2015-07-16T04:35:34.694+0800 [initandlisten] ** WARNING: You are running on a NUMA machine.
2015-07-16T04:35:34.694+0800 [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
2015-07-16T04:35:34.694+0800 [initandlisten] **              numactl --interleave=all mongod [other options]
2015-07-16T04:35:34.694+0800 [initandlisten]
Solution:
1. Add numactl --interleave=all in front of the original start command, e.g.:
# numactl --interleave=all ${mongodb_home}/bin/mongod --config conf/mongodb.conf
2. Modify the kernel parameter:
echo 0 > /proc/sys/vm/zone_reclaim_mode
Reference: http://www.mongodb.org/display/DOCS/NUMA
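The two solution steps can be combined into a small startup wrapper. This is only a sketch: the ${mongodb_home} variable and conf/mongodb.conf path are taken from the example above, and the check for numactl is an addition of mine.

```shell
#!/bin/sh
# Sketch of a mongod startup wrapper for NUMA machines.
# Assumes ${mongodb_home} points at the MongoDB install directory.

# Step 2 from above: disable zone reclaim so Linux allocates from a
# remote node instead of reclaiming/swapping when the local node is full.
# (Requires root.)
echo 0 > /proc/sys/vm/zone_reclaim_mode

if command -v numactl >/dev/null 2>&1; then
    # Step 1 from above: interleave memory across all NUMA nodes.
    exec numactl --interleave=all "${mongodb_home}/bin/mongod" --config conf/mongodb.conf
else
    # numactl not installed: start normally (the startup warning will remain).
    exec "${mongodb_home}/bin/mongod" --config conf/mongodb.conf
fi
```

Setting zone_reclaim_mode in the wrapper only lasts until reboot; to make it permanent, the usual approach is an entry such as vm.zone_reclaim_mode = 0 in /etc/sysctl.conf.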
Here is an introduction to NUMA:
I. NUMA and SMP
NUMA and SMP are two CPU-related hardware architectures. In the SMP architecture, all CPUs compete for a single bus to access all of memory; the advantage is resource sharing, and the disadvantage is fierce bus contention. As the number of CPUs in a PC server grows (not just the number of CPU cores), the drawback of bus contention becomes increasingly apparent, so Intel introduced the NUMA architecture with its Nehalem CPUs, and AMD's Opteron CPUs are based on the same architecture.
The biggest feature of NUMA is the introduction of the concepts of node and distance. For the two most valuable hardware resources, CPU and memory, NUMA divides them into resource groups (nodes) in a nearly strict manner, with roughly equal CPU and memory in each group. The number of resource groups depends on the number of physical CPUs (most current PC servers have two physical CPUs with 4 cores each). The concept of distance defines the cost of using resources across nodes, providing data for resource-scheduling optimization algorithms.
II. NUMA-related policies
1. Each process (or thread) inherits its NUMA policy from the parent process and is assigned a preferred node. A process can use resources on other nodes if its NUMA policy allows it.
2. NUMA's CPU allocation policies are cpunodebind and physcpubind. cpunodebind restricts the process to run on the CPUs of specified nodes, while physcpubind specifies, at a finer granularity, which cores it runs on.
3. NUMA's memory allocation policies are localalloc, preferred, membind, and interleave. localalloc specifies that the process allocates memory from the current node; preferred more loosely designates a recommended node, and if there is not enough memory on that node the process may try others. membind specifies a set of nodes, and the process may only allocate memory from those nodes. interleave specifies that the process allocates memory from the given nodes in round-robin (RR) fashion.
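These policies map directly onto numactl flags. A few illustrative invocations (my_app is a placeholder for any command, and the node/core numbers are arbitrary examples):

```shell
# Interleave memory across all nodes -- the policy MongoDB recommends
numactl --interleave=all mongod --config conf/mongodb.conf

# cpunodebind + membind: run on node 0's CPUs, allocate only from node 0
numactl --cpunodebind=0 --membind=0 ./my_app

# physcpubind: pin to specific cores, finer-grained than cpunodebind
numactl --physcpubind=0,1,2,3 ./my_app

# preferred: favor node 1 for memory, but fall back to other nodes when full
numactl --preferred=1 ./my_app

# localalloc: always allocate from the node the task is running on
numactl --localalloc ./my_app
```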
III. The relationship between NUMA and swap
As you may have noticed, NUMA's memory allocation policies are not fair to processes (or threads). In current Red Hat Linux, localalloc is the default NUMA memory allocation policy, which makes it easy for a resource-hungry program to exhaust the memory of a single node. When that node's memory runs out and Linux has bound a memory-hungry process (or thread) to it, swapping duly occurs, even though plenty of page cache, and even free memory, may still be available on other nodes.
IV. Solving swap issues
Although the principle behind NUMA is relatively complex, the actual fix for the swap problem is simple: just use numactl --interleave to change the NUMA policy before starting MySQL.
It is worth noting that the numactl command does not just adjust the NUMA policy; it can also be used to view the resource status of the current nodes, and it is well worth studying.
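For example, the following inspection commands are useful (their output depends on the machine's topology, so none is shown here; numastat ships in the same package as numactl on most distributions):

```shell
# List each node's CPUs, total and free memory, and the inter-node distance matrix
numactl --hardware

# Show the NUMA policy (interleave mask, CPU bind, membind) of the current shell
numactl --show

# Per-node allocation statistics: numa_hit, numa_miss, numa_foreign, etc.
numastat
```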