Cgroup test: storage device IOPS allocation

Source: Internet
Author: User

1 Usage: Create a hierarchy and attach subsystems

    • First, create the file system mount point that serves as the root of the hierarchy:

mkdir /cgroup/name

mkdir /cgroup/cpu_and_mem

    • Mount one or more subsystems on this mount point:

mount -t cgroup -o subsystems name /cgroup/name

mount -t cgroup -o cpu,cpuset,memory cpu_and_mem /cgroup/cpu_and_mem

    • List the subsystems at this point:

~]# lssubsys -am
cpu,cpuset,memory /cgroup/cpu_and_mem
net_cls
ns
cpuacct
devices
freezer
blkio

    • Remount to add the cpuacct subsystem:

mount -t cgroup -o remount,cpu,cpuset,cpuacct,memory cpu_and_mem /cgroup/cpu_and_mem

    • View the subsystems again:

~]# lssubsys -am
cpu,cpuacct,cpuset,memory /cgroup/cpu_and_mem
net_cls
ns
devices
freezer
blkio

    • Create a child group: mkdir /cgroup/hierarchy/name/child_name
    • Example: mkdir /cgroup/cpuset/lab1/group1
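As a sketch of what using such a child group looks like (assuming, as in the example above, a cpuset hierarchy mounted at /cgroup/cpuset; the guard makes this a harmless no-op on machines without it):

```shell
#!/bin/sh
# Hypothetical sketch: create a child group and move the current shell into it.
# Assumes a v1 cpuset hierarchy mounted at /cgroup/cpuset, as in the steps above.
CG=/cgroup/cpuset/lab1/group1
if mkdir -p "$CG" 2>/dev/null && [ -w "$CG/tasks" ]; then
    # Writing a PID to the group's tasks file moves that task into the group.
    echo $$ > "$CG/tasks"
    echo "moved PID $$ into $CG"
else
    echo "no cpuset hierarchy at /cgroup; skipping"
fi
```

On a machine without the hierarchy this only prints the skip message; on a real cgroup v1 setup the controller populates the child directory's control files automatically when mkdir creates it.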

2 Usage: Process behavior in the root control group

For the blkio and cpu subsystems, resources are allocated differently for processes in the root cgroup than for processes in a sub-cgroup.

For example, suppose a root cgroup mounted at /rootgroup has two sub-cgroups, /rootgroup/red/ and /rootgroup/blue/.

Set cpu.shares to 1 in all three cgroups. If you then create one process in each cgroup, each process gets one third of the CPU.

When more processes are added to a sub-cgroup, that sub-cgroup as a whole still gets one third of the CPU, which its processes must share.

But if you create two more processes directly in the root cgroup, CPU time is divided per process: each of the five claimants gets one fifth.

So when using the blkio and cpu subsystems, prefer placing processes in sub-cgroups.
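The arithmetic above can be checked with a quick sketch (the numbers are from the hypothetical scenario, not live measurements):

```shell
#!/bin/sh
# Three siblings at the top level (the root's own tasks, red, blue),
# each with equal cpu.shares, split the CPU three ways:
awk 'BEGIN { printf "share per sibling cgroup: %.3f\n", 1/3 }'
# Two extra processes placed directly in the root cgroup compete individually
# with the red and blue groups, making five equal claimants at the top level:
awk 'BEGIN { printf "share per claimant: %.3f\n", 1/5 }'
```

This prints 0.333 and 0.200, matching the one-third and one-fifth figures above.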

SUBSYSTEM: blkio

This subsystem controls and monitors I/O access to block devices by tasks in a cgroup.

blkio.weight

Specifies the default relative weight of block I/O access for the cgroup, in the range 100 to 1000.

blkio.weight_device

Specifies the relative weight of I/O access to a specific device for the cgroup, in the range 100 to 1000.

blkio.throttle.read_bps_device

The upper limit, in bytes per second, on reads from a device. Entries have three fields: major, minor, and bytes_per_second.

blkio.throttle.write_bps_device

The upper limit, in bytes per second, on writes to a device. Entries use the same three fields.
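The throttle knobs are set by writing "major:minor value" lines into the files above. A hedged sketch (the device numbers 8:0, i.e. /dev/sda, and the 10 MB/s cap are assumptions; the guard skips cleanly where the v1 blkio controller is not mounted):

```shell
#!/bin/sh
# Sketch: cap reads from device 8:0 (assumed to be /dev/sda) at 10 MB/s
# for a cgroup named "test1". Requires root and a mounted v1 blkio controller.
CG=/sys/fs/cgroup/blkio/test1
if mkdir -p "$CG" 2>/dev/null \
   && echo "8:0 10485760" > "$CG/blkio.throttle.read_bps_device" 2>/dev/null; then
    # 10485760 = 10 * 1024 * 1024 bytes per second
    echo "read throttle set on $CG"
else
    echo "v1 blkio controller not available; skipping"
fi
```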

As servers pack in ever more storage, especially PCIe flash cards, IOPS capacity is now commonly on the order of a hundred thousand, far more than any single service needs. A single server can therefore do a lot of work and host many services at once, which makes guaranteeing the system's quality of service very important.

In our projects we tend to use cgroups to isolate and limit resources, because cgroups are cheap to use and easy to set up; see the kernel's cgroups documentation for background.

We are particularly interested in cgroup's blkio subsystem, which offers two modes of restriction:
1. Throttle: limit the IOPS or throughput each process can use.
2. Weight: divide the available IOPS among processes by relative weight; this mode only works with the CFQ scheduler.
The specific parameters are described in the cgroup documentation mentioned above.
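The weight mode is driven by the blkio.weight files. A sketch using the 500:100 split from the demo later in this article (the group names test1/test2 are assumptions matching that demo; the guard skips where the v1 controller is unavailable):

```shell
#!/bin/sh
# Sketch: give cgroup test1 five times the I/O weight of test2 (500 vs 100).
# Requires root, a mounted v1 blkio controller, and the CFQ scheduler.
ROOT=/sys/fs/cgroup/blkio
if mkdir -p "$ROOT/test1" "$ROOT/test2" 2>/dev/null \
   && echo 500 > "$ROOT/test1/blkio.weight" 2>/dev/null \
   && echo 100 > "$ROOT/test2/blkio.weight" 2>/dev/null; then
    echo "weights set: test1=500 test2=100"
else
    echo "v1 blkio controller not available; skipping"
fi
```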


There are a few things to keep in mind when using blkio's weight mode:
1. Use direct I/O. With buffered I/O, the process that finally issues the write is not the process that initiated it, so the results will deviate badly.
2. The I/O scheduler must be CFQ.
3. The test tool must support cgroup restrictions.
4. Random I/O works best.
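A small pre-flight script can check the first three items (the device name sda is an assumption; adjust it for your disk):

```shell
#!/bin/sh
# Check prerequisites for a blkio weight test (sketch).
dev=sda   # assumption: the disk under test
sched=/sys/block/$dev/queue/scheduler
if [ -r "$sched" ] && grep -q '\[cfq\]' "$sched"; then
    echo "scheduler: cfq (ok)"
else
    echo "scheduler: CFQ not active for $dev"
fi
# fio satisfies items 1 and 3: direct=1 enables direct I/O, and the
# cgroup= / cgroup_weight= job options place each job in a cgroup.
if command -v fio >/dev/null 2>&1; then
    echo "fio: found"
else
    echo "fio: not installed"
fi
```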

Here is a rough demo of using fio to constrain the I/O a process can use. We construct the following scenario:

We create two 1 GB files for random mixed reads and writes, with blkio weights of 500 and 100 out of a default total of 1000. In theory, process A should then get about five times the I/O capability of process B.

The operation is as follows:

$ cat test.fio
[global]
bs=4k
ioengine=libaio
iodepth=16
direct=1
rw=randrw
rwmixread=90
time_based
runtime=180
cgroup_nodelete=1

[test1]
filename=test1.dat
size=1g
cgroup_weight=500
cgroup=test1

[test2]
filename=test2.dat
size=1g
cgroup_weight=100
cgroup=test2

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

$ sudo fio test.fio
test1: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=16
test2: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio 2.0
Starting 2 processes
Jobs: 2 (f=2): [mm] [5.5% done] [618K/90K /s] [151/22 iops] [eta 02m:51s] ...

From another terminal we can see how the I/O capability is allocated:

$ sudo lssubsys -am
cpuset
net_cls
perf_event
cpu /sys/fs/cgroup/cpu
cpuacct /sys/fs/cgroup/cpuacct
memory /sys/fs/cgroup/memory
devices /sys/fs/cgroup/devices
freezer /sys/fs/cgroup/freezer
blkio /sys/fs/cgroup/blkio

$ pgrep -x fio
3837
3839
3840

$ cat /sys/fs/cgroup/blkio/test1/tasks
3839

$ cat /sys/fs/cgroup/blkio/test2/tasks
3840

$ sudo iotop

The split is almost 5:1, which is in line with expectations.

Since we were worried about kernel stability, we also used fio to stress the cgroup blkio module over long runs and collected the data as a reference for our applications.
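A long-running reliability test can be as simple as rerunning the job file in a loop and keeping the logs (test.fio is the job file above; the run count here is a placeholder, and the guard keeps the loop harmless where fio or the job file is absent):

```shell
#!/bin/sh
# Sketch of a soak test: rerun the fio job repeatedly and keep per-run logs.
runs=3   # placeholder; raise this for a real long-running test
i=1
while [ "$i" -le "$runs" ]; do
    echo "run $i"
    if command -v fio >/dev/null 2>&1 && [ -f test.fio ]; then
        fio --output="fio-run-$i.log" test.fio || echo "  run $i failed; continuing"
    else
        echo "  fio or test.fio missing; skipping"
    fi
    i=$((i + 1))
done
```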

Have a good time!
