Application of device-mapper-multipath on Linux

Over the past two days I have been testing whether Red Flag DC 5.0 for pSeries, running on IBM OpenPower, can connect properly to EMC CLARiiON CX series storage, and along the way exercising the device-mapper-multipath tool, which is built on device-mapper.

Here are my test notes:

Purpose:

In the OpenPower environment, connect to EMC CX storage from Red Flag DC 5.0 and use the device-mapper-multipath software that ships with the system to test multi-path redundancy and load balancing, ensuring high availability and reliability of user data.

Test method:

To ensure the availability and reliability of user data, the test method is as follows:

1. Availability test: OpenPower is connected to the EMC storage through a Fibre Channel switch, and Red Flag Linux on OpenPower can correctly identify the storage and read and write data on it.

2. Reliability test: Data is read from and written to the EMC storage through MPIO (MultiPath I/O) while the I/O transfer status is monitored. During the reads and writes, the fibre cables to the active controller of the EMC storage are unplugged one at a time to cause path failures, so that failover behavior can be observed; the cables are then reconnected to test path "healing".

Test Results

After a full day of intensive testing, the test succeeded in achieving its expected goals.

On the OpenPower machine, the Red Flag Linux system can:

1. Access EMC CX storage correctly.

2. Access EMC CX storage in a load-balanced fashion using the device-mapper-multipath software that ships with the system.

3. Automatically switch paths when a fibre link or a single controller in the EMC CX storage fails, continuing to read and write the storage device so that data reliability is maintained.

Test environment:

Server: OpenPower 710 (1 CPU / 1 GB memory / 73.4 GB disk)

Fibre Channel HBAs: two Emulex LP10000 cards

Switches: one Brocade 2400 and one McData 140M

Storage: EMC CX500 real000019 (dual controller)

Operating System: Red Flag DC 5.0 for IBM pSeries

Test procedure

1. Deploy the test environment. The two Emulex HBAs on the host are connected to the Fibre Channel switches, and the switches are connected to the EMC CX storage, forming the SAN topology. A 100 GB LUN is then allocated to the host.

2. Install the operating system on the host (as preparation for the test) and install the latest device-mapper-multipath package.

The device-mapper-multipath user-space tools are then used to verify multi-path load balancing and path failover:

Use the fdisk command to view the four disk devices identified by the system. These are device names seen over different paths that actually point to the same LUN on the storage, which shows that the Red Flag operating system has correctly identified the LUN assigned by the EMC CX storage and is ready for the next step, multi-path management. The command and output are as follows:

# fdisk -l

Disk /dev/sdf: 107.3 GB, 107374182400 bytes
64 heads, 32 sectors/track, 102400 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdh: 107.3 GB, 107374182400 bytes
64 heads, 32 sectors/track, 102400 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/sdj: 107.3 GB, 107374182400 bytes
64 heads, 32 sectors/track, 102400 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdj doesn't contain a valid partition table

Disk /dev/sdl: 107.3 GB, 107374182400 bytes
64 heads, 32 sectors/track, 102400 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

Disk /dev/sdl doesn't contain a valid partition table

In fact, all four devices correspond to a single LUN, seen over four different paths.
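As a quick sanity check, the geometry that fdisk reports can be multiplied out to confirm that every path device has exactly the same size (a minimal sketch using shell arithmetic):

```shell
# Multiply out the fdisk geometry:
# 102400 cylinders x 64 heads x 32 sectors/track x 512 bytes/sector
bytes=$((102400 * 64 * 32 * 512))
echo "$bytes"   # prints 107374182400, matching every path device above
```

Identical sizes alone do not prove the devices are the same LUN, but together with the multipath topology shown below the mapping is unambiguous.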

3. Start the multi-path management software

# modprobe dm-multipath (load the dm-multipath kernel module)

Note: This module is not loaded by default at system startup. If it is needed for an application deployment, it can be configured to load when the system boots.
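One way to make this persistent on a SysV-init system such as DC 5.0 (a sketch only; the exact file location is distribution-specific, so treat /etc/rc.modules as an assumption):

```shell
# Load dm-multipath at every boot; /etc/rc.modules is one common hook on
# RHEL4-era systems, but the location varies by distribution
echo "modprobe dm-multipath" >> /etc/rc.modules
chmod +x /etc/rc.modules

# Start the multipath daemon automatically in the default runlevels
chkconfig multipathd on
```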

# /etc/init.d/multipathd start (start the multipathd daemon)

# multipath -v3 (assemble the multipath devices)

# multipath -ll (display the current multipath topology)

3600601604b991100f4e5b5c83ef5da11
[features="1 queue_if_no_path"] [hwhandler="1 emc"]
\_ round-robin 0 [active]
 \_ 1:0:2:1 sdf 8:80   [ready][active]
 \_ 2:0:1:1 sdl 8:176  [ready][active]
\_ round-robin 0 [enabled]
 \_ 1:0:3:1 sdh 8:112  [ready][active]
 \_ 2:0:0:1 sdj 8:144  [ready][active]

The paths are divided into two groups; these are in fact views of the same device through the two controllers. One group has status [active], indicating the currently active controller: reads and writes will go through /dev/sdf and /dev/sdl under that controller. The devices /dev/sdh and /dev/sdj, under the controller in the [enabled] group, are used only when the [active] controller fails or a trespass is executed. This is confirmed in the subsequent tests.
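This grouping is governed by the multipath configuration. A minimal /etc/multipath.conf sketch for a CLARiiON-class active/passive array might look as follows; the attribute names match multipath-tools of this era, but the specific values here are illustrative assumptions, not the configuration used in this test:

```
# /etc/multipath.conf (sketch; values are assumptions for a CLARiiON-class array)
devices {
        device {
                vendor                  "DGC"
                product                 "*"
                path_grouping_policy    group_by_prio
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler        "1 emc"
                path_checker            emc_clariion
                failback                immediate
                no_path_retry           queue
        }
}
```

The group_by_prio policy is what places the paths behind each controller into separate priority groups, producing the [active]/[enabled] split shown above.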

4. Create partitions required for testing on emc cx Storage

# pvcreate /dev/dm-0 (create a physical volume)

# vgcreate vgtest /dev/dm-0 (create a volume group)

Volume group "vgtest" successfully created

# lvcreate -L 50G -n lvtest1 vgtest (create a 50 GB logical volume)

Logical volume "lvtest1" created

5. Load Balancing Test

Run the dd command to write to the device and watch the I/O status with iostat. The command and output are as follows:

# dd if=/dev/zero of=/dev/vgtest/lvtest1

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.50    0.00    3.47   48.51   47.52

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sdf             756.57      6044.44         0.00       5984          0
sdh               0.00         0.00         0.00          0          0
sdj               0.00         0.00         0.00          0          0
sdl             334.34      2682.83         0.00       2656          0

The output above shows that I/O to /dev/vgtest/lvtest1 actually goes through /dev/dm-0 via both currently active paths, /dev/sdf and /dev/sdl, which are used together to access the underlying LUN.
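For reference, the two active-path rates in the sample can be summed to estimate the aggregate throughput; iostat block counts are in 512-byte sectors, so the conversion below is a small sketch:

```shell
# Sum the per-path rates from the iostat sample above (512-byte blocks)
# and convert to MB/s
awk 'BEGIN { blks = 6044.44 + 2682.83;
             printf "%.2f blocks/s = %.2f MB/s\n", blks, blks * 512 / 1000000 }'
# prints: 8727.27 blocks/s = 4.47 MB/s
```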

6. Test path Switching

First, we unplug the fibre cable from port A on the server. In less than 10 seconds, MPIO successfully switches from the failed path /dev/sdl to the other path, /dev/sdf. Sample output:

# iostat 1

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.50    0.00    6.47   46.77   46.27

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.98         0.00         7.84          0          8
sdf            1709.80     13678.43         0.00      13952          0
sdh               0.00         0.00         0.00          0          0
sdj               0.00         0.00         0.00          0          0
sdl               0.00         0.00         0.00          0          0

Next, we reconnect the fibre cable. Again in less than 10 seconds (this interval is configurable), path "healing" is verified. Sample output:

# iostat 1

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.50    0.00    3.48   48.76   47.26

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sdf             504.00      4024.00         0.00       4024          0
sdh               0.00         0.00         0.00          0          0
sdj               0.00         0.00         0.00          0          0
sdl             594.00      4760.00         0.00       4760          0
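The roughly 10-second detection and healing window observed here is driven by how often multipathd polls its paths. It can be tuned in /etc/multipath.conf; a sketch follows, where the attribute names come from multipath-tools of this era and the values are illustrative:

```
defaults {
        polling_interval   5          # seconds between path health checks
        failback           immediate  # reinstate a restored path group at once
}
```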

The same test results can be reproduced with the second Fibre Channel card on the server.

7. Controller switchover Test

In the same way, we unplug a fibre cable from the currently active EMC CX storage controller and observe the following output:

# iostat 1

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.50    0.00    6.47   46.77   46.27

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.98         0.00         7.84          0          8
sdf            1609.80      13670.2         0.00      14972          0
sdh               0.00         0.00         0.00          0          0
sdj               0.00         0.00         0.00          0          0
sdl               0.00         0.00         0.00          0          0

The output shows that I/O again switches from the failed /dev/sdl to /dev/sdf, the same effect as removing a fibre cable on the server side.

Next, we re-insert the fibre cable. In less than 10 seconds, we see the following output:

# iostat 1

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.50    0.00    3.48   48.76   47.26

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sdf             504.00      4024.00         0.00       4024          0
sdh               0.00         0.00         0.00          0          0
sdj               0.00         0.00         0.00          0          0
sdl             594.00      4760.00         0.00       4760          0

This indicates that the path has been successfully healed.

Finally, we unplug the fibre cable that is currently in the active state. After about 10 seconds, the output is as follows:

# iostat 1

avg-cpu:  %user   %nice    %sys %iowait   %idle
           0.50    0.00    7.50   46.00   46.00

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               0.00         0.00         0.00          0          0
sdf               0.00         0.00         0.00          0          0
sdh            1910.00     15264.00      1552.00      15264       1552
sdj             149.00       976.00     15024.00        976
sdl               0.00         0.00         0.00          0          0

Checking the CX storage at the same time confirms that it has successfully executed a trespass. The trespass is triggered through the system's built-in dm-emc.ko kernel module, and this behavior is essentially the same as that of EMC's PowerPath software.

The same test results can be reproduced on the second controller.

Test conclusion

From the above test steps and output, the following conclusions can be drawn:

On the OpenPower machine, with Red Flag Linux DC 5.0 installed and connected to EMC CX storage:

1. The Red Flag Linux operating system can correctly access and use EMC CX storage devices.

2. Using the device-mapper-multipath software that ships with Red Flag Linux, the EMC CX storage can be accessed with a load-balanced MPIO configuration.

3. When a fibre link or a single controller in the EMC CX storage fails, the system automatically switches paths and continues reading and writing the storage device, ensuring data reliability.