Multipath for persistent LUN device names

Multipath provides persistent LUN device names.

I. Background

Based on the transmission protocol, network adapters can be divided into three types: Ethernet adapters, FC adapters, and iSCSI adapters.

(1) Ethernet adapter: formally an Ethernet Adapter. Its transmission protocol is IP, and it usually connects to an Ethernet switch over fiber optic cable or twisted pair. Its ports are either optical or electrical. Optical ports carry fiber and use SFP (2 Gb/s) or GBIC (1 Gb/s) modules with SC, ST, or LC connectors. The common electrical port today is RJ45 for twisted pair; connectors for coaxial cable still exist but are rarely used.

(2) FC adapter: also called a fiber NIC or Fibre Channel HBA (Host Bus Adapter). Its transmission protocol is Fibre Channel, and it usually connects to a Fibre Channel switch over fiber optic cable. Ports are optical or electrical; optical ports use SFP (2 Gb/s) or GBIC (1 Gb/s) modules with SC or LC connectors, while electrical ports are typically DB9 or HSSDC. "Fibre NIC" normally refers to the FC HBA card installed in a server and used to attach external storage; an Ethernet card with an optical port is instead called an "optical Ethernet card", which also sits in the server but connects to an Ethernet switch with optical ports.

(3) iSCSI adapter: Internet Small Computer System Interface, formally an iSCSI HBA. It carries the iSCSI protocol, and its port types are the same as an Ethernet card's. iSCSI (Internet SCSI) is a command set that defines how the SCSI protocol is transported over TCP/IP networks. It extends the initiator and target defined by SCSI from the original SCSI bus onto the Internet, removing the distance limits of SCSI.

FC SAN and IP SAN are the two popular SAN storage solutions:

(1) After a SAN device is attached to a system, it is presented as one or more target IDs, and its unit of logical allocation is the LUN (Logical Unit Number).

(2) IP SAN is also known as iSCSI (Internet Small Computer System Interface). The core of iSCSI is carrying SCSI over TCP/IP: SCSI packets are encapsulated in iSCSI and TCP/IP packets, so SCSI commands and data can travel over an ordinary Ethernet network.

Now back to Multipath. Besides LUN device name persistence, it has a second feature: multipath round-robin, which improves I/O capacity. The target device can be reached through multiple NICs, and I/O is spread across those paths. In production environments, multipath is commonly used to provide both LUN name persistence and multipath access.
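To make this concrete, here is a minimal sketch of what multi-session access looks like from the host. The device names (/dev/sdf, /dev/sdg) and the alias rac-share match the walkthrough below; the session numbers and the output lines are only illustrative.

[root@rac1 ~]# iscsiadm -m session             # two sessions to the same target, one per NIC
tcp: [2] 192.168.6.1:3260,1 iqn.2006-01.com.san
tcp: [3] 192.168.6.1:3260,1 iqn.2006-01.com.san
[root@rac1 ~]# fdisk -l | grep "Disk /dev/sd"  # each session surfaces its own SCSI block device
Disk /dev/sdf: 39.7 GB, 39795556352 bytes
Disk /dev/sdg: 39.7 GB, 39795556352 bytes
[root@rac1 ~]# multipath -ll                   # multipath collapses both paths into one persistent device
rac-share (14f504e46494c450034594d6462472d542545442d6a714841) dm-0 ...
\_ round-robin 0 ... sdf ... sdg ...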
Note: there is a catch here. When we configure multi-session access to a target, each NIC produces its own /dev/sd* device. This was already visible in yesterday's experiment: there a LUN was mapped to the next available /dev/sd* device, and here one LUN mapping can end up on several /dev/sd* devices. That is exactly why LUN device name persistence matters. The ID (wwid) of each target, however, is unique: no matter how many /dev/sd* devices a LUN is mapped to during multi-session access, I use the target's ID when configuring Multipath, which guarantees that the target is identified uniquely.

II. Installation and configuration

2.1 Install Multipath

Check the related packages:

[root@rac1 ~]# rpm -qa | grep device-mapper
device-mapper-multipath-0.4.7-30.el5
device-mapper-event-1.02.32-1.el5
device-mapper-1.02.32-1.el5

If they are not installed, take them from the system installation media:

device-mapper-1.02.32-1.el5.i386.rpm
device-mapper-event-1.02.32-1.el5.i386.rpm
device-mapper-multipath-0.4.7-30.el5.i386.rpm

Installation is simple:

rpm -Uvh device-mapper-*.rpm

Notes on the packages:

(1) device-mapper-multipath provides the multipathd and multipath tools and configuration files such as multipath.conf. These tools create and configure multipath devices through device mapper's ioctl interface (by calling device-mapper's user-space library); the multipath devices they create appear under /dev/mapper.

(2) device-mapper consists of a kernel part and a user-space part. The kernel part is the device-mapper core (dm-mod.ko) plus a set of target drivers (dm-multipath.ko). dm-mod.ko is the basis of the multipath implementation; dm-multipath is just one target driver of dm. The core performs the device mapping, while the target driver processes the I/O arriving at the mapped device according to the mapping relationship and its own characteristics. The core also exposes an interface so that user space can talk to the kernel through ioctl and direct the kernel driver's behaviour, for example how mapped devices are created and what attributes they have. The user-space part is the device-mapper package, which contains the dmsetup tool and libraries that help create and configure mapped devices. These libraries are mainly abstractions and wrappers around the ioctl communication, and the device-mapper-multipath programs call them.
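As a quick sanity check after installation (not in the original article), you can confirm that the device-mapper core and the multipath target driver are available. A minimal sketch; the module names are shown with the underscores lsmod uses:

lsmod | egrep 'dm_mod|dm_multipath'   # are the dm-mod.ko core and dm-multipath.ko target driver loaded?
dmsetup targets                       # 'multipath' should appear among the registered target types
dmsetup ls                            # mapped devices (empty until multipath is configured)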
2.2 Configure iSCSI multi-session access

One iSCSI initiator connects to the same iSCSI target through several sessions, using multiple NICs or iSCSI HBAs for load balancing and failover. This is also called Multiple Sessions per Initiator.

2.2.1 Log out of the current iSCSI session

[root@rac1 ~]# iscsiadm -m node -T iqn.2006-01.com.san -p 192.168.6.1 -u
Logging out of session [sid: 1, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]
Logout of [sid: 1, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]: successful

-u means log out and -l means log in; see man iscsiadm for details.

2.2.2 Create the interface (iface) files

[root@rac1 ~]# iscsiadm -m iface -I iface0 --op=new
New interface iface0 added
[root@rac1 ~]# iscsiadm -m iface -I iface1 --op=new
New interface iface1 added

The interface files are stored in the /var/lib/iscsi/ifaces directory:

[root@rac1 ifaces]# cd /var/lib/iscsi/ifaces/
[root@rac1 ifaces]# ls
iface0  iface1
[root@rac1 ifaces]# cat iface0
# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = iface0
iface.transport_name = tcp
# END RECORD
[root@rac1 ifaces]# cat iface1
# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = iface1
iface.transport_name = tcp
# END RECORD

2.2.3 Configure the ifaces (bind each to a NIC)

[root@rac1 ifaces]# iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iface0 updated.
[root@rac1 ifaces]# iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1
iface1 updated.

2.2.4 Confirm the iface configuration

[root@rac1 ifaces]# iscsiadm -m iface
default tcp,,,,
iser iser,,,,
iface1 tcp,,,eth1,
iface0 tcp,,,eth0,

2.2.5 Discover the iSCSI target through both ifaces

[root@rac1 ifaces]# iscsiadm -m discovery -t st -p 192.168.6.1 -I iface0 -I iface1
192.168.6.1:3260,1 iqn.2006-01.com.san
192.168.6.1:3260,1 iqn.2006-01.com.san

2.2.6 Log in to the target

[root@rac1 ifaces]# iscsiadm -m node -l
Logging in to [iface: iface1, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]
Logging in to [iface: iface0, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]
Login to [iface: iface1, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]: successful
Login to [iface: iface0, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]: successful

Note: deleting a stale iSCSI node record. If a node record was created by a wrong configuration or for any other reason, the system does not remove it automatically; you have to delete it by hand. For example:

[root@rac3 mapper]# iscsiadm -m node
192.168.6.1:3260,1 iqn.2006-01.com.san
192.168.6.1:3260,1 iqn.2006-01.com.san

Two node records are listed above, and the system will not clean them up on its own. The delete command:

iscsiadm -m node -o delete -T iqn.2006-01.com.san -p 192.168.6.1:3260

2.2.7 Check the connection state

[root@rac1 ifaces]# netstat -anp | grep 3260
tcp 0 0 192.168.6.5:63327 192.168.6.1:3260 ESTABLISHED 2370/iscsid
tcp 0 0 192.168.6.6:32380 192.168.6.1:3260 ESTABLISHED 2370/iscsid

2.3 Configure Multipath

The default Multipath configuration file is /etc/multipath.conf. Most entries in it are commented out; you can keep it as a backup and create a new multipath.conf to edit:

[root@rac1 etc]# cp multipath.conf multipath.conf.back

2.3.1 Blacklist filtering

As shipped, multipath blacklists every device (devnode "*"), i.e. nothing is multipathed. We need to remove that entry and change the configuration to something like the following:

devnode_blacklist {
    # devnode "*"
    devnode "hda"
    wwid 3600508e000000000dc7200032e08af0b
}

Here hda, the CD-ROM drive, is excluded from multipath, and the wwid entry excludes the local sda disk.
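A side note on syntax, not from the original article: devnode_blacklist is the legacy section name, and newer device-mapper-multipath releases expect a section called blacklist instead. An equivalent stanza under the newer syntax, assuming the same two local devices should be excluded, would be:

blacklist {
    devnode "hda"
    wwid 3600508e000000000dc7200032e08af0b
}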
You can obtain the wwid of the local disk with the following command:

[root@rac1 ~]# /sbin/scsi_id -g -u -s /block/sda

A Red Hat bug can get in the way here: scsi_id does not return a WWID for /dev/sda with the aacraid driver. See https://bugzilla.redhat.com/show_bug.cgi?id=445696

Note that to obtain the wwid of a device, fdisk -l must be able to see the device first; if it cannot, the wwid cannot be retrieved. In that case, try restarting the iSCSI initiator, disabling and re-enabling the NIC, and logging in to the target again with iscsiadm -m node -l. Once fdisk -l shows the device, scsi_id returns its wwid normally.

[root@rac1 ~]# iscsiadm -m node -l
Logging in to [iface: iface1, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]
Logging in to [iface: iface0, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]
Login to [iface: iface1, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]: successful
Login to [iface: iface0, target: iqn.2006-01.com.san, portal: 192.168.6.1,3260]: successful
[root@rac1 ~]# fdisk -l
...
Disk /dev/sdf: 39.7 GB, 39795556352 bytes
64 heads, 32 sectors/track, 37952 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 39.7 GB, 39795556352 bytes
64 heads, 32 sectors/track, 37952 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

[root@rac1 ~]# /sbin/scsi_id -g -u -s /block/sdf
14f504e46494c450034594d6462472d542545442d6a714841
[root@rac1 ~]# /sbin/scsi_id -g -u -s /block/sdg
14f504e46494c450034594d6462472d542545442d6a714841

About scsi_id: it ships with the udev package, and multipath.conf can configure it as the program used to obtain a SCSI device's identifier (serial number). It is by this identifier that multipath decides that several paths belong to the same device, which is the key to the whole multipath mechanism. scsi_id sends an INQUIRY command with EVPD page 0x80 or page 0x83 to the device through the sg driver to query the SCSI device identifier. Some devices do not support this EVPD INQUIRY, so no multipath device can be generated for them. scsi_id can, however, be replaced by a wrapper that prints a made-up identifier on standard output for devices that cannot provide one. When a multipath device is created, the multipath program calls scsi_id and reads the device identifier from its standard output. If you write such a wrapper, make sure its return value is 0, because multipath checks the exit code to decide whether the identifier was obtained successfully.

2.3.2 Edit the defaults section

Different device-mapper-multipath versions and operating system releases ship different default rules. Taking Red Flag (Hongqi) DC Server 5.0 SP2 for x86_64 as an example, its default path_grouping_policy is failover, i.e. active/standby. HDS arrays support multipath load balancing, while the EMC CX300 supports only failover. user_friendly_names is enabled by default; otherwise the device wwid is used as the persistent name. Modify the defaults, for example:

defaults {
    udev_dir              /dev
    path_grouping_policy  multibus
    failback              immediate
    no_path_retry         fail
    user_friendly_names   yes
}

2.3.3 Configure the multipaths section of multipath.conf

With multi-session access there are now two devices, /dev/sdf and /dev/sdg, which both correspond to the same target; the query above shows that their wwid is identical. We use that wwid to configure the two of them as a single device, appending something like the following to the end of the file:

[root@rac1 ~]# cat /etc/multipath.conf | more
...
multipaths {
    multipath {
        wwid                  14f504e46494c450034594d6462472d542545442d6a714841
        alias                 rac-share
        path_grouping_policy  multibus
        path_checker          readsector0
        path_selector         "round-robin 0"
        failback              manual
        rr_weight             priorities
        no_path_retry         5
    }
}

One target corresponds to one multipath block; if there are several targets, write one multipath block per target.
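An optional check, not from the original article: before restarting the service you can ask multipath for a dry run, so it prints the maps it would create without actually creating them. The -d flag (dry run) and the -v2 verbosity level are assumed to be available in this device-mapper-multipath build:

multipath -d -v2    # print the multipath maps that would be created, without creating them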
2.3.4 Restart the multipathd service and verify the configuration

[root@rac1 dev]# service multipathd restart
Device /dev/sda1 not found
Command failed
Stopping multipathd daemon:   [ OK ]
Starting multipathd daemon:   [ OK ]

Go to the /dev/mapper directory to verify:

[root@rac3 mapper]# ls -lrt /dev/mapper/*
crw------- 1 root root 10, 62 Nov  1 /dev/mapper/control
brw-rw---- 1 root disk 253,  0 Nov  1 /dev/mapper/rac-share

Use the multipath -ll command to view the two active paths of the system; if one of them goes down, the system automatically switches over to the other.

[root@rac3 mapper]# multipath -ll
rac-share (14f504e46494c450034594d6462472d542545442d6a714841) dm-0 OPNFILER,VIRTUAL-DISK
[size=37G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
 \_ 2:0:0:0 sdf 8:80  [active][ready]
 \_ 3:0:0:0 sdg 8:96  [active][ready]

2.3.5 Start the multipathd service at boot

[root@rac3 mapper]# chkconfig multipathd on

2.3.6 Partition the device or create a PV

One problem remains: the device has only just been created and cannot be mounted as is. We either partition it or turn it into a PV. Note: after partitioning with fdisk and writing the partition table, you must refresh the multipath mapping table so that the device node corresponding to the new partition is created.

(1) Partition

[root@rac3 mapper]# fdisk /dev/mapper/rac-share
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 4838.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4838, default 1): Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4838, default 4838): Using default value 4838

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@rac3 mapper]# multipath -F     # flush (clear) the multipath device maps
[root@rac3 mapper]# multipath -v3    # re-scan and rebuild them
[root@rac3 mapper]# fdisk -l
......
Disk /dev/sdf: 39.7 GB, 39795556352 bytes
255 heads, 63 sectors/track, 4838 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot  Start   End    Blocks     Id  System
/dev/sdf1           1   4838   38861203+  83  Linux

Disk /dev/sdg: 39.7 GB, 39795556352 bytes
255 heads, 63 sectors/track, 4838 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot  Start   End    Blocks     Id  System
/dev/sdg1           1   4838   38861203+  83  Linux

Disk /dev/dm-0: 39.7 GB, 39795556352 bytes
255 heads, 63 sectors/track, 4838 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot  Start   End    Blocks     Id  System
/dev/dm-0p1         1   4838   38861203+  83  Linux

After the partition is created, fdisk -l shows the partition information above, and the disk can then be mounted and used.
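An aside that is not in the original article: on many setups the partition mappings for a multipathed device are created with kpartx, which ships alongside device-mapper-multipath, rather than by flushing and rebuilding the maps. A minimal sketch, assuming the rac-share alias used above:

kpartx -l /dev/mapper/rac-share   # list the partition mappings that would be created
kpartx -a /dev/mapper/rac-share   # add them, e.g. /dev/mapper/rac-sharep1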
(2) Create a PV and configure LVM. The steps are:

1. Create and initialize Physical Volumes with pvcreate (the PV stage);
2. Add the Physical Volumes to a Volume Group with vgcreate (the VG stage);
3. Create Logical Volumes on the Volume Group with lvcreate, dividing the VG into one or more LVs (the LV stage).

Common commands:

# pvcreate /dev/md0                  # create a PV
# pvscan
# vgcreate LVM1 /dev/md0             # create a VG
# vgdisplay LVM1
# lvcreate -L 1.5TB -n data1 LVM1    # create an LV
# lvcreate -L 325GB -n data2 LVM1    # create another LV
# lvscan                             # view LV information
# pvscan                             # view PV information again
# vgdisplay LVM1                     # view VG information again

Mount commands:

# mount /dev/LVM1/data1 /data1
# mount /dev/LVM1/data2 /data2

To mount automatically at boot, edit /etc/fstab:

/dev/LVM1/data1  /data1  ext3  defaults  2 2
/dev/LVM1/data2  /data2  ext3  defaults  2 2

Example:

[root@rac3 mapper]# pvcreate /dev/mapper/rac-share
  Physical volume "/dev/mapper/rac-share" successfully created
[root@rac3 mapper]# vgcreate vg0 /dev/mapper/rac-share
  Volume group "vg0" successfully created
[root@rac3 mapper]# lvcreate -L 10M -n lv1 vg0
  Rounding up size to full physical extent 12.00 MB
  Logical volume "lv1" created
[root@rac3 mapper]# lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg0/lv1
  VG Name                vg0
  LV UUID                XkbDyS-btpZ-fIFA-MvBH-d4kl-hibU-RhuKu1
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                12.00 MB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
[root@rac3 mapper]# mkfs.ext3 /dev/mapper/vg0-lv1    # format the LV
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
3072 inodes, 12288 blocks
614 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=12582912
2 block groups
8192 blocks per group, 8192 fragments per group
1536 inodes per group
Superblock backups stored on blocks: 8193

Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
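The article stops after formatting. Mounting the new logical volume follows the same pattern as the mount commands above; a minimal sketch, assuming a hypothetical mount point /data1:

# mkdir -p /data1
# mount /dev/mapper/vg0-lv1 /data1
# To mount at boot, add a line like the following to /etc/fstab; iSCSI-backed
# volumes usually also want the _netdev option so mounting waits for the network:
/dev/mapper/vg0-lv1  /data1  ext3  defaults,_netdev  2 2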
