Use dual-NIC bonding and multipathing to prevent single points of failure (SPOF) and achieve high availability

Source: Internet
Author: User

Dual-NIC bonding

Temporary setup:

```shell
modprobe bonding miimon=100 mode=0   # monitor the link every 100 ms; mode 0 = load balancing, mode 1 = fault tolerance
ifconfig bond0 172.17.125.50 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
cat /proc/net/bonding/bond0          # verify the bond status
```

Permanent setup, in /etc/modprobe.conf:

```shell
alias bond0 bonding
options bond0 mode=1 miimon=100 use_carrier=0 primary=eth0
```

Here primary specifies which NIC is used first (eth0), use_carrier controls how the state of the link is determined, and miimon sets the link-monitoring interval in milliseconds.

Create ifcfg-bond0:

```shell
DEVICE=bond0
IPADDR=x.x.x.x
NETMASK=255.255.255.0
GATEWAY=x.x.x.x
ONBOOT=yes
BOOTPROTO=static
```

Configure ifcfg-eth0 and ifcfg-eth1:

```shell
DEVICE=eth0        # eth1 in ifcfg-eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=static
```

Setting up iSCSI

Because multipathing is used for the storage, we first need to set up the iSCSI service to share out the LUN.

Target side:

```shell
yum install -y scsi-target-utils
chkconfig tgtd on
```

Configuration example:

```shell
<target iqn.2013-3.com.example.com.cluster:iscsi>
    backing-store /dev/vol0/iscsi     # path of the shared storage device
    initiator-address x.x.x.x         # restrict which addresses may connect
    write-cache off                   # disable the write cache
    incominguser username password    # client authentication; the password must be longer than 12 characters
</target>
```

To set up server-push (outgoing) authentication, add to /etc/rc.local:

```shell
tgtadm --lld iscsi --op new --mode account --user redhat_in --password redhat123_in
tgtadm --lld iscsi --op bind --mode account --tid 1 --user redhat_in --outgoing
```

View the shared LUN status and start the service:

```shell
tgt-admin -s
service tgtd start
```

Initiator side:

```shell
yum install -y iscsi-initiator-utils
iscsiadm -m discovery -t [st|sendtargets] -p x.x.x.x   # discover targets
iscsiadm -m node [-L all | -T iqn.xxx]                 # log in to all (or one) discovered targets
iscsiadm -m node [-U all | -T iqn.xxx]                 # log out of all (or one) targets
iscsiadm -m node -o delete [-T iqn.xxx]                # delete the discovery records
iscsiadm -m session                                    # view session status
```

Link optimization (MTU)

To increase the transmission rate, raise the default MTU of 1500 to the maximum of 9000. In ifcfg-eth0, set:

```shell
MTU=9000
```

Note: every switch and router along the link must also be raised to 9000. If even one NIC is left unconfigured, the link will transmit based on the smallest MTU on the path.

Flow control (when traffic is heavy, the receiver asks the sender to pause for a while) is unnecessary here: iSCSI runs over TCP, which retransmits on its own, so flow control can be disabled. Check and disable it with ethtool:

```shell
ethtool -a eth0                               # check whether flow control is enabled
ethtool -A eth0 autoneg off rx off tx off     # disable flow control
```

CHAP authentication

Set the authentication username and password in /etc/iscsi/iscsid.conf:

```shell
# enable CHAP authentication
node.session.auth.authmethod = CHAP
# username and password for client authentication
node.session.auth.username = username
node.session.auth.password = password
# username and password for server-push (outgoing) authentication
node.session.auth.username_in = username
node.session.auth.password_in = password
```

Multipath

Install the device-mapper and device-mapper-multipath packages, load the multipath and round-robin path-selection modules, and enable the daemon:

```shell
yum install -y device-mapper
yum install -y device-mapper-multipath
modprobe dm_multipath
modprobe dm_round_robin
chkconfig multipathd on
```

Copy the sample configuration file:

```shell
cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf
```

Configure /etc/multipath.conf. By default, multipath adds all devices to the blacklist (devnode "*"), so we need to cancel this setting and change the blacklist to something similar to the following:

```shell
devnode_blacklist {
    # devnode "*"
    devnode "hda"
    wwid 3600508e000000000dc7200032e08af0b
}
```

Here hda, that is, the optical drive, is excluded, and the local sda device is also excluded by its wwid.
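As a quick health check of the bond, the kernel's status file at /proc/net/bonding/bond0 can be parsed to list each slave and its link state. The following is only a sketch; report_slaves is an illustrative helper name, not a standard command, and it is written to accept a file path so it can be exercised against a saved copy of the status file.

```shell
# Sketch: list each slave interface in a bond and its MII link status
# by parsing the kernel's bonding status file.
# report_slaves is a hypothetical helper; on a live system pass it
# /proc/net/bonding/bond0.
report_slaves() {
  awk '
    /^Slave Interface:/ { slave = $3 }
    /^MII Status:/ && slave != "" { print slave ": " $3; slave = "" }
  ' "$1"
}

# Example (live system): report_slaves /proc/net/bonding/bond0
```

The bond-level "MII Status:" line near the top of the file is skipped because it appears before any "Slave Interface:" line, so only per-slave states are printed.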
The wwid can be obtained with the following command:

```shell
scsi_id -g -u -s /block/sda                           # old syntax
scsi_id --page=0x83 --whitelisted --device=/dev/sda   # new syntax
# -> 3600508e000000000dc7200032e08af0b
```

Adjust the defaults and multipaths sections of multipath.conf as needed, then discover and log in to the targets and build the multipath maps:

```shell
iscsiadm -m discovery -t st -p x.x.x.x
iscsiadm -m node -L all
multipath -v2
```

Multiple device nodes pointing to the same link are then generated under /dev:

/dev/mapper/mpathn
/dev/mpath/mpathn
/dev/dm-n

Their origins are completely different. /dev/mapper/mpathn is the multipath device virtualized by multipath, and it is the one we should use. /dev/mpath/mpathn is created by the udev device manager; it actually points to the underlying dm-n device, exists only for convenience, and cannot be mounted. /dev/dm-n is used internally by the software and should not be used outside it. Simply put, use the device nodes under /dev/mapper; you can partition them with fdisk or create a PV on them.

Before partitioning and creating LVM volumes, I noticed that all the devices shown by iostat were dm-n, so at first I operated on dm-n directly. That causes a problem: partitions cannot be created. Even operating on the /dev/mapper/mpathn device alone does not solve it. Note that after partitioning with fdisk and saving, you must refresh the multipath mapping table so that the device node corresponding to the new partition is created.
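The per-map path status can be inspected with multipath -ll. As a convenience, saved output can be summarized to spot a dropped path; the sketch below assumes the default mpathn naming and the common "h:c:t:l sdX major:minor state" path-line layout, both of which vary between multipath versions, and count_paths is an illustrative helper name.

```shell
# Sketch: summarize saved `multipath -ll` output, printing each map name
# with the number of sdX paths beneath it. A map suddenly reporting
# fewer paths than expected indicates a failed path.
# count_paths is a hypothetical helper; the exact multipath -ll output
# format differs between versions.
count_paths() {
  awk '
    /^mpath/ { if (map != "") print map ": " n " paths"; map = $1; n = 0 }
    / sd[a-z]+ / { n++ }
    END { if (map != "") print map ": " n " paths" }
  ' "$1"
}

# Example (live system): multipath -ll > /tmp/mp.out; count_paths /tmp/mp.out
```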
For example:

```shell
fdisk -l /dev/mapper/mpath0

Disk /dev/mapper/mpath0: 214.7 GB, 214748364800 bytes
255 heads, 63 sectors/track, 26108 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

              Device Boot   Start     End      Blocks  Id  System
/dev/mapper/mpath0p1            1   26108  209712478+  83  Linux

multipath -F
multipath -v2
ll /dev/mapper/mpath0p1
brw-rw---- 1 root disk 253, 2 ... /dev/mapper/mpath0p1
```

Now /dev/mapper/mpath0p1 exists, and mpathn or its partitions can be used as a PV:

```shell
pvcreate /dev/mapper/mpath0p1
vgcreate test /dev/mapper/mpath0p1
lvcreate -L 1g -n lv1 test
lvdisplay
mkfs.ext3 /dev/test/lv1
```

※ Note: according to reports online, some multipath versions have compatibility problems with LVM. The symptom is that after an LVM volume is created on a device-mapper device and the machine is rebooted, the LVM volume still exists but the devices under /dev/mapper are lost. To prevent this, it is recommended to modify the LVM configuration file /etc/lvm/lvm.conf and add:

```shell
types = ["device-mapper", 1]
```

The simplest way to test is to use dd to write to the disk and then observe the traffic and status of each channel with iostat, to determine whether failover or load balancing is working properly:

```shell
dd if=/dev/zero of=/dev/mapper/mpath0
iostat -k 2
```

In addition, if the cluster environment consists of multiple servers, you need to bind the wwids to keep the order of the mpathn devices recognized by each server consistent. For details, see "Custom device names" below.

Custom device names

By default, the mpathn device names are generated according to the definitions in the defaults section of multipath.conf, but we can also customize them. The main reason to do so is that when multiple servers connect to the same storage in the same way, the mpathn order recognized by each server may differ. To form a cluster, the device names recognized by each machine must be fixed in a consistent order, which is done by binding the wwids.
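As an aside, the geometry in an fdisk listing like the one above can be sanity-checked arithmetically: the Units line is heads × sectors/track × 512-byte sectors, and multiplying by the cylinder count approximates the reported disk size (cylinders rarely tile the disk exactly, so the product is slightly below the byte count fdisk reports).

```shell
# Sanity-check the fdisk geometry shown above: 255 heads x 63 sectors
# per track x 512-byte sectors gives the bytes per cylinder, and
# multiplying by the 26108 cylinders approximates the disk capacity.
heads=255 sectors=63 sector_bytes=512 cylinders=26108

cyl_bytes=$((heads * sectors * sector_bytes))
total_bytes=$((cyl_bytes * cylinders))

echo "bytes per cylinder: $cyl_bytes"    # 8225280, matching the Units line
echo "approximate total:  $total_bytes bytes"
```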
Modify the configuration file and add:

```shell
multipaths {
    multipath {
        wwid  360060e80058e980000008e9800000007
        alias mpath0
    }
}
```

Refresh the multipath mapping table, and mpath0 will then correspond one-to-one with that wwid. Besides alias, you can also define other properties for the device; refer to the examples in multipath.conf for the style. Distribute this configuration to the other machines in the cluster, and the order of the mpathn devices recognized by each machine will then be the same.

※ Note:
1. After binding, you need to regenerate the mapping table.
2. Once wwids are bound, devices that are not bound become unavailable: you cannot see them with multipath -ll, but they are still visible in /var/lib/multipath/bindings.
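One way to keep aliases consistent across the cluster is to generate the multipaths section directly from one node's /var/lib/multipath/bindings file (lines of "alias wwid") and copy the result to the other nodes. This is only a sketch; bindings_to_multipaths is an illustrative helper name, not a standard tool.

```shell
# Sketch: turn "alias wwid" lines (the format of
# /var/lib/multipath/bindings) into a multipaths section for
# multipath.conf, so the same aliases can be distributed to every node.
# bindings_to_multipaths is a hypothetical helper.
bindings_to_multipaths() {
  awk '
    BEGIN { print "multipaths {" }
    /^#/ || NF < 2 { next }       # skip comments and malformed lines
    {
      print "    multipath {"
      print "        wwid  " $2
      print "        alias " $1
      print "    }"
    }
    END { print "}" }
  ' "$1"
}

# Example (live system): bindings_to_multipaths /var/lib/multipath/bindings
```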
