Some knowledge about storage
Based on the transmission protocol they use, network cards can be divided into three kinds: Ethernet cards, FC cards, and iSCSI cards.
- Ethernet card: formally an Ethernet adapter. It carries the IP protocol and generally connects to an Ethernet switch over fiber-optic cable or twisted pair. Its interfaces are divided into optical and electrical ports. Optical ports transmit over fiber-optic cable, with interface modules that are generally SFP (2 Gb/s) or GBIC (1 Gb/s) and connectors of the SC, ST, or LC type. The most common electrical port today is RJ45, used with twisted pair; coaxial-cable interfaces also exist but are now rarely used.
- FC card: generally also called a fiber-optic network card, formally a Fibre Channel HBA (Host Bus Adapter). It carries the Fibre Channel protocol and typically connects to a Fibre Channel switch over fiber-optic cable. Its interfaces are divided into optical and electrical ports. Optical ports transmit over fiber-optic cable, with interface modules that are generally SFP (2 Gb/s) or GBIC (1 Gb/s) and connectors of the SC or LC type; electrical ports usually use DB9 or HSSDC connectors. "Fiber-optic network card" usually refers to an FC HBA: it is installed in the server and connects to external storage through a Fibre Channel switch. An Ethernet card with an optical port is instead called a "fiber Ethernet card"; it is also installed in the server, but it connects to an Ethernet switch with optical ports.
- iSCSI card: Internet Small Computer System Interface, formally an iSCSI HBA. It carries the iSCSI protocol, and its interface types are the same as those of an Ethernet card. iSCSI (Internet SCSI) is a command set that defines how the SCSI protocol is transported over a TCP/IP network. It extends the initiator and target defined by SCSI from the original SCSI bus onto the Internet, removing the distance limit imposed by the SCSI specification.
FC SAN and IP SAN are currently the two popular SAN storage solutions:
- When a SAN device is attached to a system it is presented as one or more target IDs, and its unit of logical allocation is the LUN (Logical Unit Number).
- IP SAN is also known as iSCSI (Internet Small Computer System Interface).
The core of iSCSI technology is transporting the SCSI protocol over a TCP/IP network: SCSI packets are encapsulated in iSCSI and TCP/IP packets, so that SCSI commands and data can be carried over an ordinary Ethernet network.
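As a concrete illustration of the initiator/target model described above, the following sketch uses the open-iscsi command-line tool on the initiator side; the target IP address and IQN below are made-up placeholders, not values from this article:

iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260   # ask the target portal (default iSCSI port 3260) which targets it exports
iscsiadm -m node -T iqn.2006-01.com.example:storage.lun1 -p 192.168.1.100:3260 --login   # log in to the target over plain TCP/IP
fdisk -l   # the exported LUN now appears as an ordinary /dev/sd* block device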
Device Mapper Multipath (DM-Multipath) and multipath devices
In a SAN (Storage Area Network) there are multiple I/O paths between a server node and the storage array: for example, the host may connect through one or more HBA cards to one or more FC switches, which in turn connect to the two ports of the disk array's controllers. With two HBAs and two controller host ports there are 2 × 2 = 4 paths, so the same LUN appears four times. DM-Multipath aggregates the multiple I/O paths between the server and the storage controller into a single device.
DM-Multipath features:
- Redundancy: DM-Multipath can provide failover in active/passive mode. In active/passive mode only half of the paths carry I/O at any time; if any component of the active path (cable, switch, or controller) fails, DM-Multipath switches over to the other path.
- High performance: DM-Multipath can also be configured in active/active mode, in which I/O is distributed round-robin across all paths. With suitable configuration, DM-Multipath can also detect the load on each path and balance it dynamically (a configuration sketch follows this list).
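As a rough sketch of how these two modes map onto the multipath configuration: path_grouping_policy is a standard multipath.conf setting, but which value is appropriate depends on the array, so the excerpt below is illustrative rather than taken from this article.

# excerpt from /etc/multipath.conf
defaults {
    path_grouping_policy failover   # active/passive: only one path group carries I/O, the rest stand by
    # path_grouping_policy multibus # active/active: all paths in one group, I/O spread round-robin across them
}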
I/O path: a physical connection through the SAN consisting of cables, switches, and controllers.
DM-Multipath creates a new device that aggregates these I/O paths. Without DM-Multipath, a LUN on the disk array is mapped to the server through each controller host port and is recognized by the operating system as a separate device for every path, so the same LUN shows up several times. DM-Multipath solves this by managing the I/O paths logically: it creates a single multipath device on top of the underlying physical devices, so that the LUN mapped from the controller host ports is recognized by the operating system as one multipath device, and the individual links can be organized and managed through it.
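To see this on a live system, one can compare the raw SCSI view with the aggregated view (a sketch; the exact output depends entirely on the array and the number of paths):

fdisk -l        # without DM-Multipath: the same LUN shows up once per path, e.g. /dev/sda through /dev/sdd
multipath -ll   # with DM-Multipath configured: one multipath device, with the sd* paths grouped beneath it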
Each multipath device has a unique World Wide Identifier (WWID), and by default the multipath device is named after its WWID. By setting the user_friendly_names option in the multipath.conf file, the multipath device can instead be given the alias mpath[n]. For example, in the following environment a server with two HBAs is attached to a disk array controller with two host ports through a single unzoned FC switch, and the array exposes only one LUN; the operating system sees four devices: /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. Once multipath.conf is configured, DM-Multipath creates a single multipath device with that WWID, and its device files can be found in three different directories:
- /dev/mapper/mpath[n]: use these names to manage multipath devices, for example when creating logical volumes (LVM) or file systems.
- /dev/mpath/mpath[n]: these exist only so that all multipath devices can be viewed in one directory; do not use them to create logical volumes or file systems.
- /dev/dm-[n]: for internal use by the system only; never operate on these files directly.
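A minimal multipath.conf fragment for the naming behaviour described above; only user_friendly_names comes from this article, everything else in the file is left at its defaults:

# excerpt from /etc/multipath.conf
defaults {
    user_friendly_names yes   # name the device mpath0, mpath1, ... instead of its WWID
}

After reloading the configuration (for example by restarting multipathd), the device appears under /dev/mapper with its mpath[n] alias, and that is the name to use for LVM or file systems.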
Installing the multipath tools on SuSE
Installation of the multipath tools. The Linux host is connected to the storage over FC, and fdisk -l shows four disks because there are four paths. To manage them with multipath software there are two options: 1. device-mapper, which ships with the SuSE system and is already installed; 2. multipath-tools.
zypper se multipath   # search for the installable package, then install it online
linux-0k5g:~ # multipath -l
Jul 12 20:57:07 | DM multipath kernel driver not loaded
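The online installation mentioned above is performed with zypper's install subcommand; the original note skips this step, so the command below is a sketch using the package name found by the search:

zypper in multipath-tools   # installs the multipath userland tools; device-mapper is already part of the base system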
The SuSE system does not enable the multipath services by default; the workaround is documented at http://www.novell.com/support/kb/doc.php?id=3003090:
- Run
chkconfig boot.multipath on
- Run
chkconfig multipathd on
- Reboot the server, or start the services manually:
- To manually start boot.multipath, run
/etc/init.d/boot.multipath start
- To manually start multipathd, run
/etc/init.d/multipathd start
chkconfig boot.multipath on
chkconfig multipathd on
/etc/init.d/boot.multipath start
Creating multipath targets          done
/etc/init.d/multipathd start
Starting multipathd                 done
This solves the problem.
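With both services enabled and started, the check that failed earlier should now succeed (a sketch; the device names depend on the array and the configuration):

multipath -l        # the "DM multipath kernel driver not loaded" error is gone and the path groups are listed
ls /dev/mapper/     # the aggregated multipath device appears here and is the one to use instead of the individual /dev/sd* paths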
Building virtual storage with Openfiler and configuring the server side: http://blog.csdn.net/tianlesoftware/article/details/5973222
Installing Oracle 11gR2 RAC under VMware Linux