In this blog post I want to discuss the principles and implementation of SAN. Do you remember the back-end storage technology used when I covered HA and LB clusters in the previous post? It was NAS (Network Attached Storage), which provides file-level sharing and is comparatively slow. If an enterprise has high requirements for storage speed, NAS will not meet them, which leads us to SAN (Storage Area Network). A SAN provides block-level sharing: the front-end server can use the back-end storage medium directly as a local disk, and with Fibre Channel technology the speed is very high, which is why IDCs generally use it. Typical SAN devices include disk arrays, tape libraries, and optical jukeboxes.
SAN Implementation: Network Types
In most physical storage, servers and disk drives communicate over a SCSI interface. Because the SCSI bus topology (a system bus) is not suited to a network environment, a SAN does not use SCSI cabling as the underlying physical connection medium. Instead, it uses other lower-layer communication protocols as a mapping layer to carry SCSI across the network:
1. Fibre Channel Protocol (FCP), the most common approach, which maps SCSI over Fibre Channel (FC-SAN).
This method is the most efficient, but also the most expensive: it relies on dedicated hardware such as Fibre Channel switches. IDCs generally use this technology.
2. iSCSI, which maps SCSI over TCP/IP (IP-SAN).
This is much less efficient, but also far cheaper; it is what we will implement today.
3. HyperSCSI, which maps SCSI over Ethernet.
4. ATA over Ethernet, which maps ATA over Ethernet.
5. FICON, which runs over Fibre Channel and is common in mainframe environments.
6. Fibre Channel over Ethernet (FCoE), the FC protocol over Ethernet.
7. iSCSI Extensions for RDMA (iSER), iSCSI over InfiniBand (IB).
8. iFCP or SANoIP, the Fibre Channel Protocol (FCP) over IP networks.
Implementing an IP-SAN
1. Network Topology
[Figure: network topology diagram]
2. Specific implementation
The iSCSI technology is used here, so let me briefly introduce it. iSCSI uses TCP ports 860 and 3260 as its communication channels. In essence, the iSCSI protocol lets two hosts negotiate and then exchange SCSI commands over an IP network. In this way iSCSI emulates a high-performance local storage bus over an ordinary (even wide-area) network, creating a storage area network (SAN). Unlike some SAN protocols, iSCSI requires no dedicated cabling; it runs over existing switching and IP infrastructure.
In use, the front-end server acts as the initiator and the back-end storage acts as the target. The front-end server discovers the back-end storage, logs in to it, and can then use it as if it were a local disk.
Software requirements:
The OS here is el5 (RHEL/CentOS 5.4), whose installation CD carries the required packages.
1. Back-end storage:
scsi-target-utils-0.0-5.20080917snap.el5.i386.rpm // makes the back-end storage server act as the target
perl-Config-General-2.40-1.el5.noarch.rpm // dependency of scsi-target-utils
2. Front-end server:
iscsi-initiator-utils-6.2.0.871-0.10.el5.i386.rpm // makes the front-end server act as the initiator
Back-End Storage Configuration:
Step 1. Install the software packages and start the service
# rpm -ivh perl-Config-General-2.40-1.el5.noarch.rpm
# rpm -ivh scsi-target-utils-0.0-5.20080917snap.el5.i386.rpm
After the installation is complete, check which files are generated by the software package.
# rpm -ql scsi-target-utils
/etc/rc.d/init.d/tgtd // service script
/etc/sysconfig/tgtd
/etc/tgt/targets.conf // configuration file
/usr/sbin/tgt-admin
/usr/sbin/tgt-setup-lun
/usr/sbin/tgtadm // the most commonly used tool; it exposes storage as a target resource so the front-end server can find it
/usr/sbin/tgtd
/usr/share/doc/scsi-target-utils-0.0
/usr/share/doc/scsi-target-utils-0.0/README
/usr/share/doc/scsi-target-utils-0.0/README.iscsi
/usr/share/doc/scsi-target-utils-0.0/README.iser
/usr/share/doc/scsi-target-utils-0.0/README.lu_configuration
/usr/share/doc/scsi-target-utils-0.0/README.mmc
/usr/share/man/man8/tgt-admin.8.gz
/usr/share/man/man8/tgt-setup-lun.8.gz
/usr/share/man/man8/tgtadm.8.gz
Start the service:
# service tgtd start
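Before moving on, it is worth confirming that tgtd is actually listening on the standard iSCSI port (3260), and optionally making it start at boot:
# netstat -tnlp | grep 3260 // tgtd should show up bound to port 3260
# chkconfig tgtd on // optional: start the target service at boot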
Step 2. Create a target resource
tgtadm syntax:
--lld [driver] --op new --mode target --tid=[id] --targetname [name] // add a target
--lld [driver] --op delete --mode target --tid=[id] // delete a target
--lld [driver] --op show --mode target // view targets
--lld [driver] --op new --mode=logicalunit --tid=[id] --lun=[lun] --backing-store [path] // bind a disk to a target
--lld [driver] --op bind --mode=target --tid=[id] --initiator-address=[address] // restrict which IPs may access the target
# tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2013-06.com.back:disk
Here /dev/sdb1 on the storage server is used as the target's backing disk.
# tgtadm --lld iscsi --op new --mode=logicalunit --tid=1 --lun=1 --backing-store /dev/sdb1
# tgtadm --lld iscsi --op show --mode target // view the resource information
Target 1: iqn.2013-06.com.back:disk
    System information:
        Driver: iscsi
        State: ready // the target is ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: deadbeaf1:0
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store: No backing store
        LUN: 1 // the disk we just added
            Type: disk
            SCSI ID: deadbeaf1:1
            SCSI SN: beaf11
            Size: 10734 MB
            Online: Yes
            Removable media: No
            Backing store: /dev/sdb1
    Account information:
    ACL information:
Step 3. The target is created; now security needs to be considered. Here access is restricted by IP address.
# tgtadm --lld iscsi --op bind --mode=target --tid=1 --initiator-address=192.168.30.0/24 // allow only the 192.168.30.0/24 segment
Check the following information again.
[Screenshot: the target's ACL information now lists 192.168.30.0/24]
Of course, the configuration above can also be written into the configuration file so that it persists for future use.
Add the following to /etc/tgt/targets.conf:
<target iqn.2013-06.com.back:disk>
    # List of files to export as LUNs
    backing-store /dev/sdb1
    # Authentication:
    # if no "incominguser" is specified, it is not used
    # incominguser backup secretpass12
    # Access control:
    # defaults to ALL if no "initiator-address" is specified
    initiator-address 192.168.30.0/24
</target>
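Note that tgtd only reads targets.conf at startup, so after editing the file the configuration must be reapplied. A minimal sketch, with the caveat that restarting drops any active initiator sessions:
# service tgtd restart // caution: disconnects logged-in initiators
# tgt-admin --update ALL // alternative: reprocess the config without a full restart (assumption: this tgt-admin build supports --update; check tgt-admin --help)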
Step 4. Mutual authentication (CHAP login authentication)
Edit /etc/tgt/targets.conf again: find the # Authentication: section and add the following lines
incominguser web1totarget 123456 // username and password web1 uses to log in
incominguser web2totarget 123456 // username and password web2 uses to log in
outgoinguser targettoweb 654321 // credentials the target uses to authenticate itself to the web servers
Query the target resource status again.
[Screenshot: target status now showing the account information]
Front-End Server Configuration
Take web1 as an example.
Step 1. Install the software package and start the service
# yum install iscsi-initiator-utils
Start the service:
# service iscsi start
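Optionally, make the initiator service start at boot as well (a standard SysV setup is assumed here):
# chkconfig iscsi on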
Step 2. Initialize the initiator name and configure CHAP authentication
# vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2013-06.com.web1
CHAP authentication configuration:
# vim /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = web1totarget
node.session.auth.password = 123456
node.session.auth.username_in = targettoweb
node.session.auth.password_in = 654321
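Settings in iscsid.conf only apply to sessions created after they are read, so restart the initiator service before discovering the target:
# service iscsi restart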
Step 3. Initiate a connection (discovery)
# iscsiadm --mode discovery --type sendtargets --portal 192.168.30.3
Step 4. Log in
# iscsiadm --mode node --targetname iqn.2013-06.com.back:disk --portal 192.168.30.3 --login
Logging in to [iface: default, target: iqn.2013-06.com.back:disk, portal: 192.168.30.3,3260]
Login to [iface: default, target: iqn.2013-06.com.back:disk, portal: 192.168.30.3,3260]: successful
If you now view the information on the back-end storage, you will see:
# tgtadm --lld iscsi --op show --mode=target
Target 1: iqn.2013-06.com.back:disk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.2013-06.com.web2
            Connection: 0
                IP Address: 192.168.30.2
        I_T nexus: 2
            Initiator: iqn.2013-06.com.web1
            Connection: 0
                IP Address: 192.168.30.1
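For completeness: if a front-end server ever needs to disconnect, unmount any file systems on the iSCSI disk first, then log out of the session (the mirror image of the login command):
# iscsiadm --mode node --targetname iqn.2013-06.com.back:disk --portal 192.168.30.3 --logout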
Step 5. View the disk list on the front-end server
You will find that a new disk has appeared.
[Screenshot: the disk list on web1 showing the new iSCSI disk]
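The same check can be done from the command line; on this setup the new disk would typically appear as /dev/sdb, though the exact name depends on what disks the server already has:
# fdisk -l // the iSCSI LUN shows up as one more SCSI disk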
Step 6. Format the partition (with the ext3 file system) and mount it
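If the new disk has not been partitioned and formatted yet, a minimal sketch (assuming the iSCSI disk appeared as /dev/sdb; adjust to your device name):
# fdisk /dev/sdb // create a partition sdb1 spanning the disk
# mkfs -t ext3 /dev/sdb1
# mkdir -p /mnt/back // create the mount point if it does not exist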
Mount it, then switch into the mount point for file operations:
# mount /dev/sdb1 /mnt/back/
# cd /mnt/back/
# touch f1
# ll
total 16
-rw-r--r-- 1 root root     0 06-07 21:39 f1
drwx------ 2 root root 16384 06-07 21:37 lost+found
However, there is a problem with this approach: ext3 has no cluster-wide lock or notification mechanism, so when both web servers mount the volume at the same time, files created on one node are not visible on the other, and concurrent writes are unsafe. To solve this we need a cluster file system; here we use Oracle's OCFS2.
Step 7. Implementing the OCFS2 file system
Software requirements:
ocfs2-2.6.18-164.el5-1.4.7-1.el5.i686.rpm
ocfs2-tools-1.4.4-1.el5.i386.rpm
ocfs2console-1.4.4-1.el5.i386.rpm
The configuration below is performed on the web1 server as the example.
Step 1: Install the software packages:
# rpm -ivh ocfs2* // the packages must be installed on the web2 server as well
Step 2: Modify the hosts file and the hostname
# vim /etc/hosts
192.168.30.1 reserver1.com
192.168.30.2 reserver2.com
# vim /etc/sysconfig/network
HOSTNAME=reserver1.com // the web2 hostname is reserver2.com
The hosts files of the two web servers must be identical, so use scp to copy web1's hosts file to web2:
# scp /etc/hosts 192.168.30.2:/etc/
Step 3: Configure the cluster nodes
# o2cb_ctl -C -n ocfs2 -t cluster -i // create a cluster named ocfs2
# o2cb_ctl -C -n reserver1.com -t node -a number=0 -a ip_address=192.168.30.1 -a ip_port=7777 -a cluster=ocfs2 // add node web1 to the cluster
# o2cb_ctl -C -n reserver2.com -t node -a number=1 -a ip_address=192.168.30.2 -a ip_port=7777 -a cluster=ocfs2 // add node web2 to the cluster
This generates the cluster configuration file cluster.conf in the /etc/ocfs2/ directory; copy it to web2 with scp as well.
# scp /etc/ocfs2/cluster.conf 192.168.30.2:/etc/ocfs2 // note: the /etc/ocfs2 directory on web2 must be created in advance
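For reference, the generated /etc/ocfs2/cluster.conf should look roughly like the sketch below (reconstructed from the o2cb_ctl commands above; the o2cb tools are strict about this file's indentation, so copying the generated file is safer than writing it by hand):
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.30.1
        number = 0
        name = reserver1.com
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.30.2
        number = 1
        name = reserver2.com
        cluster = ocfs2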
Step 4: Start the cluster service
Remember to start it on both servers.
# service o2cb enable // enable and load the cluster stack; the OK lines below confirm the configuration is valid
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
# service o2cb start // start the service
Cluster ocfs2 already online
# service o2cb status // view the service status (web1)
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active
Step 5: Format and mount the file system
Format the file system first; the -N option specifies the maximum number of nodes that may mount it concurrently.
# mkfs -t ocfs2 -N 2 /dev/sdb1 // this only needs to be done on one server
Mount it:
# mount /dev/sdb1 /mnt/back // both servers must mount it
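To survive a reboot, both servers can mount the volume from /etc/fstab. A sketch, assuming the same device name: the _netdev option delays the mount until networking (and thus the iSCSI session) is up, and the o2cb and ocfs2 init scripts must start at boot:
/dev/sdb1  /mnt/back  ocfs2  _netdev,defaults  0 0
# chkconfig o2cb on
# chkconfig ocfs2 on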
Check the cluster status on both sides.
# service o2cb status // web1 status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active // the heartbeat is now active
# service o2cb status // web2 status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Active // the heartbeat is now active
Step 6: Verify that the lock and notification mechanisms work
Create a file on the web1 server to see if it can be seen on web2.
# cd /mnt/back/
# touch web1
Query on web2
[Screenshot: listing /mnt/back on web2 shows the file web1]
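Equivalently, from a shell on web2:
# ls /mnt/back/ // the file web1 created on the other node appears immediately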
Now edit the file web1 on web1 and web2 at the same time.
[root@reserver1 back]# vim web1
[root@reserver2 back]# vim web1
[Screenshot: the second vim session warns that web1 is already being edited, demonstrating the lock mechanism]
Well, that covers everything I set out to do. If you found it useful, try it yourself.
This article comes from the "post-90s" blog; please be sure to keep this source: http://wnqcmq.blog.51cto.com/5200614/1219054