Solving the "mkfs: /dev/sdXX is apparently in use by the system; will not make a filesystem here!" error

Source: Internet
Author: User


A 500 GB LUN is carved out on the storage array and mapped to the Linux hosts; the environment consists of 2 nodes.

1. Test One: Direct Mount

After partitioning, the fdisk output is as follows:

[root@rac1 u01]# fdisk -l
......
Disk /dev/sdk: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1               1       65270   524281243+  83  Linux
......
[root@rac1 u01]#

But creating a filesystem on it fails:

[root@rac1 u01]# mkfs -t ext3 /dev/sdk1
mke2fs 1.39 (29-May-2006)
/dev/sdk1 is apparently in use by the system; will not make a filesystem here!

The message indicates that /dev/sdk1 is in use: it is under Device Mapper (DM) management, which is why mkfs refuses to create a filesystem on it. After removing the DM mappings manually, the filesystem can be created normally. The steps are as follows:

[root@rac1 u01]# dmsetup status
mpath2: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:16 A 0
mpath11p1: 0 1048562487 linear
mpath9: 0 209715200 multipath 2 0 1 0 1 1 A 0 1 0 8:128 A 0
mpath8: 0 629145600 multipath 2 0 1 0 1 1 A 0 1 0 8:112 A 0
mpath7: 0 629145600 multipath 2 0 1 0 1 1 A 0 1 0 8:96 A 0
mpath6: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:80 A 0
mpath5: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:64 A 0
mpath11: 0 1048576000 multipath 2 0 1 0 1 1 A 0 1 0 8:160 A 0
mpath4: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:48 A 0
mpath10: 0 209715200 multipath 2 0 1 0 1 1 A 0 1 0 8:144 A 0
mpath3: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:32 A 0
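To see which mapping actually claims /dev/sdk before removing anything, you can match the disk's major:minor number (shown by `ls -l /dev/sdk`) against the `dmsetup status` output. A minimal sketch, using a few lines of the output above as sample data and assuming /dev/sdk is 8:160:

```shell
# Sample lines from the `dmsetup status` output above; on a live system
# this variable would be filled with: status=$(dmsetup status)
status='mpath11:0 1048576000 multipath 2 0 1 0 1 1 A 0 1 0 8:160 a 0
mpath10:0 209715200 multipath 2 0 1 0 1 1 A 0 1 0 8:144 a 0
mpath2:0 2097152 multipath 2 0 1 0 1 1 A 0 8:16 a 0'

# /dev/sdk has major:minor 8:160, so the mapping whose table references
# "8:160" is the one holding the disk (trailing space avoids matching 8:16):
echo "$status" | awk -F: '/8:160 / {print $1}'
```

With the holder identified (here mpath11, plus its partition mapping mpath11p1), a targeted `dmsetup remove mpath11p1` followed by `dmsetup remove mpath11` would free only this LUN, rather than tearing down every mapping with `remove_all` as done below.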

[root@rac1 u01]# dmsetup remove_all
[root@rac1 u01]# dmsetup status
No devices found

[root@rac1 u01]# mkfs -t ext3 /dev/sdk1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
65536000 inodes, 131070310 blocks
6553515 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4000 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

--The filesystem was created successfully.

--The mount also succeeds:

[root@rac1 u01]# mount /dev/sdk1 /u01/backup
[root@rac1 u01]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              59G   22G   35G  39% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G     0   32G   0% /dev/shm
/dev/sda4             145G  188M  138G   1% /u01/dave
/dev/sdk1             493G  198M  467G   1% /u01/backup

--Modify the /etc/fstab file so the filesystem is mounted automatically at boot:

[root@rac2 mapper]# vi /etc/fstab
LABEL=/                 /               ext3    defaults        1 1
LABEL=/boot             /boot           ext3    defaults        1 2
tmpfs                   /dev/shm        tmpfs   defaults        0 0
devpts                  /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                   /sys            sysfs   defaults        0 0
proc                    /proc           proc    defaults        0 0
LABEL=SWAP-sda2         swap            swap    defaults        0 0
/dev/sdk1               /u01/backup     ext3    defaults        0 0
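Before relying on a reboot, the new entry can be exercised in place: as root, `umount /u01/backup` followed by `mount -a` mounts everything listed in /etc/fstab and surfaces any errors immediately. As a trivial sanity check, the added line should have the six whitespace-separated fields fstab expects:

```shell
# The fstab line added above; fstab requires six fields
# (device, mount point, fstype, options, dump, fsck pass):
echo "/dev/sdk1 /u01/backup ext3 defaults 0 0" | awk '{print NF}'
```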

However, after a reboot test, the filesystem is not mounted, and a manual mount also fails: at boot, Device Mapper claims the device again before the fstab entry is processed.

Therefore, this approach is not viable.
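An alternative worth noting (not taken in this article): instead of removing the mappings after every boot, the LUN's WWID could be blacklisted in /etc/multipath.conf so Device Mapper never claims it. A hypothetical sketch; it writes to a temporary file for illustration, whereas on a real system the target would be /etc/multipath.conf itself, followed by `service multipathd restart`:

```shell
# Append a blacklist stanza for the LUN (WWID taken from the scsi_id
# output later in this article); using a temp file here instead of the
# real /etc/multipath.conf:
conf=$(mktemp)
cat >> "$conf" <<'EOF'
blacklist {
        wwid 3690b11c00022bc0e000003e55105b786
}
EOF
grep -c 'wwid 3690b11c00022bc0e000003e55105b786' "$conf"
```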

Supplementary content:

Device Mapper is a framework in the Linux 2.6 kernel for mapping logical devices onto physical devices. On top of it, users can easily develop storage-management strategies to suit their needs, such as striping, mirroring, and snapshots. Popular Linux volume managers such as LVM2 (Linux Volume Manager version 2), EVMS (Enterprise Volume Management System), and dmraid (Device Mapper RAID tool) are all built on this mechanism. A user only needs to define a mapping strategy in user space and, if required, write a target driver plug-in that processes specific I/O requests, to implement these features.

Device Mapper consists mainly of the mapping framework in kernel space, plus the Device Mapper library and the dmsetup tool in user space.
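As a concrete illustration of the user-space side, a mapping is defined by a one-line table of the form `start length target args`, fed to `dmsetup create`. A minimal sketch; the device name, length, and mapping name are hypothetical, and since creating the mapping requires root and a real block device, that step is shown commented out:

```shell
# Build a linear mapping table: map 204800 sectors, starting at sector 0
# of /dev/sdb1 (hypothetical device), to a new DM device "demo_linear".
start=0; length=204800; dev=/dev/sdb1; offset=0
table="$start $length linear $dev $offset"
echo "$table"
# On a real system (root required):
#   echo "$table" | dmsetup create demo_linear
#   dmsetup status demo_linear
#   dmsetup remove demo_linear
```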

2. Test Two: Using Multipath

For a description of how to configure multipath, refer to:

Multipath Implement LUN device name persistence

http://www.cndba.cn/Dave/article/725

--Get the WWID:

[root@rac1 mapper]# /sbin/scsi_id -g -u -s /block/sdk
3690b11c00022bc0e000003e55105b786
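Note that the `-g -u -s /block/<dev>` form is RHEL 5-era sysfs syntax; on newer udev-based systems the rough equivalent is `scsi_id --whitelisted --device=/dev/sdk` (the path to scsi_id varies by distribution). Before pasting the WWID into multipath.conf, a quick sanity check helps catch copy errors: NAA identifiers as printed by scsi_id here typically start with "3" followed by 32 hex digits, 33 characters in total:

```shell
wwid=3690b11c00022bc0e000003e55105b786
# Check the leading '3' and the overall length (33 characters):
case "$wwid" in
  3*) len=${#wwid}
      [ "$len" -eq 33 ] && echo "wwid format ok" || echo "unexpected length: $len" ;;
  *)  echo "unexpected wwid prefix" ;;
esac
```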

--Modify the multipath.conf file:

[root@rac1 mapper]# vi /etc/multipath.conf
multipaths {
        multipath {
                wwid                    3690b11c00022bc0e000003e55105b786
                alias                   backup
                path_grouping_policy    multibus
                path_checker            readsector0
                path_selector           "round-robin 0"
                failback                manual
                rr_weight               priorities
                no_path_retry           5
        }
#       multipath {
#               wwid                    1DEC_____321816758474
#               alias                   red
#       }
}
"/etc/multipath.conf" 177L, 4832C written

--Restart multipathd:

[root@rac1 mapper]# service multipathd restart
Stopping multipathd daemon:                                [  OK  ]
Starting multipathd daemon:                                [  OK  ]

--Check the device files:

[root@rac1 mapper]# cd /dev/mapper/
[root@rac1 mapper]# ll
total 0
brw-rw---- 1 root disk 253,  9 Feb 12:35 backup
brw-rw---- 1 root disk 253,    Feb 12:35 backupp1
crw------- 1 root              Feb 12:35 control
brw-rw---- 1 root disk 253,  8 Feb 12:35 mpath10
brw-rw---- 1 root disk 253,  0 Feb 12:35 mpath2
brw-rw---- 1 root disk 253,  1 Feb 12:35 mpath3
brw-rw---- 1 root disk 253,  2 Feb 12:35 mpath4
brw-rw---- 1 root disk 253,  3 Feb 12:35 mpath5
brw-rw---- 1 root disk 253,  4 Feb 12:35 mpath6
brw-rw---- 1 root disk 253,  5 Feb 12:35 mpath7
brw-rw---- 1 root disk 253,  6 Feb 12:35 mpath8
brw-rw---- 1 root disk 253,  7 Feb 12:35 mpath9

--Mount the filesystem:

[root@rac1 mapper]# mount /dev/mapper/backupp1 /u01/backup

--Check the mount:

[root@rac1 mapper]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              59G   22G   34G  39% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G  364M   32G   2% /dev/shm
/dev/sda4             145G  188M  138G   1% /u01/dave
/dev/mapper/backupp1  493G  198M  467G   1% /u01/backup

After modifying the /etc/fstab file, the device is mounted automatically on reboot, as expected. However, there are 2 nodes here, and the storage is shared: files created on node 1 are not visible on node 2. Testing shows that a file created on one node only becomes visible on the other after the filesystem has been unmounted and remounted there. This is expected behavior: ext3 is not a cluster filesystem, so each node caches the filesystem metadata independently, and mounting the same ext3 filesystem read-write on two nodes at once risks corruption.
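Since ext3 provides no cache coherence between nodes, the remount dance below has to be repeated whenever node 2 needs a fresh view. A hypothetical helper for node 2 that wraps the exact steps from the transcript (`fuser -km` forcibly kills anything holding the mount, so use it with care):

```shell
# Refresh node 2's view of the shared ext3 filesystem by remounting it.
# Mount point and device follow the article; running it requires root.
refresh_backup() {
    mp=/u01/backup
    fuser -km "$mp" 2>/dev/null   # kill processes keeping the mount busy
    umount "$mp" || return 1
    mount /dev/mapper/backupp1 "$mp"
}
```

This is only a workaround for occasional reads; for genuine concurrent access, a cluster filesystem such as OCFS2 or GFS would be the proper fix.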

The test steps are as follows:

[root@rac1 backup]# ll
total 24
-rw-r--r-- 1 root root     0 Feb 12:57 bl
drwxr-xr-x 2 root root  4096 Feb 12:55 dave
-rw-r--r-- 1 root root     5 Feb 12:55 dvd
drwx------ 2 root root 16384 Feb 12:10 lost+found

--Create the file ORCL:

[root@rac1 backup]# touch ORCL

--On node 2, umount the directory:

[root@rac2 backup]# umount /u01/backup
umount: /u01/backup: device is busy
umount: /u01/backup: device is busy
[root@rac2 backup]# fuser -km /u01/backup
/u01/backup:          9848c

[root@rac2 ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              70G   20G   46G  31% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G  364M   32G   2% /dev/shm
/dev/mapper/backupp1  493G  198M  467G   1% /u01/backup
[root@rac2 ~]# umount /u01/backup

--Confirm the umount succeeded:

[root@rac2 ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              70G   20G   46G  31% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G  364M   32G   2% /dev/shm

--Mount again:

[root@rac2 ~]# mount /dev/mapper/backupp1 /u01/backup
[root@rac2 ~]# cd /u01/backup
[root@rac2 backup]# ll
total 24
-rw-r--r-- 1 root root     0 Feb 12:57 bl
drwxr-xr-x 2 root root  4096 Feb 12:55 dave
-rw-r--r-- 1 root root     5 Feb 12:55 dvd
drwx------ 2 root root 16384 Feb 12:10 lost+found
-rw-r--r-- 1 root root     0 Feb 14:34 ORCL
[root@rac2 backup]#

This time, the file created on node 1 is visible on node 2.

---------------------------------------------------------------------------------------

Copyright notice: this article may be reprinted, but a link to the original source must be included; otherwise legal responsibility will be pursued!

qq:492913789

Email:ahdba@qq.com

Blog:http://www.cndba.cn/dave

Weibo:http://weibo.com/tianlesoftware

Twitter:http://twitter.com/tianlesoftware

Facebook:http://www.facebook.com/tianlesoftware

Linkedin:http://cn.linkedin.com/in/tianlesoftware
