/dev/sdXX is apparently in use by the system; will not make a filesystem here! (Solution)


A 500 GB LUN on the shared storage array is mapped to the Linux system. The environment consists of two nodes (rac1 and rac2).

 

I. Test 1: Mount directly

The disk has already been partitioned with fdisk:

[root@rac1 u01]# fdisk -l

......

Disk /dev/sdk: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1               1       65270   524281243+  83  Linux

......

[root@rac1 u01]#

 

However, an error is reported when you create a file system:

[root@rac1 u01]# mkfs -t ext3 /dev/sdk1
mke2fs 1.39 (29-May-2006)
/dev/sdk1 is apparently in use by the system; will not make a filesystem here!

 

The message says /dev/sdk1 is in use: the device is being managed by Device Mapper (DM), so mkfs refuses to create a file system on it. We can manually remove the DM mappings and then create the file system, as follows:

 

[root@rac1 u01]# dmsetup status
mpath2: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 A 0
mpath11p1: 0 1048562487 linear
mpath9: 0 209715200 multipath 2 0 1 0 1 1 A 0 1 0 A 0
mpath8: 0 629145600 multipath 2 0 1 0 1 1 A 0 1 0 A 0
mpath7: 0 629145600 multipath 2 0 1 0 1 1 A 0 1 0 8:96 A 0
mpath6: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:80 A 0
mpath5: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 8:64 A 0
mpath11: 0 1048576000 multipath 2 0 1 0 1 1 A 0 1 0 8:160 A 0
mpath4: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 A 0
mpath10: 0 209715200 multipath 2 0 1 0 1 1 A 0 1 0 A 0
mpath3: 0 2097152 multipath 2 0 1 0 1 1 A 0 1 0 A 0
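In this output, mpath11 is built on major:minor 8:160, which is /dev/sdk, and mpath11p1 is the linear map over its partition. If a status line does not show the underlying device, the dependency of each map can be confirmed directly (a quick check; the map names will differ on other systems):

[root@rac1 u01]# dmsetup deps      # lists each map with the (major, minor) pairs of its underlying devices
[root@rac1 u01]# ls -l /dev/sdk    # shows the disk's major, minor (8, 160 here) to match against the deps output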

 

[root@rac1 u01]# dmsetup remove_all

[root@rac1 u01]# dmsetup status
No devices found
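Note that dmsetup remove_all tears down every device-mapper map on the host, which is heavy-handed if any other map is still needed. A narrower alternative (a sketch, using the map names from the status output above; the partition map must be removed before the whole-disk map) would be:

[root@rac1 u01]# dmsetup remove mpath11p1    # drop the partition mapping first
[root@rac1 u01]# dmsetup remove mpath11      # then drop the map over the whole disk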

 

[root@rac1 u01]# mkfs -t ext3 /dev/sdk1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
65536000 inodes, 131070310 blocks
6553515 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4000 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
	102400000

 

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

 

-- The file system is successfully created.

 

-- Mount successful:
[root@rac1 u01]# mount /dev/sdk1 /u01/backup
[root@rac1 u01]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              59G   22G   35G  39% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G     0   32G   0% /dev/shm
/dev/sda4             145G  188M  138G   1% /u01/dave
/dev/sdk1             493G  198M  467G   1% /u01/backup
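Optionally, since the fixed entries in /etc/fstab below are label-based, the new file system could be labeled as well (a sketch; the label name is illustrative):

[root@rac1 u01]# e2label /dev/sdk1 backup    # set an ext2/ext3 volume label (up to 16 characters)

The fstab entry could then use LABEL=backup instead of the device path.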

 

 

-- Modify the /etc/fstab file to enable automatic mounting at startup:
[root@rac2 mapper]# vi /etc/fstab

 

LABEL=/                 /               ext3    defaults        1 1
LABEL=/boot             /boot           ext3    defaults        1 2
tmpfs                   /dev/shm        tmpfs   defaults        0 0
devpts                  /dev/pts        devpts  gid=5,mode=620  0 0
sysfs                   /sys            sysfs   defaults        0 0
proc                    /proc           proc    defaults        0 0
LABEL=SWAP-sda2         swap            swap    defaults        0 0
/dev/sdk1               /u01/backup     ext3    defaults        0 0

 

 

However, after a reboot the file system is not mounted, and mounting /dev/sdk1 by hand fails as well: at boot, multipathd rebuilds the device-mapper maps and claims /dev/sdk1 again, so the raw device is once more reported as in use.

 

Therefore, this solution does not work.
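A quick way to confirm this after the reboot (a sketch; (8, 160) is /dev/sdk's major:minor from the earlier output):

[root@rac1 ~]# dmsetup deps | grep "(8, 160)"    # any match means a DM map has claimed the disk again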

 

 

Additional content:

Device Mapper is a framework, provided in the Linux 2.6 kernel, for mapping logical devices onto physical devices. Under this mechanism, storage resource management policies such as striping, mirroring, and snapshots can be implemented as needed. Popular Linux logical volume managers such as LVM2 (Linux Volume Manager version 2), EVMS (Enterprise Volume Management System), and dmraid (device-mapper RAID tool) are all built on it. These features can be implemented by developing a mapping policy in user space and, where necessary, writing a target driver plug-in to process specific I/O requests.

Device Mapper mainly comprises the kernel-space mapping framework, the user-space device-mapper library, and the dmsetup tool.
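As a tiny illustration of the mechanism (a sketch against a hypothetical, unused disk /dev/sdx; do not run this on a device that is in use), a one-to-one linear mapping can be created entirely from user space with dmsetup:

[root@rac1 ~]# SZ=$(blockdev --getsz /dev/sdx)                       # device size in 512-byte sectors
[root@rac1 ~]# echo "0 $SZ linear /dev/sdx 0" | dmsetup create demo  # table line: start length target target-args
[root@rac1 ~]# ls -l /dev/mapper/demo                                # the new logical device appears here
[root@rac1 ~]# dmsetup remove demo                                   # tear the mapping down again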

 

 

II. Test 2: Using multipath

 

For instructions on multipath configuration, refer to:
Multipath for persistent LUN device names
http://blog.csdn.net/tianlesoftware/article/details/5979061

 

-- Get the WWID:
[root@rac1 mapper]# /sbin/scsi_id -g -u -s /block/sdk
2017b11c00022bc0e000003e55105b786
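Note: the -g -u -s /block/<dev> invocation matches the scsi_id shipped with RHEL 5. On newer releases the equivalent is approximately the following (an assumption to verify on your distribution):

/lib/udev/scsi_id --whitelisted --device=/dev/sdk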

 

-- Modify the multipath.conf file:
[root@rac1 mapper]# vi /etc/multipath.conf

multipaths {
        multipath {
                wwid                    2017b11c00022bc0e000003e55105b786
                alias                   backup
                path_grouping_policy    multibus
                path_checker            readsector0
                path_selector           "round-robin 0"
                failback                manual
                rr_weight               priorities
                no_path_retry           5
        }
#       multipath {
#               wwid                    1DEC_____321816758474
#               alias                   red
#       }
}

"/etc/multipath.conf" 177L, 4832C written

 

-- Restart multipathd:
[root@rac1 mapper]# service multipathd restart
Stopping multipathd daemon: [ OK ]
Starting multipathd daemon: [ OK ]
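The new alias can also be checked against the multipath topology directly (a sketch; the output layout varies with the multipath-tools version):

[root@rac1 mapper]# multipath -ll backup    # show the map's paths, their state, and the path-selection policy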

 

-- Check the device files:
[root@rac1 mapper]# cd /dev/mapper/
[root@rac1 mapper]# ll
total 0
brw-rw---- 1 root disk 253,   9 Feb 20 12:35 backup
brw-rw---- 1 root disk 253,  10 Feb 20 12:35 backupp1
crw------- 1 root root  10,  60 Feb 20 12:35 control
brw-rw---- 1 root disk 253,   8 Feb 20 12:35 mpath10
brw-rw---- 1 root disk 253,   0 Feb 20 12:35 mpath2
brw-rw---- 1 root disk 253,   1 Feb 20 12:35 mpath3
brw-rw---- 1 root disk 253,   2 Feb 20 12:35 mpath4
brw-rw---- 1 root disk 253,   3 Feb 20 12:35 mpath5
brw-rw---- 1 root disk 253,   4 Feb 20 12:35 mpath6
brw-rw---- 1 root disk 253,   5 Feb 20 12:35 mpath7
brw-rw---- 1 root disk 253,   6 Feb 20 12:35 mpath8
brw-rw---- 1 root disk 253,   7 Feb 20 12:35 mpath9

 

-- Mount the file system:
[root@rac1 mapper]# mount /dev/mapper/backupp1 /u01/backup

 

-- Check the mount:
[root@rac1 mapper]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              59G   22G   34G  39% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G  364M   32G   2% /dev/shm
/dev/sda4             145G  188M  138G   1% /u01/dave
/dev/mapper/backupp1  493G  198M  467G   1% /u01/backup

 

 

After adding the corresponding entry to /etc/fstab, the file system is mounted automatically at boot. However, there are two nodes and the storage is shared, and ext3 is not a cluster file system: each node caches file system state independently, so files created on node 1 are not recognized on node 2. Testing shows that files created on the other node become visible only after a re-mount, as demonstrated below.
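The fstab entry on each node then references the mapper alias rather than the raw sd device (a sketch, mirroring the defaults used for /dev/sdk1 earlier):

/dev/mapper/backupp1    /u01/backup     ext3    defaults        0 0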

 

 

The test procedure is as follows:
[root@rac1 backup]# ll
total 24
-rw-r--r-- 1 root root     0 Feb 20 12:57 bl
drwxr-xr-x 2 root root  4096 Feb 20 12:55 dave
-rw-r--r-- 1 root root     5 Feb 20 12:55 dvd
drwx------ 2 root root 16384 Feb 20 12:10 lost+found

 

-- Create an orcl file on node 1:
[root@rac1 backup]# touch orcl

 

-- umount the directory on node 2:
[root@rac2 backup]# umount /u01/backup
umount: /u01/backup: device is busy
umount: /u01/backup: device is busy
[root@rac2 backup]# fuser -km /u01/backup
/u01/backup: 9848c
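Here fuser -km sends SIGKILL (-k) to every process accessing the mount point (-m); the trailing "c" in the output means the process holds the mount as its current directory. To see what would be killed before doing so, the accessing processes can be listed first (a usage sketch):

[root@rac2 backup]# fuser -vm /u01/backup    # -v lists PID, user, and access type without killing anything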

 

[root@rac2 ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              70G   20G   46G  31% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G  364M   32G   2% /dev/shm
/dev/mapper/backupp1  493G  198M  467G   1% /u01/backup
[root@rac2 ~]# umount /u01/backup

 

-- Confirm that umount is successful:
[root@rac2 ~]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              70G   20G   46G  31% /
/dev/sda1             996M   51M  894M   6% /boot
tmpfs                  32G  364M   32G   2% /dev/shm

 

-- Mount again:
[root@rac2 ~]# mount /dev/mapper/backupp1 /u01/backup
[root@rac2 ~]# cd /u01/backup
[root@rac2 backup]# ll
total 24
-rw-r--r-- 1 root root     0 Feb 20 12:57 bl
drwxr-xr-x 2 root root  4096 Feb 20 12:55 dave
-rw-r--r-- 1 root root     5 Feb 20 12:55 dvd
drwx------ 2 root root 16384 Feb 20 12:10 lost+found
-rw-r--r-- 1 root root     0 Feb 20 14:34 orcl
[root@rac2 backup]#

 

This time, the file created on node 1 (orcl) is visible on node 2.

 

 

 

 

---------------------------------------------------------------------------------------

All rights reserved. Reprinting is allowed, but the source must be credited with a link; otherwise legal responsibility may be pursued.

Skype: tianlesoftware

QQ: tianlesoftware@gmail.com

Email: tianlesoftware@gmail.com

Blog: http://blog.csdn.net/tianlesoftware

Weibo: http://weibo.com/tianlesoftware

Twitter: http://twitter.com/tianlesoftware

Facebook: http://www.facebook.com/tianlesoftware

LinkedIn: http://cn.linkedin.com/in/tianlesoftware
