RHEL7 disk partitioning, formatting, LVM management, and iSCSI network storage services


Partitioning and formatting disks and configuring LVM in RHEL7 differ little from previous RHEL releases. You can manage hard disk devices with the graphical Disks utility (on a desktop) or with command-line tools such as fdisk, gdisk, and parted. fdisk works with MBR-format partition tables, gdisk works with GPT, and parted lets you choose either format yourself.

Traditional hard disks use the MBR partitioning format. The MBR occupies sector 0 of the disk, 512 bytes in total: the first 446 bytes hold the GRUB boot program, the next 64 bytes hold the partition table, and the last 2 bytes are the end-of-MBR signature. Each partition entry takes 16 bytes, so primary and extended partitions together are limited to four; any further partitions must be logical partitions inside the extended partition. No partition may exceed 2 TB.

The GPT format removes these MBR restrictions: it supports up to 128 partitions, and partition sizes well beyond the 2 TB limit (the exact maximum depends on the operating system). GPT supports volumes up to 18 EB (1 EB = 1024 PB, 1 PB = 1024 TB), keeps both a primary and a backup partition table for redundancy, and gives each disk and partition a globally unique ID (GUID).

Unlike an MBR disk, GPT stores partition information in the partition table entries themselves rather than in the primary boot sector. To keep GPT disks from being damaged by MBR-only disk management software, GPT writes a protective MBR partition table into the primary boot sector. This protective partition has type 0xEE (Windows shows it as about 128 MB, Mac OS X as about 200 MB). With it in place, MBR-only tools such as the Windows Disk Manager treat the GPT disk as a single partition of unknown format instead of mistaking it for an unpartitioned disk.

On an MBR disk, partition information is stored directly in the master boot record (which also holds the system's boot program). On a GPT disk, the partition table is stored in the GPT header; for compatibility, however, the first sector of the disk is still an MBR, and the GPT header follows it.

The structure of GPT is as follows:

First look at the current hard disk information:

fdisk -l

You can also view the current partitions in the /proc/partitions file:

cat /proc/partitions

First, look at MBR-format partitioning; the fdisk options are as follows:

fdisk /dev/sdb

Enter n to create a new MBR partition, and enter p to display the current partition status.

Repeat n to add other partitions.

Note: An MBR disk can create up to four primary partitions or three primary partitions and one extended partition. You can create several logical partitions in the extended partition.

Note that the partition id indicates the intended use of the partition and can be changed with the t command.

View partition records

gdisk works much like fdisk:

gdisk /dev/sdb

When creating a new partition, you can see that up to 128 partitions are available.

parted is more flexible than the first two tools: you choose the MBR or GPT format yourself and then create partitions.

mklabel msdos sets the disk to MBR format; partitions are then created with mkpart.

mklabel msdos selects the MBR format; mklabel gpt selects the GPT format.

Primary represents the primary partition, extended represents the extended partition, and logical represents the logical partition.

set <number> <flag> <state> sets a partition attribute such as boot, lvm, or raid; the state is on or off.

parted writes changes immediately, so there is nothing to save after partitioning; enter q to quit.

After partitioning, a partition must be formatted before use. Use mkfs (or mkswap for swap) to create the file system.

# mkfs.xfs /dev/<partition>

or

# mkfs -t xfs /dev/<partition>

You can modify fstab for automatic loading.

vim /etc/fstab

Test whether automatic mounting is available

Use df -hT to view mounted devices; the -T option also displays each device's file system type.

df -Ti shows the inode usage of each file system (the inode count limits how many files can be created); a file system's capacity therefore depends both on space and on available inodes.

Some mount points have long paths and wrap onto two lines; the -P option forces each entry onto one line.
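Collected as commands (the mount point / is just an example):

```shell
# Mounted devices with their file system types (-T adds the Type column).
df -hT /

# Inode usage: the IFree column shows how many more files can be created.
df -i /

# -P (POSIX output) forces each file system onto a single line.
df -hTP /
```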

Just as a process has a PID and a user has a UID, each file system has its own ID, called a UUID. Not every partition has one: a partition without a file system has no UUID.

You can view UUIDs with blkid (block id). Note that a UUID identifies the file system, not the partition. Mounting by UUID avoids misplaced mounts when device names shift after a disk is removed (for example, sda6 becoming sda5).

blkid
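To see that the UUID belongs to the file system rather than the partition, you can experiment on a plain image file (test.img is a made-up name): until a file system signature is written, blkid has nothing to report.

```shell
# Reserve 16 MB with dd so the image has no holes.
dd if=/dev/zero of=test.img bs=1M count=16

# A fresh image has no file system, so blkid -p finds no UUID.
blkid -p test.img || echo "no file system yet, so no UUID"

# Write a swap signature (any mkfs would do the same job).
mkswap test.img

# Now the image carries a file system and therefore a UUID.
blkid -p test.img
```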

You can use xfs_admin -U to manually change the UUID of an XFS file system.

[root@kang ~]# umount /dev/sdb1
[root@kang ~]# uuidgen
b84c99ee-613f-483f-954d-16dc4b38d9e3
[root@kang ~]# xfs_admin -U b84c99ee-613f-483f-954d-16dc4b38d9e3 /dev/sdb1
Clearing log and setting UUID
writing all SBs
new UUID = b84c99ee-613f-483f-954d-16dc4b38d9e3

Additional note: ls -ld shows the attributes of a directory itself, while ls -la shows the attributes of its contents. ls -ld reports a directory size of only 4 KB because that is the size of the directory entry itself; to see the total size of a directory and its contents, use du, adding -s (summary) if you only want the final total.
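A quick illustration (the directory name demo is arbitrary):

```shell
# Put 5 MB of data inside a directory.
mkdir -p demo
dd if=/dev/zero of=demo/data bs=1M count=5

# The directory entry itself is tiny (typically 4 KB) ...
ls -ld demo

# ... while du -s totals the directory plus everything inside it.
du -sh demo
```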

Next, let's look at how to create swap manually. Swap is similar to the Windows page file (virtual memory): when memory runs short, data is moved out to swap.

You can use either of the following methods:

The first method is to use a single partition as the swap.

cat /proc/swaps

Create a partition such as /dev/sdb2 and change its partition id to 82 (Linux swap).

Run partx -a /dev/sdb to make the kernel re-read the partition table so the change takes effect.

Create a swap file system on the partition

mkswap /dev/sdb2

Modify fstab for automatic loading

The second method is to create a file and use the space it occupies as swap. Be sure to restrict the file's permissions.
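A sketch of the file-based method. A relative path ./swapfile is used here so the preparation steps run without root; in practice the file would live somewhere like /swapfile (both names are examples), and the activation steps shown in comments require root.

```shell
# Reserve space with dd (a sparse file would be rejected by swapon).
dd if=/dev/zero of=./swapfile bs=1M count=64

# A swap file must be readable by root only.
chmod 600 ./swapfile

# Write the swap signature.
mkswap ./swapfile

# Activating it and making it permanent require root:
#   swapon /swapfile
#   echo '/swapfile swap swap defaults 0 0' >> /etc/fstab
```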

Ordinary partitions are not very extensible: once a partition is formatted, it is hard to grow or shrink it flexibly.

LVM (Logical Volume Manager) solves this. The basic flow: initialize a physical disk or partition as a physical volume (PV), add the PV to a volume group (VG), and finally carve logical volumes (LV) out of the VG. An LV can be formatted and mounted just like an ordinary partition.

Create a pv for the prepared disk or partition:

pvcreate /dev/sdc[123]

Run pvdisplay to view a PV's details and pvremove to delete a PV.

After creating a pv (physical volume), you need to create a vg (volume group) and then add the pv to the vg.

View the details with vgdisplay. Note the PE size of 4 MB: the physical extent is the smallest unit by which a volume can grow or shrink.

Note:

When creating a VG, the -s option specifies the PE (physical extent) size; the default is 4 MB.

# vgcreate vg00 -s 8M /dev/sdc[123]

We can continue to add new partitions to the VG with vgextend.

If sdd1 has not been turned into a PV beforehand, it can still be added to the VG directly; once added, it is initialized as a PV automatically.

A PV can also be removed from the VG:

# vgreduce vg00 /dev/sdd1

The VG is ready; now you can create logical volumes.

Note that its size is actually 112 MB: the PE size is 4 MB and extents cannot be split, so 28 PEs make 112 MB.

Note: for large volumes you can specify the size directly with -L; for fine control you can specify the number of PEs with -l.

You can also set the percentage of the remaining space.

[root@kang ~]# lvcreate -l 10%FREE -n lv01 vg00

Delete logical volume

[root@kang ~]# lvremove /dev/vg00/lv01

You can format and mount a logical volume as a normal partition.

[root@kang ~]# mkfs.xfs /dev/vg00/lv00

Modify the/etc/fstab file to enable automatic mounting upon startup.

vim /etc/fstab

/dev/vg00/lv00  /aa  xfs  defaults  0 0

To expand the logical volume to 300 MB, first make sure the volume group has more than 200 MB of free space.

vgdisplay vg00

Run lvextend to expand the logical volume size

[root@kang ~]# lvextend -L +200M /dev/vg00/lv00

Note that the file system inside the logical volume is still its original size; the new space must be claimed by growing the file system.

On RHEL7, use xfs_growfs to grow an XFS file system; for ext file systems, use resize2fs on the device instead.

Note that XFS can only be grown, never shrunk! If you need to be able to shrink an LV, format it with ext4 instead.

[root@kang ~]# xfs_growfs /dev/vg00/lv00

Run df -hT to view the grown file system.
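Since XFS cannot shrink, here is a sketch of the ext4 shrink workflow, run against a file-backed image so no root access or real logical volume is needed (ext.img is a made-up name). On a real LV the order is the same: umount, e2fsck -f, resize2fs to the target size, then lvreduce -L to that same size, never reducing the LV below the file system.

```shell
# Build a throwaway ext4 file system in an image file.
truncate -s 200M ext.img
mkfs.ext4 -q -F ext.img       # -F allows formatting a regular file

# A forced fsck is mandatory before any shrink.
e2fsck -f -p ext.img

# Shrink the file system to 120 MB -- something XFS cannot do.
resize2fs ext.img 120M
```

On a real volume, the final step would be something like `lvreduce -L 120M /dev/vg00/lv00` (device name follows this article's example).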

Logical volume Snapshot

LVM provides a very useful feature: snapshots. A snapshot lets the administrator create a new block device that presents an exact copy of a logical volume frozen at a point in time, i.e. a static view of the original volume. LVM snapshots record only the changes made to the file system after the snapshot was taken, so the snapshot volume does not need to be as large as the original. The space required depends on how much the data changes, so there is no fixed rule for sizing it; if the snapshot is as large as the original volume, it will always remain usable.

A snapshot is a special logical volume that serves only as a point-in-time copy of another logical volume. The snapshot and its origin must be in the same volume group (the snapshot stores the data as it changes, so it must be larger than the amount of change it needs to hold).

Now there is a logical volume/dev/vg00/lv00 in our system. We use lvdisplay to query this logical volume.

The logical volume /dev/vg00/lv00 is 309 MB. Mount it on /aa.

Then copy some data into /aa to use in the experiment below.

Now take a snapshot of the logical volume /dev/vg00/lv00 (note that the snapshot must be in the same volume group and, here, the same size):

[root@kang ~]# lvcreate --size 300M --snapshot --name lvsp00 /dev/vg00/lv00

Logical volume "lvsp00" created.

Run lvscan to view the created logical volume Snapshot

We can see that /dev/vg00/lv00 is the original logical volume, while /dev/vg00/lvsp00 is the snapshot.

Run the lvdisplay or lvs command to view the details.

The logical volume snapshot is successfully created.

Note: After the snapshot volume is created, it does not need to be formatted or mounted. An error message is displayed during formatting or mounting.

Simulate deleting data from the original logical volume

How do we restore the original logical volume's data? There are two ways to recover the deleted data.

Method 1: unmount the original logical volume first

# umount /dev/vg00/lv00

Then mount the logical volume snapshot.

# mount /dev/vg00/lvsp00 /aa

You can access the data normally.

Method 2: use lvconvert to merge the snapshot contents back into the original LV (the snapshot disappears after the merge)

First unmount the original logical volume

# umount /dev/vg00/lv00

Execute lvconvert to merge the snapshot data to the original logical volume

# lvconvert --merge /dev/vg00/lvsp00

Mount the original logical volume to check whether the data has been restored successfully.

Note: when data is deleted from the original logical volume, the copies still held in the snapshot can be used to restore it. But a file added to the original volume after the snapshot was taken does not exist in the snapshot, because a snapshot only captures the volume as it was at the instant it was created.

Using ssm (System Storage Manager) for logical volume management

The Logical Volume Manager (LVM) is an extremely flexible disk management tool that lets you create logical volumes spanning multiple physical drives and resize them without downtime. CentOS 7 / RHEL 7 now ships the System Storage Manager (ssm), a unified command-line interface developed by Red Hat for managing many kinds of storage devices. Currently ssm has three volume-management back ends: LVM, Btrfs, and Crypt.

Prepare ssm. On CentOS 7 / RHEL 7, first install System Storage Manager using rpm or yum:

[root@kang ~]# cd /media/Packages/
[root@kang Packages]# rpm -ivh system-storage-manager-0.4-5.el7.noarch.rpm

First, check the available hard disk and LVM volume information. The following command displays information about existing disk storage devices, storage pools, LVM volumes, and storage snapshots.

# ssm list

In this example there are four physical devices (/dev/sda, /dev/sdb, /dev/sdc, /dev/sdd), two storage pools (centos and vg00), three LVM volumes in the centos pool (/dev/centos/root, /dev/centos/swap, /dev/centos/home), and two LVM volumes created in the vg00 pool (/dev/vg00/lv00 and /dev/vg00/lv01).

The following describes how to create and manage logical volumes and logical volume snapshots through ssm.

Add at least one disk and execute the ssm command to display information about the existing disk storage devices, storage pools, and LVM volumes.

Two idle disks (sde and sdf) are available.

Create a new LVM volume/pool

In this example we create a new storage pool and a new LVM volume on a physical disk. With the traditional LVM tools the whole process is fairly involved: prepare partitions, create physical volumes, a volume group, logical volumes, and finally the file system. With ssm, it can all be done in a single step!

The following command creates a storage pool named mypool, creates a 500 MB LVM volume named lv01 in it, formats the volume with the XFS file system, and mounts it at /mnt/test.

[root@kang ~]# mkdir /mnt/test
[root@kang ~]# ssm create -s 500M -n lv01 --fstype xfs -p mypool /dev/sde /mnt/test/

Verify the result created by ssm

Or execute ssm list

Add a physical disk (sdf) to the LVM pool

[root@kang ~]# ssm add -p mypool /dev/sdf

After a new device is added, the storage pool automatically grows by the size of that device. Check the size of the mypool storage pool with ssm list.

Next, we will expand the existing LVM volumes.

Increase the size of the/dev/mypool/lv01 volume by 300 MB.

If there is spare space in the storage pool, you can expand the existing volumes in it. To do so, use the resize option of the ssm command:

[root@kang ~]# ssm resize -s +300M /dev/mypool/lv01

Run ssm list to view the extended logical volume

We can see that the logical volume has been expanded to 800 MB, i.e. 300 MB more than before, but the file system size (FS size) has not changed yet.

To enable the file system to identify the increased volume size, you need to expand the existing file system itself. There are different tools available to expand the existing file system, depending on which file system you use.

For example: resize2fs for ext2/ext3/ext4, xfs_growfs for XFS, and btrfs filesystem resize for Btrfs.

In this example we are on CentOS 7, where XFS is the default file system, so we use xfs_growfs to grow the existing XFS file system.

[root@kang ~]# xfs_growfs /dev/mypool/lv01

After the XFS file system is expanded, view the result.

Or execute # df -hT

We can see that the LVM expansion is successful.

Logical volume Snapshot

Generate snapshots for existing LVM volumes (such as/dev/mypool/lv01)

Once a snapshot is taken, it is stored as a special snapshot volume that preserves all the data the original volume held at the moment of the snapshot.

[root@kang ~]# ssm snapshot /dev/mypool/lv01
[root@kang ~]# ssm list snapshots

You can run ssm snapshot manually whenever the data in the original LVM volume changes.

When the original LVM volume data is corrupted, you can use the snapshot to restore it.

For more information, see the preceding section.

For details about how to use a disk ssm, refer to the ssm help manual page.

For example:

Delete an LVM volume:

# ssm remove <volume>

Delete a storage pool:

# ssm remove <pool>

iSCSI Network Storage Service

iSCSI provides storage over the network. The server side, which provides the storage space, is called the target; the client side is called the initiator. The initiator connects to the iSCSI device, creates a file system on it, and accesses the data; on the initiator the storage simply appears as an additional hard disk.

Configure the target on the server side: the iSCSI storage device (a whole disk, a partition, a logical volume, or a RAID array) is published as a LUN.

I have prepared two logical volumes as storage devices for iscsi.

First install target

[root@kang ~]# yum -y install targetd targetcli

Start the service

[root@kang ~]# systemctl enable target
[root@kang ~]# systemctl start target

Configure the firewall (preserving the existing firewall rules as much as possible):

[root@kang ~]# firewall-cmd --permanent --add-port=3260/tcp
success
[root@kang ~]# firewall-cmd --reload
success

Run the targetcli tool.

Note: You can enter help to view the help information of targetcli.

Basic Ideas:

Create a shared block backstore, create a target, create a LUN on the target, and connect the LUN to the block.

1. Create a block backstore for each logical volume to be published.

Note: back the logical volume /dev/vg00/lv00 with the name server0.disk1; back /dev/mypool/lv01 with the name server0.disk2.

Viewing the blocks:

2. Create an IQN name, which creates an iSCSI target object.

Viewing the iSCSI objects:

3. Set an ACL binding the iSCSI object to the client's IP address or initiator name.

Note: iqn.2015-06.com.benet:client1 is the initiator's name, which must be configured on the client.

4. Create a LUN and bind a block

One iSCSI object can have multiple LUNs (LUN0, LUN1 ...)

Execute ls to view:

Start the listener (the default is port 3260; a portal must be unique, so delete the existing one first if you need to change the port).

Note: 172.24.3.5 is the IP address of the iSCSI server's NIC.

You can view the /etc/target/saveconfig.json configuration file, which stores the iSCSI configuration.
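The interactive steps above can also be issued one targetcli command at a time. This is only a sketch following the names used in this article (server0.disk1, the iqn strings, and 172.24.3.5 are examples); it must run as root with the target service active:

```shell
# 1. Block backstores wrapping the two logical volumes
targetcli /backstores/block create server0.disk1 /dev/vg00/lv00
targetcli /backstores/block create server0.disk2 /dev/mypool/lv01

# 2. The iSCSI target object (IQN)
targetcli /iscsi create iqn.2015-06.com.benet:disk1

# 3. ACL binding the target to the client's initiator name
targetcli /iscsi/iqn.2015-06.com.benet:disk1/tpg1/acls create iqn.2015-06.com.benet:client1

# 4. A LUN bound to the first block backstore
targetcli /iscsi/iqn.2015-06.com.benet:disk1/tpg1/luns create /backstores/block/server0.disk1

# Listener (portal) on the server NIC, port 3260
targetcli /iscsi/iqn.2015-06.com.benet:disk1/tpg1/portals create 172.24.3.5 3260

# Persist the configuration to /etc/target/saveconfig.json
targetcli saveconfig
```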

Configuration on the initiator side:

1. Install software

# yum install -y iscsi-initiator-utils

2. Give the initiator a name (the identity that the server's ACL checks)

# vim /etc/iscsi/initiatorname.iscsi

The content is as follows:

3. Start the service

# systemctl enable iscsi; systemctl start iscsi

4. Storage discovery

# iscsiadm -m discovery -t st -p 172.24.3.5

5. log on to the storage

# iscsiadm -m node -T iqn.2015-06.com.benet:disk1 -p 172.24.3.5 -l

Note: -l logs in to (connects to) the iSCSI target; -u logs out (disconnects).

Verify client iscsi connection

The remaining operations are the same as managing local disks.

Network storage maps the space of a remote storage device (a disk, partition, or logical volume) to a local device, which you can then operate on as needed.

All reads and writes go to the remote device, and other users cannot operate on the space you have formatted.

The remote device is mapped to a local disk, and if a local disk is later added, the remote disk's device name can shift. Therefore, use the disk's UUID to set up permanent mounting.

The blkid command lists the uuid of all disk devices.

Add permanent mounting to/etc/fstab

UUID=<uuid>  <mount point>  xfs  defaults,_netdev  0 0
