Introduction: This article describes how to manage and use LVM storage pools in libvirt and shows how to use LVM storage for qemu virtual machines.
Address: http://www.ibm.com/developerworks/cn/linux/l-cn-libvirt-lvm/index.html
Introduction
Libvirt is a set of tools for managing virtual machines on Linux. It supports KVM/qemu, Xen, LXC, OpenVZ, VirtualBox, VMware ESX/GSX, and Hyper-V, and it provides support for a variety of back-end storage, including local file systems, network file systems, iSCSI, and LVM. LVM (Logical Volume Manager) is one of the most widely used storage technologies on Linux servers. The methods described in this article apply to KVM/qemu virtual machines and focus on using LVM storage devices through libvirt with virsh, the libvirt-based command line virtual machine management tool.
Storage management in libvirt is independent of virtual machine management: operations on storage pools and volumes do not depend on operations on virtual machines, and the virtual machines do not even have to exist. Storage resources can be allocated whenever a virtual machine needs them, which makes the approach flexible.
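For example, the storage commands below can be run on a host where no virtual machine has been defined at all; they only interact with libvirt's storage driver.

$ virsh list --all         # may show no domains at all
$ virsh pool-list --all    # storage pools can still be listed and managed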
Back-end storage types supported by libvirt
To present different back-end storage devices to virtual machines through a uniform interface, libvirt divides storage management into two concepts: the storage volume and the storage pool.
A storage volume is a storage device that can be assigned to a virtual machine. It appears as a device inside the virtual machine and is physically backed either by a virtual machine disk file or by a real disk partition.
A storage pool is a storage resource from which storage volumes can be allocated. The following back-end storage media are supported:
- Directory pool: a directory on the host is used as the storage pool; it holds files such as virtual machine disk files and image files (a minimal XML sketch follows this list).
- Local file system pool: a block device formatted by the host is used as the storage pool; supported file system types include ext2, ext3, and vfat.
- Network file system pool: an exported directory of a remote network file system server is used as the storage pool; NFS is the default.
- Logical volume pool: an existing LVM volume group is used, or a set of source devices is supplied from which libvirt creates a volume group, and that volume group serves as the storage pool.
- Disk pool: a disk is used as the storage pool.
- iSCSI pool: an iSCSI device is used as the storage pool.
- SCSI pool: a SCSI device is used as the storage pool.
- Multipath device pool: a multipath device is used as the storage pool.
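For illustration, a minimal directory-type pool definition might look like the following sketch. The pool name images_pool and the host path /var/lib/libvirt/images are placeholder assumptions, not values from this article; the logical-pool XML actually used here appears later in Listing 3.

<!-- hypothetical directory-type pool; name and path are placeholders -->
<pool type="dir">
  <name>images_pool</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>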
State transitions of storage objects in libvirt
The three kinds of storage objects in libvirt, storage pools, storage volumes, and devices, go through the state transitions shown in Figure 1.
Figure 1. Object state transition in libvirt
Storage volumes are carved out of a storage pool, and a storage volume is assigned to a virtual machine as an available storage device. Once a storage pool is defined in libvirt, it becomes an object that libvirt can manage. After the corresponding volume group (VG) has been generated, the pool can be divided into storage volumes, but volumes can be allocated only while the pool is in the active state.
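Before allocating volumes, the pool's state can be checked with virsh pool-info; the pool name lvm_pool used here refers to the pool defined later in this article.

$ virsh pool-info lvm_pool    # reports the pool's state (running or inactive), capacity, and allocation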
Preparing libvirt to use a logical volume pool
Reconfiguration and compilation
Libvirt does not enable LVM support by default, so libvirt has to be rebuilt before LVM pools can be used. Reconfigure the libvirt source code with the --with-storage-lvm option and recompile:
Listing 1. Recompiling libvirt
$ ./autogen.sh --with-storage-lvm --system
$ make
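After the build, the rebuilt libvirt still needs to be installed and the libvirt daemon restarted before the new storage backend becomes available. The sketch below assumes a systemd-based host; the service name and restart command may differ on other systems.

$ sudo make install
$ sudo systemctl restart libvirtd    # on older init systems: sudo service libvirtd restart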
Prepare the physical disk for generating the volume group
Use the fdisk tool on the host to set the partitions that will serve as physical volumes to the Linux LVM type (ID 8e). The resulting partitions should look like this:
Listing 2. Physical volume format
$ sudo fdisk -l
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         478      963616+  8e  Linux LVM
/dev/sdc2             479         957      965664   8e  Linux LVM
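If the partitions do not exist yet or have a different type, they can be created and retyped interactively with fdisk. The sketch below assumes the target disk is /dev/sdc and only outlines the keystrokes.

$ sudo fdisk /dev/sdc
  n    # create a new partition, accepting or adjusting the proposed boundaries
  t    # change the partition type; enter 8e for Linux LVM
  w    # write the partition table and exit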
Prepare the XML file that defines the storage pool
Store the XML file in the /etc/libvirt/storage directory on the host. The following is an example:
Listing 3. Generating the XML file of the storage pool
<pool type="logical">
  <name>lvm_pool</name>
  <source>
    <device path="/dev/sdc1"/>
    <device path="/dev/sdc2"/>
  </source>
  <target>
    <path>/lvm_pool</path>
  </target>
</pool>
The pool type "logical" means the storage pool is an LVM pool. The source device paths are the paths of the physical volumes on the host, and the target path is the path under which the pool is mapped on the host; the logical volumes created later will appear under this directory.
Create a libvirt storage pool
Create a storage pool for the first time
A storage pool is defined from the XML file above. If the XML file is already in the /etc/libvirt/storage directory before libvirtd starts, the storage pool is defined automatically when libvirtd starts, and this step can be skipped.
Listing 4. Define a storage pool
$ virsh pool-define /etc/libvirt/storage/lvm_pool.xml
This defines an inactive storage pool in libvirt; the volume group corresponding to the pool has not been initialized yet, so the pool's state is inactive:
Listing 5. Viewing the status of the storage pools
$ virsh pool-list --all
Name                 State      Autostart
---------------------------------------------
default              active     yes
directory_pool       active     yes
lvm_pool             inactive   no
Building the storage pool generates the volume group that corresponds to it.
Listing 6. Building the storage pool
$ virsh pool-build lvm_pool
After this step is completed, a volume group named lvm_pool is generated on the host.
Listing 7. Viewing the volume group generated on the host
$ sudo vgdisplay
  --- Volume group ---
  VG Name               lvm_pool
  System ID
  Format                lvm2
When the storage pool is needed, run the following command to activate it:
Listing 8. Start the storage pool
$ virsh pool-start lvm_pool
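Optionally, the pool can also be marked to start automatically whenever libvirtd starts, which changes the Autostart column shown in Listing 5 from no to yes; this step is not required by the workflow above.

$ virsh pool-autostart lvm_pool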
Create a storage pool with pool-create
The pool-create operation is equivalent to pool-define followed by pool-start. In other words, it is suitable when the volume group already exists on the host but is not yet managed by libvirt.
Listing 9. Creating a storage pool
$ virsh pool-create /etc/libvirt/storage/lvm_pool.xml
Listing 10. Status after creation
$ virsh pool-list
Name                 State      Autostart
---------------------------------------------
default              active     yes
directory_pool       active     yes
lvm_pool             active     no
Allocate volumes from the storage pool
Once the storage pool is active and its volume group has been generated, logical volumes can be allocated from the pool for later use.
Listing 11. Creating a volume
$ virsh vol-create-as --pool lvm_pool --name vol3 --capacity 30M
The --pool option specifies the storage pool (volume group) the logical volume belongs to, --name specifies the name of the logical volume, and --capacity specifies the size of the allocated volume.
Listing 12. Viewing the volumes in the storage pool
virsh # vol-list pic_pool2
Name                 Path
-----------------------------------------
vol1                 /dev/lvm_pool/vol1
vol2                 /dev/lvm_pool2/vol2
vol3                 /dev/lvm_pool2/vol3
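A single volume's capacity and current allocation can also be inspected with virsh vol-info; the pool and volume names below simply reuse those from Listing 11.

$ virsh vol-info --pool lvm_pool vol3    # prints the volume's name, type, capacity, and allocation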
Use volumes in virtual machines
Listing 13. Allocating volumes to virtual machines
$ virsh attach-disk --domain dom1 --source /dev/pic_pool2/vol1 --target sda
The --domain option specifies the virtual machine the logical volume is attached to, --source specifies the path of the logical volume on the host, and --target specifies the device name inside the virtual machine.
After this step, restart the virtual machine and the /dev/sda device is visible inside it. In the virtual machine, /dev/sda is a bare device; it only needs to be partitioned and formatted before it can be mounted and used.
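Inside the guest, preparing the new device might look like the sketch below; the single-partition layout, the ext4 file system, and the mount point /mnt/data are assumptions for illustration rather than steps from this article.

# run inside the guest as root
fdisk /dev/sda               # create a partition, e.g. /dev/sda1
mkfs.ext4 /dev/sda1          # format the new partition
mkdir -p /mnt/data
mount /dev/sda1 /mnt/data    # mount it for use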
Listing 14. Checking whether the volume was allocated successfully
$ virsh domblklist dom1
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/images/redhat2.img
hdc        -
sda        /dev/pic_pool2/vol3
Listing 15. Detach a volume from a VM
virsh # detach-disk --domain dom1 --target sda
After this, the /dev/sda device is no longer visible to the virtual machine, and the logical volume has been detached from it successfully.
Delete volumes in a storage pool
After a volume is deleted, the storage space corresponding to the volume is returned to the storage pool.
Listing 16. Deleting volumes in a storage pool
virsh # vol-delete vol3 --pool pic_pool2
Vol vol3 deleted
Deactivating, deleting, and undefining a storage pool
Deactivate a storage pool
After a storage pool is stopped, all storage volumes in it become unavailable: virtual machines that use them can no longer see the devices, and no new volumes can be created from the pool.
Listing 17. Deactivating a storage pool
virsh # pool-destroy pic_pool2
Pool pic_pool2 destroyed
Delete a storage pool
After a storage pool is deleted, libvirt no longer manages the storage resources behind it, and the volume group corresponding to the pool is removed from the host as well.
Listing 18. Deleting a storage pool
virsh # pool-delete pic_pool2
Pool pic_pool2 deleted
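To confirm on the host that the underlying volume group is really gone, vgdisplay can be run with the volume group's name; this check is an extra illustration, not part of the original procedure.

$ sudo vgdisplay pic_pool2    # expected to report that the volume group is not found after pool-delete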
Undefine a storage pool
Even after the storage pool has been deleted, it still occupies some resources in the libvirt storage driver and still shows up in the pool list:
Listing 19. Status after the storage pool is deleted
$ virsh pool-list --all
Name                 State      Autostart
---------------------------------------------
default              active     yes
directory_pool       active     yes
lvm_pool             inactive   no
After pool-undefine removes the pool's definition, the resources the pool occupied are completely released and the storage driver no longer sees the pool.
Listing 20. Undefining the storage pool
$ virsh pool-undefine lvm_pool
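As a final check, listing all pools should no longer show the undefined pool; this verification is an extra illustration.

$ virsh pool-list --all    # lvm_pool no longer appears in the output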