QEMU/KVM libvirt Manual (11): Managing Storage

Source: Internet
Author: User

When managing a VM guest on the VM host server itself, it is possible to access the complete file system of the VM host server in order to attach or create virtual hard disks or to attach existing images to the VM guest.

However, this is not possible when managing VM guests from a remote host. For this reason, libvirt supports so-called "storage pools", which can be accessed from remote machines.

libvirt knows two different types of storage: volumes and pools.

Storage Volume

A storage volume is a storage device that can be assigned to a guest: a virtual disk or a CD/DVD/floppy image. Physically (on the VM host server) it can be a block device (a partition, a logical volume, etc.) or a file.

Storage pool

A storage pool basically is a storage resource on the VM host server that can be used for storing volumes.

Physically it can be one of the following types:

File System Directory (DIR)
  • A directory for hosting image files.
  • The files can be either one of the supported disk formats (raw, qcow2, or QED), or ISO images.
Physical disk device (Disk)
  • Use a complete physical disk as storage.
  • A partition is created for each volume that is added to the pool.

Pre-formatted block device (FS)
  • Specify a partition to be used in the same way as a File System Directory Pool (a directory for hosting image files ).
  • The only difference to using a file system directory is the fact that libvirt takes care of mounting the device.
iSCSI target (iSCSI)
  • Set up a pool on an iSCSI target.
  • You need to have logged into the target once before, in order to use it with libvirt.
  • Volume creation on iSCSI pools is not supported. Instead, each existing logical unit number (LUN) represents a volume.
  • Each volume/LUN also needs a valid (empty) partition table or disk label before you can use it.
  • If missing, use fdisk to add it:
fdisk -cu /dev/disk/by-path/ip-192.168.2.100:3260-iscsi-iqn.2010-10.com.example:[...]-lun-2
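Logging into the target once beforehand, as required above, is typically done with open-iscsi's iscsiadm. A sketch; the portal address and target IQN below are example values:

```shell
# Discover targets at the portal and log in once, so the LUNs appear
# on the host before the libvirt iSCSI pool is started.
# (Portal IP and IQN are example values.)
iscsiadm -m discovery -t sendtargets -p 192.168.2.100:3260
iscsiadm -m node -T iqn.2010-10.com.example:demo-target \
         -p 192.168.2.100:3260 --login
```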

LVM volume group (logical)

  • Use an LVM volume group as a pool.
  • You may either use a pre-defined volume group, or create a group by specifying the devices to use.
  • Storage volumes are created as logical volumes in the volume group.

Multipath devices (mpath)

  • At the moment, multipathing support is limited to assigning existing devices to the guests.
  • Volume creation or configuring multipathing from libvirt is not supported.

Network Exported Directory (netfs)
  • Specify a network directory to be used in the same way as a File System Directory Pool (a directory for hosting image files ).
  • The only difference to using a file system directory is the fact that libvirt takes care of mounting the directory.
  • Supported protocols are NFS and glusterfs.

SCSI host adapter (SCSI)
  • Use a SCSI host adapter in almost the same way as an iSCSI target.
  • It is recommended to use a device name from /dev/disk/by-* rather than the simple /dev/sdX, since the latter may change.
Managing storage with virsh

Create a storage pool

virsh pool-define directory_pool.xml

Directory Pool

      <pool type="dir">
        <name>virtimages</name>
        <target>
          <path>/var/lib/virt/images</path>
        </target>
      </pool>
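Before handing a pool definition to libvirt, it can be written to a file and sanity-checked from the shell. A minimal sketch; the file path /tmp/dirpool.xml is an example:

```shell
# Write the directory-pool definition to a file and check that the
# required elements are present before defining it with virsh.
cat > /tmp/dirpool.xml <<'EOF'
<pool type="dir">
  <name>virtimages</name>
  <target>
    <path>/var/lib/virt/images</path>
  </target>
</pool>
EOF
grep -q '<pool type="dir">' /tmp/dirpool.xml && echo "type: dir"
grep -q '<name>virtimages</name>' /tmp/dirpool.xml && echo "name: virtimages"
# virsh pool-define /tmp/dirpool.xml   # requires a running libvirtd
```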

Filesystem pool

This block device will be mounted and files managed in the directory of its mount point.

      <pool type="fs">
        <name>virtimages</name>
        <source>
          <device path="/dev/VolGroup00/VirtImages"/>
        </source>
        <target>
          <path>/var/lib/virt/images</path>
        </target>
      </pool>

Network filesystem pool

Instead of requiring a local block device as the source, this pool type requires the name of a host and the path of an exported directory. It will mount the network filesystem and manage files within the directory of its mount point.

      <pool type="netfs">
        <name>virtimages</name>
        <source>
          <!-- host name and exported directory are example values -->
          <host name="nfs.example.com"/>
          <dir path="/var/lib/virt/images"/>
        </source>
        <target>
          <path>/var/lib/virt/images</path>
        </target>
      </pool>

Logical volume pools

This provides a pool based on an LVM volume group. For a pre-defined LVM volume group, simply providing the group name is sufficient, while building a new group requires providing a list of source devices to serve as physical volumes.

      <pool type="logical">
        <name>HostVG</name>
        <source>
          <device path="/dev/sda1"/>
          <device path="/dev/sdb1"/>
          <device path="/dev/sdc1"/>
        </source>
        <target>
          <path>/dev/HostVG</path>
        </target>
      </pool>

Disk volume pools

This provides a pool based on a physical disk. Volumes are created by adding partitions to the disk.

      <pool type="disk">
        <name>sda</name>
        <source>
          <device path="/dev/sda"/>
        </source>
        <target>
          <path>/dev</path>
        </target>
      </pool>

iSCSI volume pools

This provides a pool based on an iSCSI target. Volumes must be pre-allocated on the iSCSI server, and cannot be created via the libvirt APIs. Since /dev/xxx names may change each time libvirt logs into the iSCSI target, it is recommended to configure the pool to use /dev/disk/by-path or /dev/disk/by-id for the target path.

      <pool type="iscsi">
        <name>virtimages</name>
        <source>
          <!-- host name and target IQN are example values -->
          <host name="iscsi.example.com"/>
          <device path="iqn.2013-06.com.example:iscsi-pool"/>
        </source>
        <target>
          <path>/dev/disk/by-path</path>
        </target>
      </pool>

SCSI volume pools

This provides a pool based on a SCSI HBA. Volumes are preexisting SCSI LUNs, and cannot be created via the libvirt APIs. Since /dev/xxx names aren't generally stable, it is recommended to configure the pool to use /dev/disk/by-path or /dev/disk/by-id for the target path.

      <pool type="scsi">
        <name>virtimages</name>
        <source>
          <adapter name="host0"/>
        </source>
        <target>
          <path>/dev/disk/by-path</path>
        </target>
      </pool>

RBD pools

This storage driver provides a pool which contains all RBD images in a RADOS pool. RBD (RADOS Block Device) is part of the Ceph distributed storage project.
This backend only supports QEMU with RBD support. Kernel RBD, which exposes RBD devices as block devices in /dev, is not supported. RBD images created with this storage backend can be accessed through kernel RBD if configured manually, but this backend does not provide the mapping for these images.
Images created with this backend can be attached to QEMU guests when QEMU is built with RBD support (since QEMU 0.14.0).

      <pool type="rbd">
        <name>myrbdpool</name>
        <source>
          <name>rbdpool</name>
          <!-- monitor host and port are example values -->
          <host name="192.168.2.1" port="6789"/>
        </source>
      </pool>

Listing pools and volumes
virsh pool-list --details
virsh pool-info POOL
virsh vol-list --details POOL
Starting, stopping and deleting pools
virsh pool-destroy POOL
virsh pool-delete POOL
virsh pool-start POOL
Adding, cloning, deleting volumes to a storage pool

virsh vol-create-as virtimages newimage 12G --format qcow2 --allocation 4G

virsh vol-clone NAME_EXISTING_VOLUME NAME_NEW_VOLUME --pool POOL
virsh vol-delete NAME --pool POOL
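For file-backed pools, a newly created qcow2 volume can be inspected with qemu-img. A sketch; the path assumes the virtimages directory pool above is backed by /var/lib/virt/images:

```shell
# Inspect the qcow2 volume created above: 'virtual size' reflects the
# 12G capacity, while 'disk size' starts small for a sparse qcow2 image.
# (Path is an example; directory pools store volumes as plain files.)
qemu-img info /var/lib/virt/images/newimage
```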
Use LVM storage devices on libvirt

http://www.ibm.com/developerworks/cn/linux/l-cn-libvirt-lvm/

Introduction

libvirt is a set of management tools for interacting with multiple virtualization technologies on Linux. It supports KVM/QEMU, Xen, LXC, OpenVZ, VirtualBox, VMware ESX/GSX, and Hyper-V. libvirt provides support for various storage media, including local file systems, network file systems, iSCSI, LVM, and other back-end storage systems. LVM (Logical Volume Manager) is widely used for storage on Linux servers. The methods described in this article are applicable to KVM/QEMU virtual machines. They mainly involve the use of LVM storage devices in libvirt and the libvirt-based command-line virtual machine management tool virsh.

Storage management in libvirt is independent of virtual machine management; that is, operations on storage pools and volumes are independent of operations on virtual machines. Therefore, virtual machines do not need to exist for storage management: storage resources can be allocated flexibly whenever virtual machines need them.

Backend storage types supported by libvirt

In order to provide different backend storage devices with uniform interfaces for virtual machines, libvirt divides storage management into two aspects: storage volumes and storage pools.

A storage volume is a storage device that can be allocated to a virtual machine. It corresponds to a mount point in the virtual machine; physically it can be a virtual machine disk file or a real disk partition.

A storage pool is a storage resource from which storage volumes can be generated. The backend supports the following storage media:

  • Directory pool: a directory of the host is used as the storage pool. The directory contains files of various types, such as virtual machine disk files and image files.
  • Local file system pool: block devices formatted by the host are used as storage pools. Supported file system types include ext2, ext3, and vfat.
  • Network file system pool: the exported directory of a remote network file system server is used as the storage pool. NFS is used by default.
  • Logical volume pool: uses an already-created LVM volume group, or a series of source devices from which libvirt creates a volume group, as the storage pool.
  • Disk volume pool: a disk is used as the storage pool.
  • iSCSI volume pool: an iSCSI device is used as the storage pool.
  • SCSI volume pool: a SCSI device is used as the storage pool.
  • Multipath device pool: multipath devices are used as the storage pool.
State transitions of storage objects in libvirt

The three types of storage objects in libvirt are storage pools, storage volumes, and devices; their state transitions are shown in Figure 1.

Figure 1. Object state transition in libvirt
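The state transitions shown in Figure 1 correspond to the following virsh command sequence. A sketch; the pool name and XML path match the LVM example used later in this article:

```shell
# Pool lifecycle: undefined -> defined (inactive) -> built -> active,
# and back down via destroy/undefine. Requires a running libvirtd.
virsh pool-define /etc/libvirt/storage/lvm_pool.xml  # defined, inactive
virsh pool-build lvm_pool        # creates the backing volume group
virsh pool-start lvm_pool        # active: volumes can now be divided
virsh pool-destroy lvm_pool      # deactivate (volumes become unusable)
virsh pool-undefine lvm_pool     # remove from libvirt management
```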

Storage volumes are divided from a storage pool and allocated to virtual machines as available storage devices. When a storage pool is defined in libvirt it receives an ID, which makes it a manageable object there. Once the corresponding volume group (VG) has been generated, the storage pool can be divided into storage volumes, but volumes can only be divided while the pool is in the active state.

Preparations for using a logical volume pool with libvirt

Reconfiguration and compilation

libvirt does not support LVM by default, so you need to recompile libvirt to use it. Use the --with-storage-lvm option to reconfigure the libvirt source code and recompile libvirt:

Listing 1. recompile libvirt

 $ ./autogen.sh --with-storage-lvm --system
 $ make

Prepare the physical disk for generating the volume group

Use the fdisk tool on the host to set the partition type of the physical volumes to Linux LVM (ID 8e). The resulting physical volumes should look like the following:

Listing 2. Physical volume format

 $ sudo fdisk -l
 /dev/sdc1               1         478      963616+  8e  Linux LVM
 /dev/sdc2             479         957      965664   8e  Linux LVM

Prepare to generate the XML file of the storage pool

Store the XML file in the /etc/libvirt/storage directory of the host. The following is an example of such an XML file:

Listing 3. XML file for generating the storage pool

 <pool type="logical">
   <name>lvm_pool</name>
   <source>
     <device path="/dev/sdc1"/>
     <device path="/dev/sdc2"/>
   </source>
   <target>
     <path>/lvm_pool</path>
   </target>
 </pool>

A pool type of "logical" means the storage pool type is LVM. The source paths are the paths of the physical volumes on the host, and the target path is the mapping path of the storage pool generated on the host; logical volumes created later will appear under this directory on the host.

Create a libvirt storage pool

Create a storage pool for the first time

A storage pool is defined by the preceding XML file. If the XML file is already in the /etc/libvirt/storage directory before libvirtd is started, the storage pool is automatically defined when libvirtd starts, and this step can be skipped.

Listing 4. Define a storage pool

 $virsh pool-define /etc/libvirt/storage/lvm_pool.xml

This defines an inactive storage pool in libvirt; the volume group corresponding to the pool has not been initialized yet, so the status of the new pool is inactive:

Listing 5. view the status of the volume group

 $ virsh pool-list --all
 Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 directory_pool       active     yes
 lvm_pool             inactive   no

Building the storage pool (pool-build) generates the volume group corresponding to the storage pool.

Listing 6. Building the storage pool

 $virsh pool-build lvm_pool

After this step is completed, a volume group named lvm_pool is generated on the host.

Listing 7. view the volume group generated on the host

 $ sudo vgdisplay
   --- Volume group ---
   VG Name               lvm_pool
   System ID
   Format                lvm2

Run the following command to activate the storage pool when you need to use it:

Listing 8. Start the storage pool

 $virsh pool-start lvm_pool

Create a storage pool

The pool-create operation is equivalent to the combination of pool-define and pool-start. That is, the create operation is applicable when the volume group has already been generated but is not yet managed by libvirt.

Listing 9. Creating a storage pool

 $virsh pool-create /etc/libvirt/storage/lvm_pool.xml

Listing 10. status after creation

 $ virsh pool-list
 Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 directory_pool       active     yes
 lvm_pool             active     no
Allocate volumes from the storage pool

When the storage pool is active and a corresponding volume group has been generated, you can divide logical volumes from the storage pool for future use.

Listing 11. Creating a volume

 $virsh vol-create-as --pool lvm_pool --name vol3 --capacity 30M

The --pool option specifies the storage pool (volume group) to which the logical volume belongs, --name specifies the logical volume name, and --capacity specifies the size of the allocated volume.
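The new volume can then be checked with vol-info and vol-path, using the pool and volume names from above:

```shell
# Confirm the volume exists and find the host path libvirt will use.
virsh vol-info vol3 --pool lvm_pool    # shows type and capacity
virsh vol-path vol3 --pool lvm_pool    # prints the volume's host path
```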

Listing 12. View the volumes in the storage pool

 virsh # vol-list pic_pool2
 Name                 Path
 -----------------------------------------
 vol1                 /dev/lvm_pool/vol1
 vol2                 /dev/lvm_pool2/vol2
 vol3                 /dev/lvm_pool2/vol3
Use volumes in virtual machines

Listing 13. Allocating volumes to virtual machines

 $ virsh attach-disk --domain dom1 --source /dev/pic_pool2/vol1 --target sda

The --domain option specifies the virtual machine to which the logical volume is attached, --source specifies the path of the logical volume on the host, and --target specifies the device name in the virtual machine.

After this step is completed, restart the virtual machine and the /dev/sda device appears in it. This /dev/sda is a bare device: partition and format it, and it can then be mounted and used.

Listing 14. Check whether Volume Allocation is successful

 $virsh domblklist dom1  Target Source  ------------------------------------------------  vda /var/lib/libvirt/images/redhat2.img  hdc -  sda        /dev/pic_pool2/vol3
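Output in this form is easy to filter in scripts. A sketch; the sample text is inlined here to mirror Listing 14, but would normally be piped straight from virsh domblklist:

```shell
# Find which host path backs guest target 'sda' from domblklist-style
# output. awk splits each line on whitespace: $1 is the target, $2 the source.
out='Target Source
------------------------------------------------
vda /var/lib/libvirt/images/redhat2.img
hdc -
sda /dev/pic_pool2/vol3'
printf '%s\n' "$out" | awk '$1 == "sda" { print $2 }'
# prints: /dev/pic_pool2/vol3
```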

Listing 15. Detach a volume from a VM

 virsh # detach-disk --domain dom1 --target sda

The /dev/sda device is then no longer visible in the virtual machine; the logical volume has been successfully detached from it.

Delete volumes in a storage pool

After a volume is deleted, the storage space corresponding to the volume is returned to the storage pool.

Listing 16. Deleting volumes in a storage pool

 virsh # vol-delete vol3 --pool pic_pool2
 Vol vol3 deleted
Deactivating, deleting, and undefining a storage pool

Disable storage pool

After the storage pool is stopped, all the storage volumes in it become unavailable; that is, virtual machines using them can no longer see the devices. Volumes can no longer be created from the storage pool either.

Listing 17. disabling a storage pool

 virsh # pool-destroy pic_pool2
 Pool pic_pool2 destroyed

Delete a storage pool

After a storage pool is deleted, libvirt no longer manages the resources corresponding to it, and the volume group corresponding to the storage pool on the host is also deleted.

Listing 18. Deleting a storage pool

 virsh # pool-delete pic_pool2
 Pool pic_pool2 deleted

Cancel the storage pool definition

Even after the storage pool is deleted, it still occupies some resources in the libvirt storage driver, and the pool is still visible:

Listing 19. status after the storage pool is deleted

 $ virsh pool-list --all
 Name                 State      Autostart
 -----------------------------------------
 default              active     yes
 directory_pool       active     yes
 lvm_pool             inactive   no

After you use pool-undefine to cancel the definition of a storage pool, the resources it occupied are completely released, and the storage driver no longer sees the pool.

Listing 20. Canceling storage pool Definitions

 $virsh pool-undefine lvm_pool
