KVM I/O slowness on RHEL 6
http://www.ilsistemista.net/index.php/virtualization/11-kvm-io-slowness-on-rhel-6.html?limitstart=0
Over one year has passed since my last virtual machine hypervisor comparison so, in the last week, I was preparing an article showing a face-to-face comparison between RHEL 6 KVM technology and the Oracle VirtualBox 4.0 product. I spent several days creating some nice, automated scripts to evaluate these two products from different points of view, and I was quite confident that the benchmark session would be completed without too much trouble. So, I installed Red Hat Enterprise Linux 6 (license courtesy of Red Hat Inc. - thank you guys!) on my workstation and I began the virtual image installations.
However, the unexpected happened: using KVM, a Windows Server 2008 R2 Foundation installation took almost 3 hours, while normally it should be completed in about 30-45 minutes. Similarly, the installation of the base system preceding the "real" Debian 6.0 installation took over 5 minutes, when normally it can be completed in about 1 minute. In short: the KVM virtual machines were affected by an awfully slow disk I/O subsystem. In previous tests I saw that the KVM I/O subsystem was a bit slower, but not by so much; clearly, something was impairing my KVM I/O speed. I tried different combinations of virtualized disk controllers (IDE or virtio) and cache settings, but without success. I also changed my physical disk filesystem to ext3, to avoid any possible, hypothetical ext4 speed regression, but again with no results: the KVM slow I/O speed problem remained.
I needed a solution, and a real one: with such awfully slow I/O, the KVM guests were virtually unusable. After some wasted hours, I decided to run some targeted, systematic tests regarding VM image formats, disk controllers, cache settings and preallocation policies. Now that I have found the solution and I run my KVM guests at full speed, I am very happy, and I would like to share my results with you.

Testbed and Methods
First, let me describe the workstation used in this round of tests; system specifications are:
- CPU: Core i7-860 (four cores, eight threads) @ 2.8 GHz with 8 MB L3 cache
- RAM: 8 GB DDR3 (4x 2 GB) @ 1333 MHz
- Disks: 4x WD Green 1 TB in software RAID 10 configuration
- OS: Red Hat Enterprise Linux 6, 64 bit
The operating system was installed with the "Basic server" profile and then I selectively installed the various other software required (libvirt, qemu, etc.). Key system software versions are:
- kernel 2.6.32-71.18.1.el6.x86_64
- qemu-kvm 0.12.1.2-2.113.el6_0.6.x86_64
- libvirt 0.8.1-27.el6.x86_64
- virt-manager 0.8.4-8.el6.noarch
As stated before, initially all host-system partitions were formatted in ext4, but to avoid any possible problem related to the new filesystem, I changed the VM-storing partition to ext3.
To measure I/O speed, I timed the Debian 6.0 (x86_64 version) basic system installation. This is the step that, during Debian installation, immediately follows the partition creation and formatting phase.
Let me thank Red Hat Inc. again, and especially Justin Clift, for giving me a free RHEL 6 license.
OK, I know that you want to know why on earth KVM I/O was so slow. However, first you have to understand something about caching and preallocation policies.

On caching and preallocation
Note: in this page I try to condense some hard-to-explain concepts into very little space, so I had to make some approximations. I ask the expert reader to forgive me for the over-simplification.
Normally, a virtual guest system uses a host-side file to store its data: this file represents a virtual disk, which the guest uses as a normal, physical disk. However, from the host's point of view this virtual disk is a normal data file, and it may be subject to caching and preallocation.
In this context, caching is the process of keeping some disk-related data in physical RAM. When we use that cache to store in RAM only data previously read from the disk, we speak about a read cache, or write-through cache. When we store in RAM some data that will later be flushed to disk, we speak about a write cache, or write-back cache. A write-back cache, by caching write requests in the fast RAM, has higher performance; however, it is also more prone to data loss than a write-through one, as the latter only caches read requests and immediately writes any data to disk.
As disk I/O is a very important parameter, Linux and Windows operating systems generally use a write-back policy with periodic flushes to the physical disk. However, when using a hypervisor to virtualize some guest system, you can effectively cache things twice (one time in the host memory and another time in the virtual guest memory), so it is often better to disable host-based caching on the virtual disk file and to let the guest system manage its own caching. Moreover, a host-side write-back policy on the virtual disk file significantly increases the risk of data loss in case of a guest crash.
KVM lets you choose one of three cache policies: no caching, write-through (read-only cache) and write-back (read and write cache). It also has a "default" setting that effectively is an alias for the write-through one. As you will see, picking the right caching scheme is a crucial choice for fast guest I/O.
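In a libvirt-managed guest, the cache policy is set per disk through the cache attribute of the <driver> element in the domain XML; here is a minimal sketch of the relevant fragment (the image path and device names are hypothetical):

```xml
<disk type='file' device='disk'>
  <!-- cache='none' disables host-side caching; other accepted values
       include 'writethrough', 'writeback' and 'default' -->
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>
```

With cache='none', the host opens the image with direct I/O and the guest's own page cache becomes the only caching layer, which is exactly the "cache once, in the guest" setup described above.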
Now, some words about preallocation: this is the process of preparing the virtual disk file in advance to store the data written by the guest system. Generally, preallocating a file means filling it with zeros, so that the host system reserves in advance all the disk space assigned to the guest. In this manner, when the guest tries to write to the virtual disk, it never waits for the host system to reserve the required space. Sometimes preallocation does not fill the target file with zeros, but only prepares some of its internal data structures: in this case, we talk about metadata preallocation. The raw disk format can use full preallocation, while qcow2 currently uses metadata preallocation (there are some patches that force full preallocation, but they are experimental ones).
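The effect of full preallocation can be seen with plain files, outside of any hypervisor. The sketch below (file names are made up) contrasts a sparse file, which reserves no blocks, with a fully zero-filled one:

```shell
# A sparse 100 MB file: the apparent size is 100 MB, but no blocks are reserved
truncate -s 100M sparse.img

# A fully preallocated 100 MB file: every block is actually written with zeros
dd if=/dev/zero of=prealloc.img bs=1M count=100 2>/dev/null

ls -l sparse.img prealloc.img   # both report an apparent size of ~100 MB
du -h sparse.img prealloc.img   # only prealloc.img really occupies ~100 MB on disk
```

A guest writing into the sparse file pays the cost of block allocation at write time, which is exactly the wait that preallocation removes.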
Why speak about caching and preallocation? Because the super-slow KVM I/O speed really boils down to these two parameters, as we are going to see.

Raw image format performance
Let's begin with some tests regarding the most basic disk image format: the raw format. Raw images are very fast, but they miss a critical feature: the possibility to take real, fast snapshots of the virtual disk. So, they can be used only in situations where you do not need real snapshot support (or you have snapshot capability at the filesystem level, but this is another story).
Raw images read and write quickly, but lack snapshot support
How does the raw format perform, and how does caching affect the results?
As you can see, as long as you stay away from the write-through cache, raw images have very high speed. Note that with a raw image under the no-caching or write-back policies, preallocation has only a small influence.
What about the much more feature-rich qcow2 format?

Qcow2 image format performance
The qcow2 format is the default qemu/KVM image format. It has some very interesting features, such as compression and encryption, but above all it enables the use of real, file-level snapshots.
But how does it perform?
Mmm... without metadata preallocation, it performs very badly. Enable metadata preallocation, stay away from the write-through cache, and it performs very well.
To better compare it to the raw format, I made a chart with the no-caching raw and qcow2 results:
While without metadata preallocation the qcow2 format is 5x slower than raw, with metadata preallocation enabled the two are practically tied. This proves that while the raw format is primarily influenced by the caching setting, qcow2 is heavily dependent on both the preallocation and caching policies.
The influence of the paravirtualized I/O controller
Another important thing to check is the influence of the virtualized I/O controller that is presented to the guest. KVM lets you use not only the default emulated IDE controller, but also a new, paravirtualized I/O controller called virtio. This paravirtualized controller promises better speed and lower CPU usage.
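On the libvirt side, selecting virtio instead of the emulated IDE controller is just a different bus attribute on the disk's <target> element; a minimal sketch of the fragment (image path hypothetical):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <!-- bus='virtio' selects the paravirtualized controller instead of bus='ide' -->
  <target dev='vda' bus='virtio'/>
</disk>
```

Remember that the guest must have virtio drivers available: Linux kernels ship them, while Windows guests need a separate virtio driver package.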
How does it affect the results?
As you can see, the write-through scenario is the most affected one, while with the no-caching and write-back policies it has a lesser effect.
This does not mean that virtio is an unimportant project: the scope of this test was only to make sure that it doesn't introduce any I/O slowness. In a following article I will analyze this very promising driver in a much more complete manner.
I/O slowness cause: Bad default settings
So, we can state that to obtain good I/O throughput from the qcow2 format, two conditions must be met: the image must be created with metadata preallocation, and the write-through cache policy must be avoided.
However, using the virt-manager GUI that is normally used to create virtual disks and guest systems on Red Hat and Fedora, you cannot enable metadata preallocation on qcow2 files. While the storage volume creation interface lets you specify whether you want to preallocate the virtual disk, this function actually only works with raw files; if you use a qcow2 file it does nothing.
To create a file with metadata preallocation, you must open a terminal and issue the "qemu-img create" command. For example, if you want to create a ~10 GB qcow2 image with metadata preallocation, you must issue the command "qemu-img create -f qcow2 -o size=10000000000,preallocation=metadata file.img".
Moreover, the default caching scheme is the write-through one. While generally the guest creation wizard correctly disables the host-side cache, if you later add any virtual disk to the guest, the disk is often added with the "default" caching policy, which is a write-through one.
So, if you are using Red Hat Enterprise Linux or Fedora Linux as the host operating system for your virtualization server and you plan to use the qcow2 format, remember to manually create preallocated virtual disk files and to use the "none" cache policy (you can also use the "writeback" policy, but be warned that your guests will be more prone to data loss).

Conclusions
First of all, don't get me wrong: I'm very excited about the KVM and libvirt progress. Now we have not only a very robust hypervisor, but also some critical paravirtualized drivers, a good graphical interface and excellent host/guest remote management capabilities. I would like to publicly thank all the talented guys involved in the realization of these great and important projects. Thank you boys!
However, it's a shame that the current virt-manager GUI doesn't permit metadata preallocation on the qcow2 image format, as this format is much more feature-rich than the raw one. Moreover, I would like to see not only the guest creation wizard, but all the guest editing windows, always default to the no-cache policy for virtual disks; but this is a secondary problem: it is not so difficult to manually change a parameter...
The first problem, no metadata preallocation on qcow2, is way more serious, as it cannot be overcome without resorting to the command line. This problem should really be corrected as soon as possible. In the meantime, you can use the workaround described above, and remember to always check your virtual disk caching policy: don't use the "default" or "write-through" settings.
I hope that this article can help you get the most from the very good KVM, libvirt and related projects.