How to migrate from VMware and Hyper-V to OpenStack


Introduction

I migrated more than 120 VMware virtual machines (Linux and Windows) from VMware ESXi to OpenStack. In a lab environment I also migrated from Hyper-V with these steps. Unfortunately I am not allowed to publish the script files I used for this migration, but I can publish the steps and commands that I used to migrate the virtual machines. With these steps and commands, it should be easy to create scripts that do the migration automatically.

Just to make it clear: these steps do not convert traditional (non-cloud) applications into cloud-ready applications. In this case we started to use OpenStack as a traditional hypervisor infrastructure.

Update 9 September 2015: newer versions of libguestfs-tools and qemu-img convert handle VMDK files very well (I had some issues with older versions of the tools), so the migration can be more efficient. I removed the conversion steps from a VMDK to a single-file VMDK and from a VMDK to RAW. Dropping these steps roughly doubles the migration speed.

Disclaimer: this information is provided as-is. I decline any responsibility for damage caused by or with these steps and/or commands. I suggest you do not try and/or test these commands in a production environment. Some commands are very powerful and can destroy configurations and data in Ceph and OpenStack. So always use this information with care and great responsibility.

Global Steps
    1. Inject VirtIO Drivers
    2. Expand partitions (optional)
    3. Customize the virtual machine (optional)
    4. Create Cinder Volumes
    5. Convert VMDK to Ceph
    6. Create Neutron Port (optional)
    7. Create and boot instance in OpenStack
Specifications

Here are the specifications of the infrastructure I used for the migration:

    • Cloud platform: OpenStack Icehouse
    • Cloud storage: Ceph
    • Windows instances: Windows Server 2003 to 2012 R2 (all versions, except Itanium)
    • Linux instances: RHEL 5/6/7, SLES, Debian and Ubuntu
    • Only VMDK files from ESXi can be converted; I was not able to convert VMDK files from VMware Player with qemu-img
    • I have no migration experience with encrypted source disks
    • OpenStack provides VirtIO paravirtual hardware to instances
Requirements

A Linux 'migration node' (tested with Ubuntu 14.04/15.04, RHEL 6, Fedora 19-21) with:

    • Operating System (successfully tested with the following):
    • RHEL 6 (at the time of writing, RHEL 7 did not have the "libguestfs-winsupport" package available, which is necessary for NTFS-formatted disks)
    • Fedora 21
    • Ubuntu 14.04 and 15.04
    • Network connection to a running OpenStack environment (obviously). Preferably not over the Internet, as we need 'super admin' permissions. Local network connections are usually faster than connections over the Internet.
    • Enough hardware power to convert disks and run instances in KVM (sizing depends on the number of instances you want to migrate in a certain amount of time).

We used a server with 8x Intel Xeon E3-1230 @ 3.3 GHz, 32 GB RAM and 8x 1 TB SSD, and we managed to migrate more than 500 GB per hour. However, it really depends on how much of the disk space of the instances is actually in use. My old company laptop (Core i5, 4 GB RAM and an old 4500 RPM HDD) also worked, but obviously the performance was very poor.

    • Local sudo (root) permissions on the Linux migration node
    • QEMU/KVM Host
    • Permissions to OpenStack (via Keystone)
    • Permissions to Ceph
    • Unlimited network access to the OpenStack API and Ceph (I have not figured out exactly which network ports are necessary)
    • VirtIO drivers (downloadable from Red Hat, Fedora, and more)
    • Packages (all packages should be in the default distribution repositories):

"Python-cinderclient" (to control volumes)

"Python-keystoneclient" (for authentication to OpenStack)

"Python-novaclient" (to control instances)

"Python-neutronclient" (to control networks)

"Python-httplib2" (to being able to communicate with Web service)

"Libguestfs-tools" (to access the disk files)

"Libguestfs-winsupport" (should be separately installed in RHEL based systems only)

"Libvirt-client" (to control KVM)

"Qemu-img" (to convert disk files)

"Ceph" (to import virtual disk into Ceph)

"Vmware-vdiskmanager" (to expand VMDK disks, downloadable from VMware)

Steps

1. Inject VirtIO Drivers
1.1 Windows Server

Since Windows Server 2012 and Windows 8.0, the driver store is protected by Windows, and it is very hard to inject drivers into an offline Windows disk. Windows Server does not boot from VirtIO hardware by default. So, I took the following steps to install the VirtIO drivers into Windows. Note that these steps should work for all tested Windows versions (2003/2008/2012).

    1. Create a new KVM instance (see the sketch after this list). Make sure the Windows VMDK disk is attached as an IDE disk! The network card should be a VirtIO device.
    2. Add an extra VirtIO disk, so Windows can install the VirtIO drivers.
    3. Of course you should also add a VirtIO ISO or floppy drive which contains the drivers. You could also inject the driver files with virt-copy-in and inject the necessary registry settings (see paragraph 1.4) for automatic installation of the drivers.
    4. Start the virtual machine and give Windows a few minutes to find the new VirtIO hardware. Install the drivers for all newly found hardware. Verify that there are no devices left without a driver installed.
    5. Shut down the system and remove the extra VirtIO disk.
    6. Redefine the Windows VMDK disk as a VirtIO disk (it is currently IDE) and start the instance. It should now boot without problems. Shut down the virtual machine.
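
A minimal sketch of how such a temporary KVM instance could be created with virt-install, assuming libvirt and virt-install are available on the migration node. All paths, names and sizes below are illustrative; adjust them to your environment.

# Scratch disk that lets Windows detect VirtIO hardware and install the drivers
qemu-img create -f qcow2 /tmp/virtio-scratch.qcow2 1G

# Boot the Windows VMDK on IDE, with an extra VirtIO disk, the VirtIO driver ISO
# and a VirtIO network card
virt-install \
  --name win-migrate-tmp \
  --ram 4096 --vcpus 2 \
  --import --noautoconsole \
  --os-variant win2k8r2 \
  --disk path=/vm/windows-server.vmdk,format=vmdk,bus=ide \
  --disk path=/tmp/virtio-scratch.qcow2,bus=virtio \
  --disk path=/iso/virtio-win.iso,device=cdrom \
  --network network=default,model=virtio

# After the drivers are installed and the VM is shut down, switch the system disk
# to the VirtIO bus, for example by editing the domain XML (change bus='ide' to
# bus='virtio' on the Windows disk) and booting once more:
virsh edit win-migrate-tmp
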
1.2 Linux (kernel 2.6.25 and above)

Linux kernels 2.6.25 and above already have built-in support for VirtIO hardware, so there is no need to inject VirtIO drivers. Create and start a new KVM virtual machine with VirtIO hardware. If LVM partitions do not mount automatically, run the following to fix it:

(Log in)

mount -o remount,rw /
pvscan
vgscan
reboot

(After the reboot, all LVM partitions should be mounted and Linux should boot fine.)

Shut down the virtual machine when done.

1.3 Linux (kernel older than 2.6.25)

Some Linux distributions provide VirtIO modules for older kernel versions. Some examples:

    • Red Hat provides VirtIO support for RHEL 3.9 and up
    • SUSE provides VirtIO support for SLES SP3 and up

The steps for older kernels are:

    1. Create a KVM instance.
    2. Linux (prior to kernel 2.6.25): create and boot the KVM instance with IDE hardware. (KVM is limited to 4 IDE disks, as only one IDE controller can be configured, which results in 4 disks!) I have not tried SCSI or SATA as I only had old Linux machines with no more than 4 disks. Linux should start without issues.
    3. Load the VirtIO modules (this is distribution specific); see the links below for RHEL (older versions) and for SLES SP3 systems, and the sketch after them.
    4. Shut down the instance.
    5. Change all disks to VirtIO disks and boot the instance. It should now boot without problems.
    6. Shut down the virtual machine when done.

For Red Hat, see: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/ch10s04.html

For SUSE, see: https://www.suse.com/documentation/opensuse121/book_kvm/data/app_kvm_virtio_install.htm
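
As an illustration of what the Red Hat document above describes, on an older RHEL 5 style guest the VirtIO modules can be added to the initrd roughly like this. This is a hedged sketch: the alias name, module list and mkinitrd options may differ per distribution and kernel version.

# Run inside the guest while it still boots from IDE hardware
echo "alias scsi_hostadapter1 virtio_blk" >> /etc/modprobe.conf

# Rebuild the initrd so the VirtIO block and PCI modules are available at boot
mkinitrd -f --with=virtio_pci --with=virtio_blk \
  /boot/initrd-$(uname -r).img $(uname -r)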

1.4 Windows Server (and older versions); deprecated

For older Windows versions you could also use these steps to insert the drivers (the steps in paragraph 1.1 should also work for Windows 2003/2008).

  1. Copy all VirtIO driver files (from the downloaded VirtIO drivers) of the corresponding Windows version and architecture to C:\Drivers\. You can use the tool virt-copy-in to copy files and folders into the virtual disk.
  2. Copy the *.sys files to %windir%\system32\drivers\ (to find the correct directory you may want to use virt-ls; note that Windows is not very consistent with lower and upper case characters). You can use the tool virt-copy-in to copy files and folders into the virtual disk.
  3. The Windows registry should tie the hardware IDs to the drivers, but there are no VirtIO drivers installed in Windows by default, so we need to do this ourselves. You can inject the registry file with virt-win-reg. If you choose to copy the VirtIO drivers to a location other than C:\Drivers, you must change the "DevicePath" variable in the last line (the easiest way is to change it on some Windows machine, export the registry file, and use that line).

Registry file (I called the file mergeviostor.reg, as it holds the VirtIO storage information only):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00000000]
"ClassGUID"="{4d36e97b-e325-11ce-bfc1-08002be10318}"
"Service"="VioStor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00020000]
"ClassGUID"="{4d36e97b-e325-11ce-bfc1-08002be10318}"
"Service"="VioStor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4]
"ClassGUID"="{4d36e97b-e325-11ce-bfc1-08002be10318}"
"Service"="VioStor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1001&subsys_00021AF4&REV_00]
"ClassGUID"="{4d36e97b-e325-11ce-bfc1-08002be10318}"
"Service"="VioStor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\CriticalDeviceDatabase\pci#ven_1af4&dev_1004&subsys_00081AF4&REV_00]
"ClassGUID"="{4d36e97b-e325-11ce-bfc1-08002be10318}"
"Service"="VioStor"

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\viostor]
"ErrorControl"=dword:00000001
"Group"="SCSI Miniport"
"Start"=dword:00000000
"Tag"=dword:00000021
"Type"=dword:00000001
"ImagePath"="System32\\drivers\\viostor.sys"

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion]
"DevicePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,74,00,25,00,5c,00,69,00,6e,00,66,00,3b,00,63,00,3a,00,5c,00,44,00,72,00,69,00,76,00,65,00,72,00,73,00,00,00

When these steps have been executed, Windows should boot from the VirtIO disks without a BSOD. Also, all other drivers (network, balloon, etc.) should install automatically when Windows boots.

See: https://support.microsoft.com/en-us/kb/314082 (written for Windows XP, but it is still usable for Windows 2003 and 2008).

See also: http://libguestfs.org/virt-copy-in.1.html and http://libguestfs.org/virt-win-reg.1.html
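
For illustration, injecting the files and the registry settings from the migration node could look roughly like the sketch below. This is a hedged example: the disk and driver file names are placeholders, the Windows directory casing may differ per guest (check it with virt-ls), and you should consult the man pages linked above for the exact invocation supported by your libguestfs version.

# Create C:\Drivers in the offline Windows disk and copy the storage driver files into it
guestfish -a windows-server.vmdk -i mkdir /Drivers
virt-copy-in -a windows-server.vmdk viostor.sys viostor.inf viostor.cat /Drivers/

# Copy the .sys file into the Windows drivers directory (verify the exact path with virt-ls)
virt-copy-in -a windows-server.vmdk viostor.sys /Windows/System32/drivers/

# Merge the registry file shown above
virt-win-reg --merge windows-server.vmdk mergeviostor.reg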

2. Expand partitions (optional)

Some Windows servers I migrated had limited free disk space on the Windows partition; there was not enough space to install new management applications. So, I used the vmware-vdiskmanager tool with the '-x' argument (available from vmware.com) to increase the disk size. You still need to expand the partition from within the operating system; you can do this while customizing the virtual machine in the next step.
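
A hedged example of growing a VMDK before conversion; the target size and file name below are illustrative, and the disk must not have snapshots:

vmware-vdiskmanager -x 60GB /vm/windows-server.vmdk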

3. Customize the virtual machine (optional)

To prepare the operating system to run in OpenStack, you probably want to uninstall some software (like VMware Tools and drivers), change passwords and install new management tooling, etc. You can automate this by writing a script that does it for you (such scripts are beyond the scope of this article). You should be able to inject the script and files into the virtual disk with the virt-copy-in command.

3.1 Automatically start scripts in Linux

I started the scripts within Linux manually, as I only had a few Linux servers to migrate. I guess Linux engineers should be able to completely automate this.

3.2 Automatically start scripts in Windows

I chose the RunOnce method to start scripts at Windows boot, as it works on all versions of Windows that I had to migrate. You can put a script in RunOnce by injecting a registry file. RunOnce scripts are only run when a user has logged in, so you should also inject a Windows administrator user name and password and set AutoAdminLogon to '1'. When Windows starts, it will automatically log in as the defined user. Make sure to shut down the virtual machine when done.

Example registry file to automatically log in to Windows (with user 'Administrator' and password 'Password') and start C:\startupwinscript.vbs:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce]
"Script"="cscript c:\\startupwinscript.vbs"
"Parameters"=""

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"AutoAdminLogon"="1"
"DefaultUserName"="Administrator"
"DefaultPassword"="Password"

4. Create Cinder Volumes

For every disk you want to import, you need to create a Cinder volume. The volume size in the Cinder command does not really matter, as we remove (and recreate with the import) the Ceph device in the next step. We create the Cinder volume only to create the link between Cinder and Ceph.

Nevertheless, you should keep the volume size the same as the disk you are planning to import. This is useful for the overview in the OpenStack dashboard (Horizon).

You create a Cinder volume with the following command (the size is in GB, and you can check the available volume types with cinder type-list):

cinder create --display-name <name_of_disk> --volume-type <volume_type> <size>

Note the volume ID (you can also look up the volume ID later with the following command), as we need the IDs in the next step.

cinder show <name_of_disk>

Cinder command information: http://docs.openstack.org/cli-reference/content/cinderclient_commands.html
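
A hedged, concrete illustration of this step; the volume name, size and volume type below are placeholders:

cinder create --display-name webserver01-disk0 --volume-type ceph 100
cinder list | grep webserver01-disk0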

5. Convert VMDK to Ceph

As soon as the Cinder volumes are created, we can convert the VMDK disk files to RBD block devices (Ceph). But first we need to remove the actual Ceph device. Make sure you remove the correct Ceph block device!

First you should know in which Ceph pool the disk resides. Then remove the volume from Ceph (the volume-id is the volume ID that you noted in the previous step, 'Create Cinder Volumes'):

rbd -p <ceph_pool> rm volume-<volume-id>

The next step is to convert the VMDK file into the volume on Ceph (adding Ceph-specific tuning arguments can result in better performance; the vmdk_disk_file variable is the complete path to the VMDK file, and the volume-id is the ID that you noted before):

qemu-img convert <vmdk_disk_file> -O rbd rbd:<ceph_pool>/volume-<volume-id>

Do this for all virtual disks of the virtual machine.

Be careful! The rbd command is VERY powerful (you could destroy more data on Ceph than intended)!
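
A hedged, concrete illustration of this step, assuming the Cinder pool is called volumes; the volume ID and file path are placeholders:

# Remove the empty Ceph device that Cinder created, then import the VMDK
# (-p shows conversion progress)
rbd -p volumes rm volume-1a2b3c4d-5678-90ab-cdef-1234567890ab
qemu-img convert -p -O rbd /data/export/webserver01.vmdk \
  rbd:volumes/volume-1a2b3c4d-5678-90ab-cdef-1234567890ab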

6. Create Neutron port (optional)

In some cases you might want to set a fixed IP address or MAC address. You can do this by creating a port with Neutron and using that port in the next step (create and boot instance in OpenStack).

You should first know what the network_name is (nova net-list) and you need the 'Label'. Only the network_name is mandatory. You can also add security groups by adding:

--security-group <security_group_name>

Add this parameter for each security group; if you want to add, for example, 6 security groups, you should add this parameter 6 times.

neutron port-create --name <port_name> --fixed-ip ip_address=<ip_address> --mac-address <mac_address> <network_name>

Note the ID of the Neutron port; you will need it in the next step.
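
A hedged illustration with placeholder values; the security group is optional, as described above:

neutron port-create --name webserver01-port \
  --fixed-ip ip_address=192.168.1.20 \
  --mac-address fa:16:3e:12:34:56 \
  --security-group default \
  net-production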

7. Create and boot instance in OpenStack

Now we have everything prepared to create an instance from the Cinder volumes and an optional Neutron port.

Note the Volume-id of the boot disk.

Now you only need to know the ID of the flavor you want to use. Run nova flavor-list to get the flavor ID of the desired flavor.

Now you can create and boot the new instance:

nova boot --flavor <flavor_id> --boot-volume <boot_volume_id> --nic port-id=<neutron_port_id> <instance_name>

Note the instance ID. Now add all other disks of the instance by executing this command (if there are other volumes you want to add):

nova volume-attach <instance_id> <volume_id>
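
A hedged end-to-end illustration of this step; the flavor ID, volume IDs, port ID and instance name are placeholders:

nova flavor-list
nova boot --flavor 2 \
  --boot-volume 1a2b3c4d-5678-90ab-cdef-1234567890ab \
  --nic port-id=9f8e7d6c-5b4a-3210-fedc-ba0987654321 \
  webserver01
nova volume-attach webserver01 2b3c4d5e-6789-01ab-cdef-234567890abc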

http://www.npit.nl/blog/2015/08/13/migrate-to-openstack/
