Create, manage, and migrate KVM virtual machines in CentOS


KVM Virtual Machine Management
I. Environment
Role          Hostname   IP                 OS
-----------   --------   ----------------   ---------------
kvm_server    target     192.168.32.40/24   RHEL 6.0 x86_64
vir_guest1    node4      192.168.32.34/24   RHEL 5.5 i386
vir_guest2    node5      192.168.32.35/24   RHEL 5.5 i386
manager       manager    192.168.32.33/24   RHEL 5.5 i386
esxi_server   ESXi       192.168.2.20/24    VMware ESXi 3.5
II. Install KVM
[root@target ~]# yum install -y qemu-kvm.x86_64 qemu-kvm-tools.x86_64  # install the KVM packages
[root@target ~]# yum install libvirt.x86_64 libvirt-cim.x86_64 libvirt-client.x86_64 libvirt-java.noarch libvirt-python.x86_64  # install the libvirt management tools
[root@target ~]# modprobe kvm  # load the kvm kernel module
[root@target ~]# modprobe kvm-intel  # on Intel CPUs, load kvm-intel for full virtualization; hardware VT support is required and can be enabled in the BIOS
[root@target ~]# modprobe kvm-amd  # on AMD CPUs, load kvm-amd instead

[root@target ~]# modprobe -l | grep kvm  # check that the kvm modules are available
kernel/arch/x86/kvm/kvm.ko
kernel/arch/x86/kvm/kvm-intel.ko
kernel/arch/x86/kvm/kvm-amd.ko
[root@target ~]# modprobe -l | grep kvm-intel
kernel/arch/x86/kvm/kvm-intel.ko
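
modprobe -l only lists modules available on disk; to confirm that the kvm modules are actually loaded into the running kernel, a quick check along these lines can be used (not part of the original listing):

[root@target ~]# lsmod | grep kvm  # loaded modules should include kvm plus kvm_intel or kvm_amd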

III. Install the guest virtual machine
1. Install and manage virtual machines directly through virt-manager (omitted)

2. Install a guest virtual machine from the command line
[root@target ~]# yum install virt-viewer  # provides the graphical console used to install the guest
[root@target ~]# virt-install \  # see virt-install --help for all options
--name node4 \  # virtual machine name
--ram=1024 \  # allocated memory in MB
--arch=x86_64 \  # emulated CPU architecture
--vcpus=1 \  # number of virtual CPUs
--check-cpu \  # warn if the number of vcpus exceeds the number of physical CPUs
--os-type=linux \  # type of operating system to install, e.g. 'linux', 'unix', 'windows'
--os-variant=rhel5 \  # operating system variant, e.g. 'fedora6', 'rhel5', 'solaris10', 'win2k'
--disk path=/virhost/node4.img,device=disk,bus=virtio,size=20,sparse=true \  # disk image used by the VM, size in GB
--bridge=br0 \  # attach the network through the transparent bridge br0
--noautoconsole \  # do not open the console automatically (see the console commands after this listing)
--pxe  # boot from the network for a PXE installation
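
Because --noautoconsole was specified, the installation proceeds in the background. Assuming a working X display, the console can be attached afterwards; a minimal sketch:

[root@target ~]# virt-viewer node4  # attach the graphical console of the running guest
[root@target ~]# virsh vncdisplay node4  # or look up which VNC display the guest is using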

IV. Use virsh to manage virtual machines
1. Power a guest on and off
[root@target ~]# virsh start node4  # start the guest
[root@target ~]# virsh create /etc/libvirt/qemu/node4.xml  # start a guest directly from its configuration file
[root@target ~]# virsh shutdown node4  # graceful shutdown
[root@target ~]# virsh destroy node4  # force power off
[root@target ~]# virsh list --all  # view guest status
Id Name    State
----------------------------------
18 node4   running
-  node5   shut off
-  win8    shut off
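
A few related virsh power-management subcommands, shown here as a supplementary sketch, are also useful in practice:

[root@target ~]# virsh reboot node4  # reboot the guest
[root@target ~]# virsh suspend node4  # pause the guest in memory
[root@target ~]# virsh resume node4  # resume a suspended guest
[root@target ~]# virsh autostart node4  # start the guest automatically when the host boots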

2. Add and delete virtual machines
[root@target ~]# virsh define /etc/libvirt/qemu/node5.xml  # define a guest from its configuration file
[root@target ~]# virsh list --all  # node5 has been added
Id Name    State
----------------------------------
18 node4   running
-  node5   shut off
-  win8    shut off

[root@target ~]# virsh undefine node5  # remove a guest's definition
[root@target ~]# ls /etc/libvirt/qemu
networks node4.xml win8.xml
[root@target ~]# virsh list --all  # node5 has been removed
Id Name    State
----------------------------------
18 node4   running
-  win8    shut off
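
Note that undefine deletes the guest's XML from /etc/libvirt/qemu. If the definition might be needed again, it can be exported first; the backup path below is only an example:

[root@target ~]# virsh dumpxml node5 > /root/node5.xml.bak  # save the configuration before undefining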

3. Manage virtual machines remotely (qemu+ssh connection)
[root@target ~]# yum install virt-viewer
[root@target ~]# export DISPLAY=192.168.40.18:0.0
[root@target ~]# virt-viewer -c qemu:///system node4  # manage a local guest; "system" requests system-level permissions; note the three slashes after qemu:
[root@manager ~]# virt-viewer -c qemu+ssh://root@192.168.32.40/system node4  # manage a guest on a remote Linux host through virt-viewer + ssh
Xlib: extension "RANDR" missing on display "192.168.40.18:0.0".
root@192.168.32.40's password:
root@192.168.32.40's password:
# the virt-viewer GTK management window is displayed
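
The same qemu+ssh URI also works for command-line management with virsh, so no X display is required; for example (assuming the same host and credentials as above):

[root@manager ~]# virsh -c qemu+ssh://root@192.168.32.40/system list --all  # manage the remote hypervisor from the shell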

4. Install a new virtual machine from an existing configuration file
[root@target ~]# qemu-img create -f qcow2 /virhost/kvm_node/node6.img 20G
# create a disk image file for the new virtual machine

[root@target ~]# virsh list
Id Name    State
----------------------------------
18 node4   running

[root@target ~]# virsh dumpxml node4 > /etc/libvirt/qemu/node6.xml
# export node4's hardware configuration to /etc/libvirt/qemu/node6.xml

[root@target ~]# vim /etc/libvirt/qemu/node6.xml
<domain type='kvm' id='20'>  # change the id for node6
  <name>node6</name>  # name the guest node6
  <uuid>4b7e91eb-6521-c2c6-cc64-c1ba72707fc7</uuid>  # the uuid must be changed, or it will conflict with node4's
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel5.4.0'>hvm</type>
    <boot dev='network'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/virhost/node4.img'/>  # change this to the new guest's disk image, e.g. /virhost/kvm_node/node6.img
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <mac address='54:52:00:69:d5:c7'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
    </interface>
    <interface type='bridge'>
      <mac address='54:52:00:69:d5:d7'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/4'/>
      <target port='0'/>
    </serial>
    <console type='pty' tty='/dev/pts/4'>
      <source path='/dev/pts/4'/>
      <target port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
  </devices>
</domain>

[root@target ~]# virsh define /etc/libvirt/qemu/node6.xml
# define the new guest from its configuration file; use virsh edit node6 to modify the configuration later

[root@target ~]# virsh start node6
# start the virtual machine
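
When cloning a configuration this way, the <uuid> (and ideally the <mac address> values) must be unique. A fresh UUID can be generated on the host, and virt-clone can automate the whole copy; both are shown here as a sketch, not as part of the original procedure:

[root@target ~]# uuidgen  # generate a new UUID to paste into <uuid>...</uuid>
[root@target ~]# virt-clone -o node4 -n node6 -f /virhost/kvm_node/node6.img  # alternative: clone disk and XML in one step (node4 must be shut off)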

5. Enable VNC for a virtual machine
[root@target ~]# virsh edit node4  # edit node4's configuration; directly editing node4.xml with vim is not recommended
Change
<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' keymap='en-us'/>
# port='-1' autoport='yes': the port is assigned automatically; listen='127.0.0.1' restricts access to the loopback interface (required for virt-manager management); no password is set
to
<graphics type='vnc' port='5904' autoport='no' listen='0.0.0.0' keymap='en-us' passwd='xiaobai'/>
# fixed VNC port 5904 (not assigned automatically), VNC password xiaobai, listening on all interfaces

The remote VNC access address is then 192.168.32.40:5904.
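
Any standard VNC client can then connect to the fixed port; for example with vncviewer (display :4 corresponds to port 5904):

[root@target ~]# vncviewer 192.168.32.40:4  # prompts for the VNC password xiaobai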

V. Storage pool and storage volume management
1. Create a KVM host storage pool
1) Create a directory-based storage pool
[root@target virhost]# virsh pool-define-as vmware_pool --type dir --target /virhost/vmware
# define the storage pool vmware_pool
or
[root@target virhost]# virsh pool-create-as --name vmware_pool --type dir --target /virhost/vmware
# create the storage pool vmware_pool (type: directory, target /virhost/vmware); the effect is the same as pool-define-as, except the pool is started immediately and is not persistent

2) Create a filesystem-based storage pool
[root@target virhost]# virsh pool-define-as --name vmware_pool --type fs --source-dev /dev/vg_target/LogVol02 --source-format ext4 --target /virhost/vmware
or
[root@target virhost]# virsh pool-create-as --name vmware_pool --type fs --source-dev /dev/vg_target/LogVol02 --source-format ext4 --target /virhost/vmware

3) View storage pool information
[root@target virhost]# virsh pool-info vmware_pool  # view storage pool information
Name:        vmware_pool
UUID:        2e9ff708-241f-fd7b-3b57-25df273a55db
State:       running
Persistent:  no
Autostart:   no
Capacity:    98.40 GB
Allocation:  18.39 GB
Available:   80.01 GB
4) Start the storage pool
[root@target virhost]# virsh pool-start vmware_pool  # start the storage pool
[root@target virhost]# virsh pool-list
Name         State    Autostart
-----------------------------------------
default      active   yes
virhost      active   yes
vmware_pool  active   no
5) Destroy and undefine the storage pool
[root@target virhost]# virsh pool-destroy vmware_pool  # stop the storage pool
[root@target virhost]# virsh pool-list --all
Name         State     Autostart
-----------------------------------------
default      active    yes
virhost      active    yes
vmware_pool  inactive  no
[root@target virhost]# virsh pool-undefine vmware_pool  # remove the storage pool definition
[root@target virhost]# virsh pool-list --all
Name         State     Autostart
-----------------------------------------
default      active    yes
virhost      active    yes
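
To make a defined pool start automatically with libvirtd, it can be marked for autostart; for a directory pool whose target does not yet exist, pool-build creates it first. A minimal sketch:

[root@target virhost]# virsh pool-build vmware_pool  # create the target directory if it is missing
[root@target virhost]# virsh pool-autostart vmware_pool  # start the pool automatically at libvirtd startup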

2. After a storage pool is created, volumes can be created in it to serve as virtual machine hard disks
[root@target virhost]# virsh vol-create-as --pool vmware_pool --name node6.img --capacity 10G --allocation 1G --format qcow2
# create the volume node6.img in storage pool vmware_pool: capacity 10 GB, initial allocation 1 GB, format qcow2

[root@target virhost]# virsh vol-info /virhost/vmware/node6.img  # view volume information
Name:        node6.img
Type:        file
Capacity:    10.00 GB
Allocation:  136.00 KB
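
A volume created this way can also be attached to an existing guest as an additional disk; a sketch, assuming node4 is the target guest and the vdb device name is free:

[root@target virhost]# virsh attach-disk node4 /virhost/vmware/node6.img vdb --driver qemu --subdriver qcow2  # hot-add the volume as /dev/vdb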

3. Install a VM on a storage volume
[root@target virhost]# virt-install --connect qemu:///system \
-n node7 \
-r 512 \
-f /virhost/vmware/node7.img \
--vnc \
--os-type=linux \
--os-variant=rhel6 \
--vcpus=1 \
--network bridge=br0 \
-c /mnt/rhel-server-6.0-x86_64-dvd.iso
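
To confirm that the new guest's disk image landed in the storage pool, the pool's volumes can be listed:

[root@target virhost]# virsh vol-list vmware_pool  # should now include node7.img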

VI. Migrate virtual machines (VMware to KVM)
1. Install the software
[root@target ~]# yum install -y virt-v2v.x86_64
[root@target ~]# rpm -ivh libguestfs-winsupport-1.0-7.el6.x86_64.rpm virtio-win-1.2.0-1.el6.noarch.rpm
# Windows guests need the libguestfs-winsupport package for NTFS file system support and the virtio-win package for Windows paravirtualized storage and network device drivers

2. Create a KVM host storage pool (omitted)
During migration, virt-v2v copies the migrated virtual machine into a predefined storage pool on the KVM host.

3. Create a network interface on the KVM host (omitted)
After migration, the VM connects to the KVM host network, so the host must have a matching network interface, such as a bridge.

4. Create or modify the $HOME/.netrc file on the KVM host and add the username and password of the VMware ESXi server
[root@target ~]# cat ~/.netrc
machine 192.168.2.20 login root password xxxxxx
[root@target ~]# chmod 0600 ~/.netrc

5. Migrate from VMware ESXi to KVM
[root@target ~]# virt-v2v -ic esx://192.168.2.20/?no_verify=1 -op virhost -b br0 tserver21
** HEAD https://192.168.2.20/folder/tserver21/RHEL4.6-flat.vmdk?dcPath=ha-datacenter&dsName=ESX35-bak%3Astorage1 ==> 401 Unauthorized
** HEAD https://192.168.2.20/folder/tserver21/RHEL4.6-flat.vmdk?dcPath=ha-datacenter&dsName=ESX35-bak%3Astorage1 ==> 200 OK
** GET https://192.168.2.20/folder/tserver21/RHEL4.6-flat.vmdk?dcPath=ha-datacenter&dsName=ESX35-bak%3Astorage1 ==> 200 OK (2084 s)
unknown filesystem /dev/hda
unknown filesystem /dev/fd0
virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: rhel/4/kernel-smp-2.6.9-89.EL.i686.rpm
virt-v2v: tserver21 configured with non-virtio drivers

# all options can also be specified in the configuration file /etc/virt-v2v.conf
# -op: specifies the storage pool to convert into, virhost
# -b: specifies bridged networking using bridge br0
# -ic: specifies the source address to convert from

[root@target kvm_node]# virsh list --all
Id Name       State
----------------------------------
1  node4      running
-  node5      shut off
-  tserver21  shut off
-  win8       shut off
[root@target kvm_node]# virsh start tserver21

6. KVM-to-KVM migration
[root@target kvm_node]# virt-v2v -ic qemu+ssh://192.168.32.179/system -op virhost -b br0 node6
root@192.168.32.179's password:
root@192.168.32.179's password:
unknown filesystem label SWAP-vda3
virt-v2v: The connected hypervisor does not support a machine type of rhel5.4.0. It will be set to the current default.
virt-v2v: node6 configured with virtio drivers

[root@target kvm_node]# virsh list --all
Id Name       State
----------------------------------
1  node4      running
-  node5      shut off
-  node6      shut off
-  tserver21  shut off
-  win8       shut off

[root@target kvm_node]# virsh start node6
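
As a quick post-migration check (a sketch, not from the original procedure), the converted guest's XML can be inspected to confirm that virtio devices were configured:

[root@target kvm_node]# virsh dumpxml node6 | grep -i virtio  # the disk bus and NIC model should show virtio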
