Introduction to live migration of Xen virtual machines
Xen provides a powerful feature: live migration (also called dynamic or real-time migration). It allows a running domain to be moved to another Xen server with minimal service disruption.
The main benefits of Xen live migration are listed below:
1. Xen live migration, combined with high-availability solutions such as Heartbeat, can give us an always-on system. The latest versions of SUSE Linux Enterprise Server and Red Hat Enterprise Linux also leverage Xen to provide a variety of high-availability solutions, making it easy to meet the stringent requirements of different services while ensuring that critical business services are not interrupted.
2. It lets us maintain the physical server hosting a virtual machine preventively, before problems become failures. You can monitor the server and immediately move the guests elsewhere to resolve potential or suspicious problems.
3. It makes load balancing across multiple servers possible, enabling us to make better use of all computing resources in the enterprise and maximize their utilization. Note, however, that the open-source version of Xen does not currently support automatic live migration when a fault occurs on Dom0.
4. It makes it easier to add computing capacity to the configuration when needed.
5. Hardware can be replaced as needed without interrupting the services running on it.
Knowing the benefits of live migration is not enough, so it's time to implement Xen live migration in practice.
Experiment Introduction:
1. There is one iSCSI shared storage server, and the iSCSI storage is used by two Xen virtualization platforms;
2. The experimental environment has two Xen virtualization platforms; a simple BusyBox virtual machine is used, with its image file stored on the iSCSI shared storage. Here I prepare the same simple BusyBox virtual machine on both virtualization platforms;
3. Perform a live migration of one of the BusyBox virtual machine instances between the two Xen virtualization platforms;
Experimental architecture diagram:
Experiment implementation:
I. Building iSCSI shared storage
1. iSCSI server setup
Partition the disk:
# echo -e "n\np\n3\n\n+5G\nt\n3\n8e\nw\n" | fdisk /dev/sda
# partx -a /dev/sda
# fdisk -l /dev/sda3
Disk /dev/sda3: 5378 MB, 5378310144 bytes
255 heads, 63 sectors/track, 653 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
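For reference, the string piped into fdisk above simply answers its interactive prompts; a sketch of the assumed mapping is shown below (partition type 8e marks it as Linux LVM, although here the partition is used directly as a tgt backing store):
# n        -> new partition
# p        -> primary partition
# 3        -> partition number 3
# (empty)  -> accept the default first cylinder
# +5G      -> partition size of 5 GB
# t, 3     -> change the type of partition 3
# 8e       -> type code for Linux LVM
# w        -> write the partition table and exit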
To install iSCSI server-side software:
# yum install -y scsi-target-utils
To edit the configuration file for an iSCSI server:
# vim /etc/tgt/targets.conf
# Add the following content:
<target iqn.2015-02.com.stu31:t1>
    backing-store /dev/sda3
    initiator-address 172.16.31.0/24
</target>
When the configuration is complete, you can start the iSCSI server:
# service tgtd start
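Optionally, to have the target service come back automatically after a reboot (not part of the original steps, but a common follow-up):
# chkconfig tgtd on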
To view shared devices:
# tgtadm --lld iscsi -m target -o show
Target 1: iqn.2015-02.com.stu31:t1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 5378 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sda3
            Backing store flags:
    Account information:
    ACL information:
        172.16.31.0/24
2. iSCSI client installation and configuration
Install the iSCSI client software on both Xen virtualization platform nodes:
#yum install -y iscsi-initiator-utils
Start the iSCSI client services:
# service iscsi start
# service iscsid start
Let the client discover the storage that the iSCSI server shares:
# iscsiadm -m discovery -t st -p 172.16.31.3
172.16.31.3:3260,1 iqn.2015-02.com.stu31:t1
Register the iSCSI shared device and log in to the node:
# iscsiadm -m node -T iqn.2015-02.com.stu31:t1 -p 172.16.31.3 -l
Logging in to [iface: default, target: iqn.2015-02.com.stu31:t1, portal: 172.16.31.3,3260] (multiple)
Login to [iface: default, target: iqn.2015-02.com.stu31:t1, portal: 172.16.31.3,3260] successful.
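If you want the session to be re-established automatically after a reboot (an optional step not covered in this walkthrough), the node record can be switched to automatic startup and the client service enabled at boot:
# iscsiadm -m node -T iqn.2015-02.com.stu31:t1 -p 172.16.31.3 --op update -n node.startup -v automatic
# chkconfig iscsi on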
To view iSCSI storage:
# fdisk -l /dev/sdb

Disk /dev/sdb: 5378 MB, 5378310144 bytes
166 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 10292 * 512 = 5269504 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
After logging in, you can partition the disk:
# echo -e "n\np\n1\n\n+2G\nw\n" | fdisk /dev/sdb
# partx -a /dev/sdb
View the partitioned disk:
# fdisk -l /dev/sdb

Disk /dev/sdb: 5378 MB, 5378310144 bytes
166 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 10292 * 512 = 5269504 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8e1d9dd0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         409     2104683   83  Linux
II. Building the Xen virtualization environment on the client nodes
Since two Xen virtualization nodes are required and I already have one working Xen virtualization node, we only need to add one more virtualization node; its setup is documented here as the example;
1. Configure the Xen yum repository
# vim /etc/yum.repos.d/xen4.repo
[xen4]
name=xen4 for CentOS6
baseurl=ftp://172.16.0.1/pub/sources/6.x86_64/xen4centos/x86_64/
gpgcheck=0
Clear the existing yum cache:
# yum clean all
2. Install the xen-4.2.5 packages and update the kernel to version 3.7.10
# yum install -y xen-4.2.5 xen-libs-4.2.5 xen-runtime-4.2.5 kernel-xen
3. Configure the grub.conf configuration file
# vim /etc/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (3.7.10-1.el6xen.x86_64)
        root (hd0,0)
        kernel /xen.gz dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin cpufreq=xen
        module /vmlinuz-3.7.10-1.el6xen.x86_64 ro root=/dev/mapper/vg0-root rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 rd_LVM_LV=vg0/swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg0/root KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet
        module /initramfs-3.7.10-1.el6xen.x86_64.img
title CentOS 6 (2.6.32-504.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-504.el6.x86_64 ro root=/dev/mapper/vg0-root rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 rd_LVM_LV=vg0/swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg0/root KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet
        initrd /initramfs-2.6.32-504.el6.x86_64.img
After the configuration is complete, reboot the Linux system; once booted, it automatically enters the Xen Dom0 environment:
Check the kernel version; it has been upgraded to 3.7.10:
# uname -r
3.7.10-1.el6xen.x86_64
View the Xen xend service boot entry:
# chkconfig --list xend
xend            0:off   1:off   2:off   3:on    4:on    5:on    6:off
4. Start Xend Service
# service xend start
To view the virtual machines that are now running:
# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     1     r-----     23.7
Check the Xen host information:
# xm info
host                   : test2.stu31.com
release                : 3.7.10-1.el6xen.x86_64
version                : #1 SMP Thu Feb 5 12:56 CST 2015
machine                : x86_64
nr_cpus                : 1
nr_nodes               : 1
cores_per_socket       : 1
threads_per_core       : 1
cpu_mhz                : 2272
hw_caps                : 078bfbff:28100800:00000000:00000140:00000209:00000000:00000001:00000000
virt_caps              :
total_memory           : 2047
free_memory            : 998
free_cpus              : 0
xen_major              : 4
xen_minor              : 2
xen_extra              : .5-38.el6
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : unavailable
xen_commandline        : dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin cpufreq=xen
cc_compiler            : gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11)
cc_compile_by          : mockbuild
cc_compile_domain      : centos.org
cc_compile_date        : Tue 6 12:04:11 CST 2015
xend_config_format     : 4
On this node, the iSCSI client is configured to discover the iSCSI server's shared storage in the same way as shown above (the steps are omitted here);
III. Building the BusyBox virtual machine in the Xen virtualization environment
This only needs to be done on one of the nodes;
1. Use the iSCSI shared storage as the disk storage path for the virtual machine
Format the shared storage:
# mke2fs -t ext4 /dev/sdb1
Create a directory and mount the storage:
# mkdir /scsistore
# mount /dev/sdb1 /scsistore/
Enter the directory and create the virtual machine disk image:
# cd /scsistore
# dd if=/dev/zero of=./busybox.img bs=1M oflag=direct seek=1023 count=1
Checking the file size, you can see that the apparent size is 1.0 GB while the actual space used is only about 1 MB (the file is sparse):
# ll -h
total 1.1M
-rw-r--r-- 1 root root 1.0G Feb  6 20:17 busybox.img
drwx------ 2 root root  16K Feb  6 20:05 lost+found
Format the virtual disk image:
# mke2fs -t ext4 /scsistore/busybox.img
mke2fs 1.41.12 (17-May-2010)
/scsistore/busybox.img is not a block special device.
Proceed anyway? (y,n) y
Output omitted...
Mount the virtual disk image and wait for the subsequent steps:
# mount -o loop /scsistore/busybox.img /mnt
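As a quick sanity check (optional, not in the original steps), you can confirm that the loop mount succeeded:
# df -h /mnt
# losetup -a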
2. Compile and install BusyBox
Install the development package group and libraries required for the build environment:
# yum groupinstall -y "Development tools"
# yum install -y ncurses-devel glibc-static
Obtain the BusyBox source:
busybox-1.22.1.tar.bz2
Compile and install BusyBox:
# tar xf busybox-1.22.1.tar.bz2
# cd busybox-1.22.1
# make menuconfig
The configuration is shown in the following illustration (typically the key option here is to build BusyBox as a static binary, which is why glibc-static was installed):
Compile and install once the configuration is complete:
# make && make install
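Assuming the static-binary option was selected in menuconfig, you can verify the result before copying it into the guest image (an optional check, not part of the original steps):
# file busybox
The output should report the binary as statically linked; a dynamically linked BusyBox would not run in the bare guest root we are building here.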
After BusyBox is compiled and installed, the installed files are placed in the _install directory under the source tree; we copy the contents of _install to the mounted virtual disk:
# cp -a _install/* /mnt
# cd /mnt
# ls
bin  linuxrc  lost+found  sbin  usr
# rm -rf linuxrc
# mkdir -pv dev proc sys lib/modules etc/rc.d boot mnt media opt misc
At this point, the virtual disk is built.
3. Building the bridge devices on the virtualization platform nodes
Both nodes need to be configured;
1). test1 node:
Add the bridge device configuration file:
# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth0 ifcfg-xenbr0
Configure the bridge device:
# vim ifcfg-xenbr0
DEVICE="xenbr0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Bridge"
IPADDR=172.16.31.1
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
Configure the physical NIC device:
# vim ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="08:00:27:16:d9:aa"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE="xenbr0"
TYPE="Ethernet"
USERCTL="no"
2). test2 node configuration:
Add the bridge device configuration file:
# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth0 ifcfg-xenbr0
Configure the bridge device:
# vim ifcfg-xenbr0
DEVICE="xenbr0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Bridge"
IPADDR=172.16.31.2
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
Configure the physical NIC device:
# vim ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="08:00:27:6a:9d:57"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE="xenbr0"
TYPE="Ethernet"
USERCTL="no"
3). Bridging requires the NetworkManager service to be turned off:
Both nodes need this:
# chkconfig NetworkManager off
# service NetworkManager stop
Restart the network service after the configuration:
# service network restart
4). Log in at the terminal to view the bridge status:
# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:16:d9:aa
          inet6 addr: fe80::a00:27ff:fe16:d9aa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:37217 errors:0 dropped:7 overruns:0 frame:0
          TX packets:4541 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7641467 (7.2 MiB)  TX bytes:773075 (754.9 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1032 (1.0 KiB)  TX bytes:1032 (1.0 KiB)

xenbr0    Link encap:Ethernet  HWaddr 08:00:27:16:d9:aa
          inet addr:172.16.31.1  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::a00:27ff:fe16:d9aa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1211 errors:0 dropped:0 overruns:0 frame:0
          TX packets:90 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:116868 (114.1 KiB)  TX bytes:15418 (15.0 KiB)
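To confirm that eth0 is actually enslaved to the bridge, you can also check the bridge table (an optional verification; the interface names follow the configuration above). xenbr0 should list eth0 under its interfaces:
# brctl show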
4. Build BusyBox Virtual Machine
Create the virtual machine configuration file:
# vim /etc/xen/busybox
kernel = "/boot/vmlinuz-2.6.32-504.el6.x86_64"
ramdisk = "/boot/initramfs-2.6.32-504.el6.x86_64.img"
name = "busybox"
memory =
vcpus = 1
disk = [ 'file:/scsistore/busybox.img,xvda,w', ]
root = "/dev/xvda ro"
extra = "selinux=0 init=/bin/sh"
vif = [ 'bridge=xenbr0', ]
on_crash = "destroy"
on_reboot = "destroy"
on_shutdown = "destroy"
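Before copying the file to the other node, you can sanity-check it; if your xm build supports it, a dry run parses the configuration without actually creating the domain (shown here only as an optional check):
# xm create --dryrun busybox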
Copy the configuration file to the node Test2:
# scp /etc/xen/busybox root@172.16.31.2:/etc/xen/
If we want to be able to configure a network interface from BusyBox user space, we also need to place xen-netfront.ko into the appropriate directory of the virtual machine disk;
We copy the xen-netfront.ko module from Dom0 into the lib/modules/ directory of the virtual machine disk:
# cd /lib/modules/2.6.32-504.el6.x86_64/kernel/drivers/net/
First check the module's dependencies:
# modinfo xen-netfront.ko
filename:       xen-netfront.ko
alias:          xennet
alias:          xen:vif
license:        GPL
description:    Xen virtual network device frontend
srcversion:     5c6fc78bc365d9af8135201
depends:
vermagic:       2.6.32-504.el6.x86_64 SMP mod_unload modversions
We can see that it has no dependencies, so we can use it directly:
# cp xen-netfront.ko /mnt/lib/modules/
Unmount the virtual machine disk after copying is complete:
# umount /mnt
At this point, our BusyBox virtual machine is fully created!
5. Unmount /scsistore from test1 and mount it on the other Xen virtualization platform to verify:
Unmount on the test1 node:
[root@test1 xen]# umount /scsistore/
Discover the iSCSI shared storage on the test2 node:
[root@test2 ~]# iscsiadm -m discovery -t st -p 172.16.31.3
Starting iscsid:                                           [  OK  ]
172.16.31.3:3260,1 iqn.2015-02.com.stu31:t1
Register the iSCSI shared device and log in to the node:
[root@test2 ~]# iscsiadm -m node -T iqn.2015-02.com.stu31:t1 -p 172.16.31.3 -l
[root@test2 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 5378 MB, 5378310144 bytes
166 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 10292 * 512 = 5269504 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8e1d9dd0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         409     2104683   83  Linux
Mount the disk and view its contents:
[root@test2 ~]# mkdir /scsistore
[root@test2 ~]# mount /dev/sdb1 /scsistore/
[root@test2 ~]# ls /scsistore/
busybox.img  lost+found
You can see that the files are shared; our iSCSI shared storage is working correctly.
IV. Virtual machine live migration test between the two virtualization platforms
1. Mount the shared storage on both virtualization nodes
test1 node:
[root@test1 ~]# mount /dev/sdb1 /scsistore/
[root@test1 ~]# ls /scsistore/
busybox.img
test2 node:
[root@test2 ~]# mount /dev/sdb1 /scsistore/
[root@test2 ~]# ls /scsistore/
busybox.img  lost+found
2. Start the BusyBox virtual machine
Started on the test1 node:
[root@test1 ~]# xm create -c busybox
Using config file "/etc/xen/busybox".
Started domain busybox (id=13)
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-504.el6.x86_64 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Wed Oct 04:27:16 UTC 2014
Command line: root=/dev/xvda ro selinux=0 init=/bin/sh
# Output omitted...
# Load the NIC module:
/ # insmod /lib/modules/xen-netfront.ko
Initialising Xen virtual ethernet driver.
# Set the NIC IP address:
/ # ifconfig eth0 172.16.31.4 up
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3e:49:e8:18
          inet addr:172.16.31.4  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:... errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2942 (2.8 KiB)  TX bytes:0 (0.0 B)
          Interrupt:247
Started on the test2 node:
[root@test2 ~]# xm create -c busybox
Using config file "/etc/xen/busybox".
Started domain busybox (id=2)
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-504.el6.x86_64 (mockbuild@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Wed Oct 04:27:16 UTC 2014
Command line: root=/dev/xvda ro selinux=0 init=/bin/sh
# Output omitted...
EXT4-fs (xvda): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/xvda
dracut: Switching root
/bin/sh: can't access tty; job control turned off
/ # ifconfig
/ # insmod /lib/modules/xen-netfront.ko
Initialising Xen virtual ethernet driver.
/ # ifconfig eth0 172.16.31.5 up
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3e:41:b0:32
          inet addr:172.16.31.5  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:57 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3412 (3.3 KiB)  TX bytes:0 (0.0 B)
          Interrupt:247
Tip: press Ctrl+] to exit the virtual machine console.
Here I only wanted to verify that both virtualization platforms can run the virtual machine, which is why BusyBox was started on each of them; for the migration itself we only need one instance, and we will migrate BusyBox from the test1 node to the test2 node.
Let's shut all the virtual machines down first:
# xm destroy busybox
3. Configure the nodes for live migration
Configuration of the test1 node:
[root@test1 ~]# grep xend /etc/xen/xend-config.sxp | grep -v "#"
(xend-http-server yes)
(xend-unix-server yes)
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '172.16.31.1')
(xend-relocation-hosts-allow '')
Configuration of the test2 node:
[root@test2 ~]# grep xend /etc/xen/xend-config.sxp | grep -v '#'
(xend-http-server yes)
(xend-unix-server yes)
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '172.16.31.2')
(xend-relocation-hosts-allow '')
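Note that an empty xend-relocation-hosts-allow value accepts relocation requests from any host that can reach port 8002, which is fine for a lab but not for production. To restrict it, the directive takes a space-separated list of regular expressions; a sketch of what that could look like on test2, using the addresses from this setup:
(xend-relocation-hosts-allow '^localhost$ ^172\\.16\\.31\\.1$')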
Restart the xend service on both virtualization nodes:
# service xend restart
stopping xend daemon:                                      [  OK  ]
starting xend daemon:                                      [  OK  ]
Check the listening port:
# ss -tunl | grep 8002
tcp    LISTEN     0      5      172.16.31.2:8002      *:*
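Before migrating, it can save some head-scratching to confirm that each node can actually reach the other's relocation port; for example, from test1 (an optional check, assuming nc is installed):
[root@test1 ~]# nc -v 172.16.31.2 8002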
Start BusyBox on node test1:
# xm create -c busybox
# Output omitted...
# Set the IP address, so that we have something to check against after the migration:
/ # insmod /lib/modules/xen-netfront.ko
Initialising Xen virtual ethernet driver.
/ # ifconfig eth0 172.16.31.4 up
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3e:28:bb:f6
          inet addr:172.16.31.4  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500
          RX packets:921 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:78367 (76.5 KiB)  TX bytes:84 (84.0 B)
          Interrupt:247
Migrate the virtual machine from the test1 node to the test2 node:
[root@test1 ~]# xm migrate -l busybox 172.16.31.2
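The -l (--live) option is what makes this a live migration: the domain keeps running while its memory is copied to the destination, with only a brief pause at the end. Without it, xm performs a stop-and-copy migration that pauses the guest for the whole transfer:
# Non-live (stop-and-copy) variant, for comparison:
[root@test1 ~]# xm migrate busybox 172.16.31.2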
View the list of virtual machines on the test1 node after the migration is complete:
[root@test1 ~]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1023     1     r-----    710.1
View the virtual machines on the test2 node after the migration is complete:
[root@test2 network-scripts]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1023     1     r-----    142.8
[root@test2 network-scripts]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1023     1     r-----    147.4
busybox                                      3     0           --p---      0.0
Connect to the virtual machine on test2 to check:
[root@test2 ~]# xm console busybox
Using NULL legacy PIC
changing capacity of (0) to 2097152 sectors
changing capacity of (0) to 2097152 sectors
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3e:1d:38:69
          inet addr:172.16.31.4  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:350 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:27966 (27.3 KiB)  TX bytes:84 (84.0 B)
          Interrupt:247
You can see that the virtual machine now running on the test2 node still has the IP address that was configured on test1 (172.16.31.4), so our migration is complete.
With that, the live migration experiment on our Xen virtualization platform is done.