I. Introduction to Xen
Xen is an open source virtual machine monitor (VMM) developed at the University of Cambridge. It is designed to run up to 128 fully functional operating systems on a single computer.
Xen can run on older processors without hardware virtualization support, but the guest operating system must be explicitly modified ("ported") to run on Xen (while remaining compatible with existing user applications). This allows Xen to achieve high-performance virtualization without special hardware support.
The Xen architecture is shown below:
The components of the Xen virtual machine platform:
Xen Hypervisor:
Virtualization is achieved by adding a thin hypervisor (Virtual Machine Monitor, VMM) software layer on top of the existing platform (machine). This layer virtualizes resources such as the processor, the memory management unit (MMU), and the I/O system. The hypervisor is also called the supervisory program (HYPERVISOR).
Domain (Dom0, DomU):
Privileged virtual machine: Dom0, the privileged domain
Provides the control tools for Xen; directly drives the I/O hardware devices;
Interacts with DomU;
Starting with the Linux 2.6.37 kernel, Linux can run directly as Dom0;
Other ordinary virtual machines: DomU, the non-privileged domains
Supported starting with the Linux 2.6.24+ kernel
How Xen virtualization is implemented:
Type-I virtualization: the hypervisor is installed directly on the hardware and takes over the hardware resources; every system running on top of it is a virtual machine;
This gives the most thorough control, but it means the hypervisor must drive the hardware itself.
The problem: would device drivers then have to be developed specifically for the hypervisor?
Solution: Xen drives only the CPU and memory; it does not drive I/O devices.
When the system boots, the hypervisor is loaded before the hardware devices are driven; the hypervisor has the permission to access the hardware. After the hypervisor finishes loading, it starts a virtual machine that has its own kernel and user space. This virtual machine is the privileged virtual machine: it provides the management interface for the underlying hypervisor, and its job is to supply the I/O device drivers on behalf of the hypervisor;
When a newly created virtual machine needs to access an I/O hardware device, the request reaches the hardware through the privileged virtual machine's driver;
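On a running Xen host this split driver model can often be observed directly from Dom0. A minimal, hedged check, assuming the standard upstream module names (some of these drivers may be built into the kernel instead of loaded as modules, in which case they will not be listed):
# lsmod | egrep 'xen_netback|xen_blkback|xen_netfront|xen_blkfront'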
For a more detailed explanation, please refer to this blog post: http://www.uml.org.cn/embeded/201303201.asp
Requirements for installing Xen virtualization on CentOS 6.6:
1). Install the CentOS 6.6 operating system on the physical machine
2). Install the Xen packages
3). Install the 3.7.10 kernel so that the system can run as Dom0
4). Configure grub and then boot into Xen
5). Install the guest OS in the DomU virtual machines;
Note: On the CentOS 6.6 platform, the stock kernel can only run as DomU; Dom0 is not supported by CentOS 6.6's original kernel. Here I install Xen 4.2.5 and update the kernel to version 3.7.10;
A brief comparison of Xen and KVM:
Xen: supported from CentOS 4 through CentOS 5. Starting with Linux kernel 2.6.37, Dom0 support was merged into the mainline kernel, and the kernel shipped with CentOS 7 can run directly as Dom0;
KVM: appeared in 2006; CentOS 5.8 began to support it, but it was not yet stable. Red Hat's acquisition of the company behind KVM drove its further development;
The CentOS 6.6 kernel is 2.6.32 and does not directly support running as the Xen Dom0 privileged domain;
It can, however, run directly as a DomU.
Red Hat, in order to nurture its own adopted son KVM, kicked Xen out of the house. -_-!
Later it realized that Xen usage was still very high, so Xen support was merged into the kernel after all.
II. Xen Installation and Dom0 Configuration
1. Configure Xen Yum Source
# vim /etc/yum.repos.d/xen4.repo
[xen4]
name=xen4 for CentOS6
baseurl=ftp://172.16.0.1/pub/sources/6.x86_64/xen4centos/x86_64/
gpgcheck=0
Clean up the yum cache:
# yum clean all
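As an optional sanity check (assuming the FTP server above is reachable), the new repository should now appear in the repository list:
# yum repolist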
2. Install Xen 4.2.5 and update the kernel to version 3.7.10
# yum install -y xen-4.2.5 xen-libs-4.2.5 xen-runtime-4.2.5 kernel-xen
3. Configure the grub.conf configuration file
Set Xen (xen.gz) as the kernel to boot, and load the CentOS kernel as a module of the Xen virtualization platform;
# vim /etc/grub.conf
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (3.7.10-1.el6xen.x86_64)
        root (hd0,0)
        kernel /xen.gz dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin cpufreq=xen
        module /vmlinuz-3.7.10-1.el6xen.x86_64 ro root=/dev/mapper/vg0-root rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 rd_LVM_LV=vg0/swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg0/root KEYTABLE=us rhgb quiet
        module /initramfs-3.7.10-1.el6xen.x86_64.img
title CentOS 6 (2.6.32-504.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-504.el6.x86_64 ro root=/dev/mapper/vg0-root rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 rd_LVM_LV=vg0/swap rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=vg0/root KEYTABLE=us rhgb quiet
        initrd /initramfs-2.6.32-504.el6.x86_64.img
After the configuration is complete, reboot the Linux system; when the boot finishes, it automatically enters the Xen Dom0 environment:
Check the kernel version; it has been upgraded to 3.7.10:
# uname -r
3.7.10-1.el6xen.x86_64
View the Xen xend service boot entry:
# chkconfig --list xend
xend            0:off   1:off   2:off   3:on    4:on    5:on    6:off
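If the service were not enabled for runlevels 3-5, it could be turned on with the standard SysV tool (shown only as a reminder; the listing above indicates it is already enabled):
# chkconfig xend on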
4. Start the xend service
# service xend start
To view the virtual machines that are now running:
# xm list
Name                              ID   Mem VCPUs      State   Time(s)
Domain-0                           0  1024     1     r-----      21.9
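For more detail about the Dom0 environment (total host memory, number of CPUs, Xen version, and so on), the hypervisor can also be queried; a minimal sketch:
# xm info | head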
III. Setup and Configuration of Xen Virtual Machines
After Dom0 is installed, we can configure and install DomU virtual machines on top of it.
Things to consider when installing a DomU virtual machine:
1). Where the DomU virtual machine's kernel files are stored.
We are building the DomU virtual machines on Dom0, so the kernel files can be stored on Dom0;
The second option is to store the kernel files on the DomU virtual machine's own virtual disk;
2). DomU virtual machine creation:
The kernel and ramdisk are provided by Dom0, and DomU only needs to provide a user space;
Because Dom0 has a complete user space, we could also copy it into the DomU virtual machine for use;
1. Building a simple DomU virtual machine
The kernel and ramdisk are provided by Dom0, and BusyBox is used to provide a simple user space for the DomU virtual machine;
1). Simulate a virtual machine disk device by building a local loopback file:
Create the virtual disk storage directory:
# mkdir /xen/images -pv
Enter the directory and create the virtual machine disk device:
# cd /xen/images/
# dd if=/dev/zero of=./busybox.img bs=1M oflag=direct seek=2047 count=1
Check the file size; the apparent size is 2 GB, but the space actually used is only 1 MB:
# ll -h
total 1.0M
-rw-r--r-- 1 root root 2.0G Feb  6 20:15 busybox.img
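The image is a sparse file: seek=2047 with bs=1M makes the apparent size 2 GB while only the final 1 MB block is actually written. A quick way to compare apparent size with real disk usage:
# du -sh busybox.img
# ls -lh busybox.img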
2). Format the virtual disk device
# mke2fs -t ext4 /xen/images/busybox.img
mke2fs 1.41.12 (17-May-2010)
/xen/images/busybox.img is not a block special device.
Proceed anyway? (y,n) y    # choose y
(remaining output omitted ...)
Mount the disk device for the subsequent steps:
# mount -o loop /xen/images/busybox.img /mnt
3). Compile and install BusyBox to provide the user space for the DomU virtual machine
Compilation requires the Development Tools package group:
# yum groupinstall -y "Development Tools"
# yum install -y ncurses-devel glibc-static
Obtain the BusyBox source:
busybox-1.22.1.tar.bz2
Compile and install BusyBox:
# tar xf busybox-1.22.1.tar.bz2
# cd busybox-1.22.1
# make menuconfig
The configuration steps are shown in the following illustrations:
Select "Busybox Settings":
Select the "Build Options" option:
Build BusyBox as a static binary (do not use shared libraries):
When the configuration is complete, select Exit, then select Yes to save:
When setup is complete, compile and install:
# make && make install
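To confirm the result is indeed statically linked (an optional check), the file utility can be run against the generated binary; it should report "statically linked":
# file _install/bin/busybox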
After compilation and installation, BusyBox places the installed files in an _install directory inside the source tree; copy the contents of _install to the mounted virtual disk:
# cp -a _install/* /mnt
# cd /mnt
# ls
bin linuxrc lost+found sbin usr
# rm -rf linuxrc
# mkdir dev proc sys lib/modules etc/rc.d boot mnt media opt misc -pv
At this point, the virtual disk is built.
4). Build the DomU virtual machine
Create the virtual machine configuration file:
# cd /etc/xen
# vim busybox
kernel = "/boot/vmlinuz-2.6.32-504.el6.x86_64"
RAMDisk = " Boot/initramfs-2.6.32-504.el6.x86_64.img "
name =" BusyBox "
memory =" "
Vcpus = 1
disk = [' File :/xen/images/busybox.img,xvda,w ',]
root = "/dev/xvda ro"
extra = "Selinux=0 init=/bin/sh"
After saving the configuration file, create the virtual machine as shown below:
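A minimal sketch of the creation step, using the same xm create command that appears later in this walkthrough (-c attaches the virtual machine's console so the boot messages are visible):
# xm create -c busybox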
5). Switch to another virtual terminal to view the virtual machine status
# xm list
Name                              ID   Mem VCPUs      State   Time(s)
Domain-0                           0  1024     1     r-----     197.9
busybox                            2             1   -b----       1.4
The ID number increases automatically each time the virtual machine is shut down and started again; let's test this:
Shut down the virtual machine:
# xm destroy busybox
Start the virtual machine:
# xm create -c busybox
Then view the virtual machine list again; the ID number has automatically increased:
# xm list
Name                              ID   Mem VCPUs      State   Time(s)
Domain-0                           0  1024     1     r-----     201.4
busybox                            3             1   -b----       1.2
The values that can appear in the State column are as follows (see the example after this list):
r: running, the virtual machine is currently running;
b: blocked, the virtual machine is blocked, waiting for a task to complete;
p: paused, the virtual machine is paused in memory;
   to resume it, use unpause: # xm unpause linux
s: shutdown, the virtual machine is in the process of shutting down;
c: crashed, the virtual machine has crashed or is restarting after a crash;
d: dying, the domain is in the process of being destroyed;
Time(s): the cumulative CPU time consumed by the domain
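A short demonstration of the p state using the busybox domain created above (domain name assumed from this walkthrough): pausing freezes the domain in memory, xm list then shows p in the State column, and unpause resumes it:
# xm pause busybox
# xm list
# xm unpause busybox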
6). Add a network card device to the virtual machine
We first need to set up a bridge device on Dom0 so that DomU can bridge onto it;
Add the network bridge device configuration file:
# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth0 ifcfg-xenbr0
Configure the bridge device:
# vim ifcfg-xenbr0
DEVICE="xenbr0"
BOOTPROTO="static"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Bridge"
IPADDR=172.16.31.1
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
Configure the network card device:
# vim ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
HWADDR="08:00:27:16:d9:aa"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE="xenbr0"
TYPE="Ethernet"
USERCTL="no"
Bridged configuration requires the NetworkManager service to be turned off:
# chkconfig NetworkManager off
# service NetworkManager stop
After the configuration is complete, restart the network service:
# service network restart
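To verify the bridge from Dom0, the brctl utility (from the bridge-utils package, which may need to be installed) should list xenbr0 with eth0 as an attached interface; a quick check:
# brctl show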
Log in to the terminal and check the bridge status:
# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:16:D9:AA
          inet6 addr: fe80::a00:27ff:fe16:d9aa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:37217 errors:0 dropped:7 overruns:0 frame:0
          TX packets:4541 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7641467 (7.2 MiB)  TX bytes:773075 (754.9 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0
          RX bytes:1032 (1.0 KiB)  TX bytes:1032 (1.0 KiB)

xenbr0    Link encap:Ethernet  HWaddr 08:00:27:16:D9:AA
          inet addr:172.16.31.1  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::a00:27ff:fe16:d9aa/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1211 errors:0 dropped:0 overruns:0 frame:0
          TX packets:90 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0
          RX bytes:116868 (114.1 KiB)  TX bytes:15418 (15.0 KiB)
7). Associate the busybox virtual machine with this bridge device
We need to edit the virtual machine configuration file and add the name of the bridge device to it:
# vim /etc/xen/busybox
kernel = "/boot/vmlinuz-2.6.32-504.el6.x86_64"
ramdisk = "/boot/initramfs-2.6.32-504.el6.x86_64.img"
name = "busybox"
memory = ""
vcpus = 1
disk = [ 'file:/xen/images/busybox.img,xvda,w', ]
root = "/dev/xvda ro"
extra = "selinux=0 init=/bin/sh"
vif = [ 'bridge=xenbr0', ]
If we want to configure a network card inside this BusyBox user space, we also need to copy the xen-netfront.ko module into the appropriate directory on the virtual machine disk:
# xm destroy busybox
# mount -o loop /xen/images/busybox.img /mnt
# ls /mnt
bin   dev  lib         media  mnt  proc  sys
boot  etc  lost+found  misc   opt  sbin  usr
We copy the xen-netfront.ko module from Dom0 to the lib/modules/ directory of the virtual machine disk:
# cd /lib/modules/2.6.32-504.el6.x86_64/kernel/drivers/net/
First check the module's dependencies:
# modinfo xen-netfront.ko
filename:       xen-netfront.ko
alias:          xennet
alias:          xen:vif
license:        GPL
description:    Xen virtual network device frontend
srcversion:     5c6fc78bc365d9af8135201
depends:
vermagic:       2.6.32-504.el6.x86_64 SMP mod_unload modversions
The depends field is empty, so the module has no dependencies and we can use it directly:
# cp xen-netfront.ko /mnt/lib/modules/
Unmount the virtual machine disk after the copy completes:
# umount /mnt
Start the busybox virtual machine:
# xm create -c busybox
(boot messages omitted ...)
/ # ifconfig -a
lo        Link encap:Local Loopback
          LOOPBACK  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Only the lo loopback device exists because the xen-netfront module has not been loaded yet, so load the network card module:
/ # insmod /lib/modules/xen-netfront.ko
Initialising Xen virtual ethernet driver.
Check all network card devices again; the eth0 device is now present:
/ # ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:16:3E:72:18:9B
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Interrupt:247
lo        Link encap:Local Loopback
          LOOPBACK  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Set the IP address:
/ # ifconfig eth0 172.16.31.100 up
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3E:72:18:9B
          inet addr:172.16.31.100  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:137 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10127 (9.8 KiB)  TX bytes:0 (0.0 B)
          Interrupt:247
Now run a ping test against Dom0's bridge address:
/ # ping 172.16.31.1 -c 2
PING 172.16.31.1 (172.16.31.1): 56 data bytes
64 bytes from 172.16.31.1: seq=0 ttl=64 time=2.955 ms
64 bytes from 172.16.31.1: seq=1 ttl=64 time=0.605 ms

--- 172.16.31.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.605/1.780/2.955 ms
At this point, the creation of a simple virtual machine is complete.
IV. Deploying a CentOS 6.6 System with Xen
1. First shut down the busybox virtual machine we created earlier
# xm destroy busybox
We will deploy and install a CentOS 6.6 system. Its kernel and ramdisk files are provided by the installation CD: select the vmlinuz and initrd.img files from the CD's isolinux directory;
# ll -h
total 38M
-rw-r--r-- 1 root root  34M Oct 24 22:12 initrd.img
-rw-r--r-- 1 root root 4.0M Oct 24 22:12 vmlinuz
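The configuration file below points at /tmp/vmlinuz and /tmp/initrd.img; if the two files are not there yet, they can be copied from the isolinux directory of the mounted installation disc. A hedged sketch, assuming the disc is mounted at /var/www/html/centos as done in step 4 below:
# cp /var/www/html/centos/isolinux/vmlinuz /var/www/html/centos/isolinux/initrd.img /tmp/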
2. Create a virtual machine configuration file
# cd /etc/xen/
# cp busybox centos6
# vim centos6
kernel = "/tmp/vmlinuz"
ramdisk = "/tmp/initrd.img"
name = "CentOS6"
memory = "Vcpus" =
1
disk = [' File:/xen/images/centos6.img,xvda,w ',]
root = "/dev/xvda ro"
vif = [' bridge=xenbr0 ',]
3. Create the disk image file for the virtual machine system
# cd /xen/images/
# qemu-img-xen create -f qcow2 -o preallocation=metadata centos6.img 120G
Formatting 'centos6.img', fmt=qcow2 size=128849018880 encryption=off cluster_size=65536 preallocation='metadata'
# ll -h
total 119M
-rw-r--r-- 1 root root 2.0G Feb  6 16:01 busybox.img
-rw-r--r-- 1 root root 121G Feb  6 16:34 centos6.img
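Because preallocation=metadata only writes the qcow2 metadata, the image's virtual size (120 GB) is far larger than the space it actually occupies. This can be inspected with the info subcommand (shown with the xen-suffixed qemu-img binary used above; the plain qemu-img behaves the same way):
# qemu-img-xen info centos6.img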
4. Since I am building Xen virtualization inside a virtual machine, I need to provide a way to install the CentOS system. I choose to serve the system ISO contents over HTTP and install the system through a URL;
Mount the CentOS 6.6 installation disc:
# mkdir /var/www/html/centos
# mount -t iso9660 /dev/sr0 /var/www/html/centos
Start the default httpd service:
# service httpd start
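As a quick check that the installation tree is reachable over HTTP (172.16.31.1 is Dom0's bridge address configured earlier), a request to the directory should return an HTTP 200 response:
# curl -I http://172.16.31.1/centos/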
Access that URL in a browser to test it, as shown in the figure: