I. Creating and managing virtual machines using Virt-manager
1. Use VNC Viewer to connect to the virtualization platform host.
2. Open a terminal and run the virt-manager command to start the virt-manager virtual machine management interface:
# virt-manager
3. Install a CentOS 6.6 virtual machine via virt-manager.
Click the icon to create a new virtual machine:
Choose PXE boot; there is an automated system-deployment server on my network:
Select the operating system type and version:
Set the amount of memory and the number of CPUs:
Set the size of the hard disk, which uses dynamically expanding disk space:
This warning can be ignored: because the disk is virtual we do not need to worry about the space, as long as the space actually used does not exceed the physical disk space:
Check the option to review the configuration before installing:
We could adjust the settings on this screen; I leave them as they are and just click Begin Installation:
We choose to install a basic system:
The installation proceeds:
You can watch the virt-manager interface during the installation:
Since the virtual machine is running, you can view its CPU usage:
The completed installation is shown above.
To power off the virtual machine, enter the shutdown command inside the guest.
That completes creating and managing a virtual machine with virt-manager; it is very simple.
We will use the lightweight Cirros Linux system for the following experiments.
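A guest created in virt-manager is registered with libvirt, so it can also be managed from the command line with virsh. A minimal sketch; the guest name centos6.6 is an assumption, substitute whatever name you gave the machine during creation:

```shell
# list all guests known to libvirt, running or shut off
virsh list --all
# start / gracefully shut down the guest (guest name is assumed)
virsh start centos6.6
virsh shutdown centos6.6
# make the guest start automatically when the host boots
virsh autostart centos6.6
```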
II. The network model of the KVM virtualization platform
1. Network Model Introduction
Virtual machine networking is generally set up in one of three ways:
NAT mode
In this mode the virtual machine can be understood as having no independent network card of its own on the outside network. All traffic to and from the virtual machine actually passes through the host: requests for the virtual machine arrive at the host first and are forwarded in, and the virtual machine's outbound traffic is likewise forwarded out by the host. From the point of view of any network beyond the host, the virtual machine does not exist.
Bridge mode
Bridge mode is the more commonly used mode. The virtual machine has its own independent network card and IP address and connects to the outside world through the host's physical network card, which serves as a bridge. In this mode the virtual machine and the host can simply be thought of as two different machines with independent IPs that can reach each other. The virtual machine's IP can be assigned statically or obtained via DHCP.
Internal mode (host-only)
This mode isolates the virtual machines' network from the host's network: the virtual machines form one network, the host is on another, and the two cannot reach each other.
We have already used the bridge model in many earlier examples, so I will not cover it again here; instead I want to focus on the host-only model and the NAT model.
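For orientation, the three models map roughly onto qemu-kvm's -net options as sketched below. This is an assumption-laden illustration, not taken from the experiments that follow (the image path is made up), and note that -net user is QEMU's built-in userspace NAT, a different mechanism from the bridge-plus-iptables NAT built later in this article:

```shell
# NAT via QEMU's built-in user-mode network stack:
qemu-kvm -m 128 -drive file=/kvm/images/demo.img -net nic -net user

# bridge mode: a tap device that ifup/ifdown scripts attach to the host bridge
qemu-kvm -m 128 -drive file=/kvm/images/demo.img \
    -net nic -net tap,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown

# host-only: the same tap mechanism, but the tap is attached (here manually,
# script=no) to an isolated bridge that has no physical NIC on it
qemu-kvm -m 128 -drive file=/kvm/images/demo.img \
    -net nic -net tap,ifname=vnet1,script=no,downscript=no
```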
2. host-only mode example
1. We create a host-only bridge device to isolate the virtual machines' network from the KVM virtualization platform host:
# brctl addbr isolationbr
View the bridge devices:
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c293e6326       yes             eth0
isolationbr     8000.000000000000       no
virbr0          8000.525400305441       yes             virbr0-nic
However, this bridge device is not yet active; we need to activate it with the ip command:
# ip link set isolationbr up
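The two steps above can be combined into a small idempotent snippet that creates the bridge only when it does not already exist; a sketch, to be run as root on the host:

```shell
BRIDGE=isolationbr
# create the bridge only if brctl does not already list it
if ! brctl show | awk 'NR > 1 {print $1}' | grep -qx "$BRIDGE"; then
    brctl addbr "$BRIDGE"
fi
# activate the bridge device
ip link set "$BRIDGE" up
```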
View the bridge device after activation:
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3e:63:26 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:0c:29:3e:63:26 brd ff:ff:ff:ff:ff:ff
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:30:54:41 brd ff:ff:ff:ff:ff:ff
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 52:54:00:30:54:41 brd ff:ff:ff:ff:ff:ff
16: isolationbr: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 6e:5e:8d:39:56:b5 brd ff:ff:ff:ff:ff:ff
17: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN
    link/ether 3a:ce:49:1d:f4:a3 brd ff:ff:ff:ff:ff:ff
18: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN
    link/ether ...:1f:7d brd ff:ff:ff:ff:ff:ff
2. Start two virtual machines:
The first Cirros virtual machine:
# qemu-kvm -m 128 -name cirros1 -drive file=/kvm/images/cirros-0.3.0-x86_64-disk.img,media=disk,format=qcow2,if=ide -net nic -net tap,ifname=vnet1,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -boot c -daemonize
After startup, log in from VNC Viewer as shown:
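The start command above refers to /etc/qemu-ifup and /etc/qemu-ifdown, which QEMU invokes with the tap device name (e.g. vnet1) as $1. Their contents are not shown in this article; this is a typical minimal qemu-ifup sketch, under the assumption that taps should join the existing bridge br0 (qemu-ifdown does the reverse with brctl delif and ip link set ... down):

```shell
#!/bin/bash
# /etc/qemu-ifup (sketch): QEMU passes the tap device name as $1
BRIDGE=br0

# bring the tap up and attach it to the bridge
ip link set "$1" up
brctl addif "$BRIDGE" "$1"
```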
The second Cirros virtual machine, which needs a MAC address specified at startup so it does not clash with the first:
# qemu-kvm -m 128 -name cirros2 -drive file=/kvm/images/cirros-0.3.0-x86_64-disk2.img,media=disk,format=qcow2,if=ide -net nic,macaddr=52:54:00:65:43:21 -net tap,ifname=vnet2,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -boot c -daemonize
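Every guest started without macaddr= gets QEMU's default MAC 52:54:00:12:34:56, which is why a second guest on the same network must be given its own address. A small helper sketch that generates a random MAC in the same 52:54:00 prefix (any distinct locally administered value would do):

```shell
# print a random MAC in the 52:54:00 prefix used for KVM guests
gen_mac() {
    printf '52:54:00:%02x:%02x:%02x\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
}

gen_mac
```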
After startup, log in from VNC Viewer as shown:
View the IP addresses of the two virtual machines:
Use ping to test connectivity between the two virtual machines:
They can reach each other.
After starting the two virtual machines, the vnet1 and vnet2 network cards are bridged onto br0:
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c293e6326       yes             eth0
                                                        vnet1
                                                        vnet2
isolationbr     8000.000000000000       no
virbr0          8000.525400305441       yes             virbr0-nic
3. Now we move vnet1 and vnet2 onto isolationbr.
First remove vnet1 and vnet2 from the bridge device br0:
# brctl delif br0 vnet1
# brctl delif br0 vnet2
Looking at the bridge devices again, the two virtual machines' network cards are no longer bridged on br0:
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c293e6326       yes             eth0
isolationbr     8000.000000000000       no
virbr0          8000.525400305441       yes             virbr0-nic
Now run a ping connectivity test between the two virtual machines:
The virtual machines can no longer reach each other.
Next we bridge the vnet1 and vnet2 network cards onto the bridge device isolationbr that we just created:
# brctl addif isolationbr vnet1
# brctl addif isolationbr vnet2
View the network card connections of the bridge devices:
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c293e6326       yes             eth0
isolationbr     8000.3ace491df4a3       no              vnet1
                                                        vnet2
virbr0          8000.525400305441       yes             virbr0-nic
The two virtual machines' network cards are now connected to the isolationbr bridge device.
Go into the virtual machines and test connectivity:
The two virtual machines are now on the same network and can communicate with each other, but they are isolated from the host: a connectivity test between a virtual machine and the host fails. If we need communication between the virtual machines and the host, we have to turn to the NAT model, which is described next.
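The rewiring above can be captured in one short sequence; a sketch assuming the guest taps are currently attached to br0:

```shell
# move both guest taps from the shared bridge to the isolated one
for tap in vnet1 vnet2; do
    brctl delif br0 "$tap"
    brctl addif isolationbr "$tap"
done
brctl show
```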
3. NAT model example
In essence, we take the host-only network configured above and enable the NAT function on its bridge device so the guests can communicate with external hosts.
1. Normally the virtual machines' addresses would be assigned by a DHCP server inside the network; for this experiment we set the addresses of the two virtual machines and of the bridge device isolationbr manually.
The IP settings of the two virtual machines are as follows:
The IP settings of the bridge device isolationbr are as follows:
[root@createos ~]# ifconfig isolationbr 10.0.0.254/8 up
[root@createos ~]# ifconfig isolationbr
isolationbr Link encap:Ethernet  HWaddr 3a:ce:49:1d:f4:a3
          inet addr:10.0.0.254  Bcast:10.255.255.255  Mask:255.0.0.0
          inet6 addr: fe80::6c5e:8dff:fe39:56b5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:... errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:28 (28.0 b)  TX bytes:468 (468.0 b)
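Note that 10.0.0.254/8 and netmask 255.0.0.0 in the output above are the same thing written two ways. A small pure-shell converter between the two notations, a sketch for illustration:

```shell
# convert a dotted-quad netmask to a CIDR prefix length
mask2cidr() {
    local IFS=. bits=0 octet
    for octet in $1; do
        case $octet in
            255) bits=$((bits + 8)) ;;
            254) bits=$((bits + 7)) ;;
            252) bits=$((bits + 6)) ;;
            248) bits=$((bits + 5)) ;;
            240) bits=$((bits + 4)) ;;
            224) bits=$((bits + 3)) ;;
            192) bits=$((bits + 2)) ;;
            128) bits=$((bits + 1)) ;;
            0)   ;;
        esac
    done
    echo "$bits"
}

mask2cidr 255.0.0.0       # prints 8
mask2cidr 255.255.255.0   # prints 24
```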
After setup completes, test network connectivity between the virtual machines:
We point the virtual machines' gateway at the isolationbr bridge device address on the host:
2. We still cannot communicate with the real gateway 172.16.0.1 in the physical network; we need to enable the host's routing and forwarding function:
# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
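sysctl -w changes only the running kernel, so the setting is lost after a reboot. A sketch of making it permanent on CentOS 6; it assumes a net.ipv4.ip_forward line already exists in /etc/sysctl.conf, which it does on a default install:

```shell
# apply immediately
sysctl -w net.ipv4.ip_forward=1
# persist across reboots by flipping the existing line in /etc/sysctl.conf
sed -i 's/^net.ipv4.ip_forward.*/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
# reload the file and confirm the value
sysctl -p | grep ip_forward
```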
First ping the gateway from the virtual machine, as shown in the figure:
While the ping test runs, start a packet capture on the host to watch the packets:
# tcpdump -i eth0 icmp -nn
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
10:24:52.377558 IP 10.0.0.2 > 172.16.0.1: ICMP echo request, id 1793, seq 0, length 64
10:24:53.384063 IP 10.0.0.2 > 172.16.0.1: ICMP echo request, id 1793, seq 1, length 64
The packets can reach the gateway device, but the replies cannot come back: the gateway has no route back to the 10.0.0.0/8 network. Besides enabling the host's routing function, we also need to set up NAT in the firewall:
# iptables -t nat -A POSTROUTING -s 10.0.0.0/8 ! -d 10.0.0.0/8 -j MASQUERADE
# iptables -t nat -L POSTROUTING
Chain POSTROUTING (policy ACCEPT)
target          prot opt source              destination
MASQUERADE      tcp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
MASQUERADE      udp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535
MASQUERADE      all  --  192.168.122.0/24    !192.168.122.0/24
MASQUERADE      all  --  10.0.0.0/8          !10.0.0.0/8
Start a ping test from the virtual machine to check connectivity to the real gateway:
At the same time, start packet captures on the host: we capture on both eth0 and the bridge device isolationbr.
On the bridge device isolationbr:
# tcpdump -i isolationbr -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on isolationbr, link-type EN10MB (Ethernet), capture size 65535 bytes
10:35:35.391069 IP 10.0.0.2 > 172.16.0.1: ICMP echo request, id 2305, seq 0, length 64
10:35:35.393619 ARP, Request who-has 10.0.0.2 tell 10.0.0.254, length 28
10:35:35.395095 ARP, Reply 10.0.0.2 is-at 52:54:00:65:43:21, length 28
10:35:35.395137 IP 172.16.0.1 > 10.0.0.2: ICMP echo reply, id 2305, seq 0, length 64
10:35:36.394760 IP 10.0.0.2 > 172.16.0.1: ICMP echo request, id 2305, seq 1, length 64
10:35:36.395943 IP 172.16.0.1 > 10.0.0.2: ICMP echo reply, id 2305, seq 1, length 64
10:35:41.426182 ARP, Request who-has 10.0.0.254 tell 10.0.0.2, length 28
10:35:41.427695 ARP, Reply 10.0.0.254 is-at 3a:ce:49:1d:f4:a3, length 28
You can see that the virtual machine's request reached the gateway and the gateway replied. The address translation itself is not visible here, but we can infer that the virtual machine's request went out to the gateway through eth0 after NAT address translation.
Packet capture on the host's eth0:
# tcpdump -i eth0 icmp -nn
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
10:35:35.392027 IP 172.16.31.7 > 172.16.0.1: ICMP echo request, id 2305, seq 0, length 64
10:35:35.393361 IP 172.16.0.1 > 172.16.31.7: ICMP echo reply, id 2305, seq 0, length 64
10:35:36.395052 IP 172.16.31.7 > 172.16.0.1: ICMP echo request, id 2305, seq 1, length 64
10:35:36.395860 IP 172.16.0.1 > 172.16.31.7: ICMP echo reply, id 2305, seq 1, length 64
On eth0, the NAT function makes the virtual machine's requests leave with the host's address 172.16.31.7 as the source, and the gateway's replies return to that address.
3. All of the steps above can also be automated with scripts.
Install the dnsmasq software to provide the virtual machines with a DHCP service that assigns IP addresses automatically:
# yum install -y dnsmasq
Note: since our KVM platform has a virbr0 network card, libvirt automatically starts a dnsmasq service for it. If we build our own NAT model and do not use that NIC, we need to shut that dnsmasq service down.
# ps -ef | grep "dnsmasq" | grep -v "grep"
nobody  6378  1  0 11:49 ?  00:00:00 /usr/sbin/dnsmasq --strict-order --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --bind-interfaces --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
Shut down the dnsmasq service:
# kill 6378
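Instead of copying the PID out of the ps output by hand, the libvirt-managed instance can be stopped via the pid file visible on its command line above; a sketch:

```shell
# the default libvirt network's dnsmasq records its PID here
PIDFILE=/var/run/libvirt/network/default.pid
[ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
```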
Example NAT model scripts:
The script that enables the NAT function:
# vim /etc/qemu-natup
#!/bin/bash
BRIDGE="isolationbr"
NETWORK=10.0.0.0
GATEWAY=10.0.0.254
NETMASK=255.0.0.0
DHCPRANGE=10.0.0.1,10.0.0.100
TFTPROOT=
BOOTP=

function check_bridge() {
    if brctl show | grep "^$BRIDGE" &>/dev/null; then
        return 1
    else
        return 0
    fi
}

function create_bridge() {
    brctl addbr "$BRIDGE"
    brctl stp "$BRIDGE" on
    brctl setfd "$BRIDGE" 0
    ifconfig "$BRIDGE" "$GATEWAY" netmask "$NETMASK" up
}

function enable_ip_forward() {
    echo 1 > /proc/sys/net/ipv4/ip_forward
}

function add_filter_rules() {
    iptables -t nat -A POSTROUTING -s "$NETWORK"/"$NETMASK" ! -d "$NETWORK"/"$NETMASK" -j MASQUERADE
}

function start_dnsmasq() {
    ps -ef | grep "dnsmasq" | grep -v "grep" &>/dev/null
    if [ $? -eq 0 ]; then
        echo "WARNING: dnsmasq is already running"
        return 1
    fi
    dnsmasq --strict-order --except-interface=lo --interface=$BRIDGE \
        --listen-address=$GATEWAY --bind-interfaces \
        --dhcp-range=$DHCPRANGE --conf-file="" \
        --pid-file=/var/run/qemu-dhcp-$BRIDGE.pid \
        --dhcp-leasefile=/var/run/qemu-dhcp-$BRIDGE.leases \
        --dhcp-no-override \
        ${TFTPROOT:+"--enable-tftp"} \
        ${TFTPROOT:+"--tftp-root=$TFTPROOT"} \
        ${BOOTP:+"--dhcp-boot=$BOOTP"}
}

function setup_bridge_nat() {
    check_bridge "$BRIDGE"
    if [ $? -eq 0 ]; then
        create_bridge
    fi
    enable_ip_forward
    add_filter_rules "$BRIDGE"
    start_dnsmasq "$BRIDGE"
}

if [ -n "$1" ]; then
    setup_bridge_nat
    ifconfig "$1" 0.0.0.0 up
    brctl addif "$BRIDGE" "$1"
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
The script that turns off the NAT function and removes the virtual network card from the bridge device:
# vim /etc/qemu-natdown
#!/bin/bash
BRIDGE="isolationbr"
if [ -n "$1" ]; then
    ip link set "$1" down
    brctl delif "$BRIDGE" "$1"
    ip link set "$BRIDGE" down
    brctl delbr "$BRIDGE"
    iptables -t nat -F
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
Make the scripts executable:
# chmod +x /etc/qemu-natup
# chmod +x /etc/qemu-natdown
Start the first virtual machine:
# qemu-kvm -m 128 -name cirros1 -drive file=/kvm/images/cirros-0.3.0-x86_64-disk.img,media=disk,format=qcow2,if=ide -net nic -net tap,ifname=vnet1,script=/etc/qemu-natup,downscript=/etc/qemu-natdown -boot c -daemonize
Check whether the dnsmasq service was started:
# ps -ef | grep "dnsmasq" | grep -v "grep"
nobody  38355  1  0 11:49 ?  00:00:00 dnsmasq --strict-order --except-interface=lo --interface=isolationbr --listen-address=10.0.0.254 --bind-interfaces --dhcp-range=10.0.0.1,10.0.0.100 --conf-file= --pid-file=/var/run/qemu-dhcp-isolationbr.pid --dhcp-leasefile=/var/run/qemu-dhcp-isolationbr.leases --dhcp-no-override
View the network card devices on the host:
# ifconfig | grep -Ei "(vnet1|vnet2)"
vnet1   Link encap:Ethernet  HWaddr 16:85:a7:5c:84:9d
vnet2   Link encap:Ethernet  HWaddr e6:81:c9:31:4f:78
After starting the virtual machine, connect to its console via VNC and check the IP address: dnsmasq has automatically assigned an IP address to the virtual machine.
Check the NAT rules in the host's firewall:
# iptables -t nat -L POSTROUTING
Chain POSTROUTING (policy ACCEPT)
target          prot opt source          destination
MASQUERADE      all  --  10.0.0.0/8      !10.0.0.0/8
Note: I had flushed the firewall's NAT rules beforehand, so only this one rule shows up here. o(∩_∩)o
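How do we find out which IP dnsmasq handed to a guest without logging into its console? The lease file named on the dnsmasq command line records one lease per line as <expiry> <mac> <ip> <hostname> <client-id>. A small lookup helper; the lease line below is made-up illustration data that reuses the MAC and IP seen later in this experiment:

```shell
# print the IP leased to a given MAC from a dnsmasq leases file
lease_ip() {
    awk -v mac="$1" '$2 == mac {print $3}' "$2"
}

# illustrative leases file (format only; the expiry timestamp is invented)
cat > /tmp/qemu-dhcp-demo.leases <<'EOF'
1430000000 52:54:00:88:88:88 10.0.0.83 cirros *
EOF

lease_ip 52:54:00:88:88:88 /tmp/qemu-dhcp-demo.leases   # prints 10.0.0.83
```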
Test network connectivity from the virtual machine:
At the same time, run the packet captures:
The packets on the bridge device's network card are as follows:
# tcpdump -i isolationbr -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on isolationbr, link-type EN10MB (Ethernet), capture size 65535 bytes
12:05:14.655667 IP 10.0.0.83 > 172.16.0.1: ICMP echo request, id 257, seq 0, length 64
12:05:14.658466 IP 172.16.0.1 > 10.0.0.83: ICMP echo reply, id 257, seq 0, length 64
12:05:15.657273 IP 10.0.0.83 > 172.16.0.1: ICMP echo request, id 257, seq 1, length 64
12:05:15.658252 IP 172.16.0.1 > 10.0.0.83: ICMP echo reply, id 257, seq 1, length 64
12:05:19.659800 ARP, Request who-has 10.0.0.83 tell 10.0.0.254, length 28
12:05:19.661522 ARP, Request who-has 10.0.0.254 tell 10.0.0.83, length 28
12:05:19.661569 ARP, Reply 10.0.0.254 is-at 16:85:a7:5c:84:9d, length 28
12:05:19.662053 ARP, Reply 10.0.0.83 is-at 52:54:00:88:88:88, length 28
12:05:47.759101 ARP, Request who-has 10.0.0.47 tell 10.0.0.83, length 28
12:05:47.760926 ARP, Reply 10.0.0.47 is-at 52:54:00:12:34:56, length 28
12:05:47.761579 IP 10.0.0.83 > 10.0.0.47: ICMP echo request, id 513, seq 0, length 64
12:05:47.765075 IP 10.0.0.47 > 10.0.0.83: ICMP echo reply, id 513, seq 0, length 64
12:05:48.759703 IP 10.0.0.83 > 10.0.0.47: ICMP echo request, id 513, seq 1, length 64
12:05:48.760848 IP 10.0.0.47 > 10.0.0.83: ICMP echo reply, id 513, seq 1, length 64
12:05:52.775287 ARP, Request who-has 10.0.0.83 tell 10.0.0.47, length 28
12:05:52.776601 ARP, Reply 10.0.0.83 is-at 52:54:00:88:88:88, length 28
12:05:59.376454 IP 10.0.0.83 > 172.16.31.7: ICMP echo request, id 769, seq 0, length 64
12:05:59.376548 IP 172.16.31.7 > 10.0.0.83: ICMP echo reply, id 769, seq 0, length 64
12:06:00.482899 IP 10.0.0.83 > 172.16.31.7: ICMP echo request, id 769, seq 1, length 64
12:06:00.483035 IP 172.16.31.7 > 10.0.0.83: ICMP echo reply, id 769, seq 1, length 64
12:06:04.376987 ARP, Request who-has 10.0.0.83 tell 10.0.0.254, length 28
12:06:04.378153 ARP, Reply 10.0.0.83 is-at 52:54:00:88:88:88, length 28
The packets on the physical network card are as follows:
# tcpdump -i eth0 icmp -nn
tcpdump: WARNING: eth0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
12:05:14.657680 IP 172.16.31.7 > 172.16.0.1: ICMP echo request, id 257, seq 0, length 64
12:05:14.658427 IP 172.16.0.1 > 172.16.31.7: ICMP echo reply, id 257, seq 0, length 64
12:05:15.657329 IP 172.16.31.7 > 172.16.0.1: ICMP echo request, id 257, seq 1, length 64
12:05:15.658215 IP 172.16.0.1 > 172.16.31.7: ICMP echo reply, id 257, seq 1, length 64
With that, the network models of our KVM virtualization platform are complete. These models are also important for the virtual networks of future cloud computing platforms.