Xen Bridged Networking


Transferred from http://www.chenyajun.com/2009/03/06/2408

http://wiki.xensource.com/xenwiki/XenArchitecture?action=AttachFile&do=get&target=Xen+Architecture_Q1+2008.pdf

A Xen virtualization environment consists of several components that work together to provide a virtualized environment:
the Xen hypervisor;
Dom0, the privileged domain;
the domain management and control tools;
DomU PV guests;
DomU HVM guests.

For the relationships between them, see:
http://www.chenyajun.com/2009/03/01/xen-virtualization-model-explored/

The hypervisor is a software abstraction layer that sits above the hardware and below the operating systems. It is responsible for CPU scheduling and for allocating memory to the virtual machines. The hypervisor not only abstracts the hardware for the virtual machines but also controls their execution. It knows nothing about networking, storage devices, or other I/O functions.

Dom0
Dom0 is a modified Linux kernel, a unique virtual machine running on top of the Xen hypervisor with special rights to access physical I/O devices and to interact with the other (DomU) virtual machines. A Xen virtual environment requires Dom0 to be started before any other virtual machine.
The drivers included in Dom0 to serve DomU network and local disk requests are the network backend driver and the block backend driver.

DomU
All paravirtualized guests running on the Xen hypervisor are called Domain U PV guests; they can run modified Linux, Solaris, FreeBSD, and other operating systems. All fully virtualized guests running on the Xen hypervisor are called Domain U HVM guests; they run standard Windows or other unmodified operating systems.
A DomU PV guest is aware that it does not access hardware directly and that other virtual machines are running on the same machine. A DomU HVM guest is not aware that it shares the hardware with other virtual machines.
A DomU PV guest contains two drivers for network and disk access: the PV network driver and the PV block driver.

A DomU HVM guest does not have such PV drivers; instead, a special daemon runs in Dom0 for each HVM guest: qemu-dm. It serves the DomU HVM guest's network and disk access requests.
A DomU HVM guest must also perform some initialization and load a piece of software (firmware) that simulates the BIOS.
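Both pieces, the qemu-dm device model and the BIOS-emulating firmware, are named in the HVM guest's xm configuration file. A minimal sketch, assuming a typical /usr/lib/xen install layout; the guest name, disk image, and memory size are purely illustrative:

# /etc/xen/hvm-guest.cfg (illustrative)
kernel       = "/usr/lib/xen/boot/hvmloader"       # firmware that emulates the BIOS
builder      = "hvm"                               # fully virtualized guest
device_model = "/usr/lib/xen/bin/qemu-dm"          # per-guest daemon serving disk and network I/O
name         = "hvm-guest"
memory       = 512
vif          = [ "type=ioemu, bridge=xenbr0" ]     # emulated NIC attached to the xenbr0 bridge
disk         = [ "file:/var/lib/xen/images/hvm-guest.img,hda,w" ]
boot         = "c"                                 # boot from the virtual hard disk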

Domain management and control
A series of Linux processes running in Dom0 are categorized as the domain management and control tools; they are used to manage and control the virtual machines.

Xend
Xend is a Python application that acts as the system manager for the Xen environment. It uses libxenctrl to issue requests to the hypervisor; all requests to xend are initiated by the xm tool through an XML-RPC interface.

Xm
A command-line tool that takes user input and passes it to xend over XML-RPC.
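A few typical invocations (the domain name and configuration path are illustrative):

[user@dom0]# xm list                       # show all running domains
[user@dom0]# xm create /etc/xen/vm01.cfg   # start a guest from its configuration file
[user@dom0]# xm console vm01               # attach to the guest's console
[user@dom0]# xm shutdown vm01              # shut the guest down cleanly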

Xenstored
Xenstored maintains a registry of information, including the memory and event-channel links between Dom0 and the DomU guests.
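The store can be inspected from Dom0 with the xenstore-ls tool that ships with Xen; for example:

[user@dom0]# xenstore-ls                   # dump the entire store
[user@dom0]# xenstore-ls /local/domain/1   # only the entries for the domain with ID 1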

Libxenctrl
A C library that provides xend the ability to talk with the Xen hypervisor via Dom0. A special driver in Dom0, privcmd, delivers these requests to the hypervisor.

Qemu-dm
Each HVM guest requires its own qemu-dm daemon. This tool handles all network and disk requests from the DomU HVM fully virtualized guest. Qemu must run outside the Xen hypervisor because it needs access to the network and I/O facilities in Dom0.
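Since qemu-dm is an ordinary Dom0 process, one instance per running HVM guest can be seen with, for example:

[user@dom0]# ps -C qemu-dm -o pid,args     # one process per running HVM guest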

Xen Virtual Firmware
A virtual BIOS inserted into each DomU HVM guest to ensure that the operating system receives the standard boot instructions it expects during a normal boot process.

Communication between Dom0 and DomU
The Xen hypervisor does not itself service network or disk requests, so a DomU has to communicate with Dom0, through the hypervisor, to get its disk and network requests handled.
When the block device driver in a DomU PV guest receives a disk write request, it writes the data, via the Xen hypervisor, into a piece of local memory shared with Dom0. An event channel between Dom0 and the DomU PV guest lets them communicate through asynchronous inter-domain interrupts in the Xen hypervisor. Dom0 receives an interrupt from the hypervisor, which causes the PV block backend driver to read the blocks from the DomU PV guest's shared memory. The data in the shared memory is then written to the local disk.

As described above, the backend driver runs in the privileged domain and the frontend driver runs in the unprivileged domain. An unprivileged guest issues device requests to its frontend driver; the frontend driver then communicates with the backend driver running in the privileged domain, and the privileged domain queues the requests and issues them to the actual physical hardware.

Traffic between the frontend and backend drivers goes through XenBus: shared system memory organized as a producer/consumer ring buffer, together with event channels. To avoid expensive data copies, XenBus works by simply mapping pages. For example, when data is to be written to disk or sent over the network, a buffer belonging to the unprivileged domain can be mapped into the privileged domain; similarly, when data is read from disk or received from the network, a buffer controlled by the privileged domain can be mapped into the unprivileged domain.

This communication is set up through XenStore. When the frontend driver starts, it uses XenStore to establish a shared memory pool and an event channel for communicating with the backend driver. Once the connection is established, the frontend and backend place requests and responses in the shared memory and notify each other through the event channel. XenStore gives both the backend and frontend drivers visibility into this connection.
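As a rough illustration (exact key names vary between Xen versions), the frontend entries for a guest's first virtual network interface can be listed from Dom0:

[user@dom0]# xenstore-ls /local/domain/1/device/vif/0
# typically shows, among other keys:
#   backend                     path of the matching backend directory in Dom0
#   tx-ring-ref, rx-ring-ref    grant references for the shared ring buffers
#   event-channel               the event channel used for notifications
#   mac, state, ...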

Bridging, routing, and NAT
Xen provides three virtual network models for guest access to physical devices: bridging, routing, and NAT. In bridging mode, the virtual network interface (vif) is visible on the external LAN; in routing mode, the vif is not visible on the external LAN, but its IP address is; in NAT mode, the vif is not visible on the external LAN, and it does not have an externally visible IP address either.
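Which model xend sets up at start-up is chosen in /etc/xen/xend-config.sxp by enabling the matching pair of network and vif scripts (only one pair should be active); roughly:

# /etc/xen/xend-config.sxp -- enable exactly one network/vif script pair
(network-script network-bridge)    # bridging (the default)
(vif-script vif-bridge)
## routing:  (network-script network-route)  and  (vif-script vif-route)
## NAT:      (network-script network-nat)    and  (vif-script vif-nat)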

In bridging mode, the brctl tool is used to create a software bridge interface, and a physical network interface is then attached to the bridge. The backend vifs of the Xen guest domains are also attached to this bridge. When the bridge receives a packet from the physical interface, it forwards it to the appropriate domain based on the MAC address of that domain's virtual network card.
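The network-bridge and vif-bridge scripts do this automatically when xend starts; done by hand, it boils down to roughly the following (interface names as in the example further below):

[user@dom0]# brctl addbr xenbr0            # create the software bridge
[user@dom0]# brctl addif xenbr0 peth0      # attach the physical interface
[user@dom0]# brctl addif xenbr0 vif1.0     # attach a guest's backend interface
[user@dom0]# ip link set xenbr0 up         # bring the bridge up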

In routing mode, the iptables mechanism is used for routing. All packets received by the physical interface are processed by the IP layer of the driver domain. The driver domain (Dom0) looks up the matching routing table entry and forwards each packet to the appropriate guest IP address. In routing mode, the driver domain connects two different network segments: the segment used internally by the guests and the segment connected to the external network.
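In practice the network-route/vif-route scripts enable IP forwarding in Dom0 and add a host route per guest; a simplified sketch, with an illustrative guest address of 10.0.0.2 behind vif1.0:

[user@dom0]# echo 1 > /proc/sys/net/ipv4/ip_forward    # let Dom0 forward packets
[user@dom0]# ip route add 10.0.0.2 dev vif1.0          # host route to the guest's IP
[user@dom0]# iptables -A FORWARD -i vif1.0 -j ACCEPT   # allow forwarded traffic from the guest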

When the driver domain acts as a NAT gateway, it still behaves as a router, but it additionally maps its own IP address and ports to the guests' IP addresses and ports. The guests' IP addresses are hidden behind the driver domain and are not visible to the external network.
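The extra step is a source-NAT rule in Dom0; a minimal sketch, assuming eth0 is the external interface and the guests use a private 10.0.0.0/24 network:

[user@dom0]# iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
# rewrite the guests' private source addresses to Dom0's own external address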

The Linux firewall provides iptables for IP-level filtering, and ebtables provides basic MAC address filtering on the bridge. You can also specify which bridge, and therefore which physical network card, a domain uses.
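For example, an ebtables rule can drop frames arriving from a guest's vif with a spoofed source MAC address (the MAC shown is the guest address used in the bridge example below):

[user@dom0]# ebtables -A FORWARD -i vif1.0 ! -s 00:16:3e:45:e7:12 -j DROP
# drop bridged frames entering from vif1.0 whose source MAC is not the guest's own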

The following is an example of a bridged configuration.

veth0 and vif0.0 are Dom0's network interfaces; veth0 is renamed to eth0. xenbr0 is the software bridge interface. vif1.0 is the backend network interface of the running guest.
peth0, xenbr0, vif0.0, and vif1.0 all share the same MAC address, FE:FF:FF:FF:FF:FF, which is the Ethernet broadcast address. This means that the physical interface, Dom0's loopback interface, and the guest's backend interface simply pass all traffic up to xenbr0. When the physical network card receives a packet, it hands it directly to the bridge interface xenbr0, which uses the packet's MAC address to decide which domain's backend interface to forward it to. peth0 therefore needs no IP address, only a MAC address; the IP address that originally belonged to the physical interface has been handed to eth0, the virtual frontend interface of the driver domain. xenbr0 decides whether to forward a packet to eth0 or to vif1.0 based on its MAC address, either 00:11:25:f6:15:22 or 00:16:3e:45:e7:12. The corresponding frontend interface inside the guest domain is also named eth0; from Dom0's point of view, the guest's eth0 is vif1.0.
Output of the brctl command:

[user@dom0]# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              vif1.0
                                                        peth0
                                                        vif0.0
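Which bridge (and therefore which physical card) a guest attaches to, and the MAC address its vif uses, are set in the guest's configuration file; a sketch matching the example above:

# excerpt from the guest's xm configuration file
vif = [ "mac=00:16:3e:45:e7:12, bridge=xenbr0" ]   # fixed MAC address, attached to xenbr0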
