Virtual Server Cluster construction and Virtual Machine migration


Hardware: Dell PowerEdge R200 × 4

Dell PowerEdge T100 × 1

Software: Ubuntu Server 10.04, Xen 4.01, Ghost For Linux

Cluster architecture:

MainNode: No.1 Blade Server

OS: Ubuntu Server 10.04

Outer IP: xxx.xxx

Inner IP: 192.168.100.1

Node1: No. 2 Blade Server

OS: Ubuntu Server 10.04 + Xen 4.01

Inner IP: 192.168.100.2

Node2: No. 3 Blade Server

OS: Ubuntu Server 10.04 + Xen 4.01

Inner IP: 192.168.100.3

Node3: No. 4 Blade Server

OS: Ubuntu Server 10.04 + Xen 4.01

Inner IP: 192.168.100.4

SharedStorageCentre: Tower Server

OS: Ubuntu Server 10.04

Inner IP: 192.168.100.5

First, install and configure the system environment on one server, then use G4L (Ghost For Linux) to push the installed system image directly to the other machines. Because I use the Xen virtualization environment, the underlying hardware must be identical when pushing the system; whether non-virtualized environments have the same hardware-consistency requirement is left for readers to verify.

Note:

1. Here, the R200 and T100 must be installed separately; the image cannot be pushed between the two models;

2. Install ssh before pushing the system. In the network topology I designed, ssh is required for access between nodes;

3. When the Xen virtualization environment is installed on the Dell PowerEdge servers, the boot hangs at "* Speech-dispatcher configured for user sessions" after installation. The graphical interface should be disabled: edit /etc/X11/default-display-manager ($ vi /etc/X11/default-display-manager) and change its content to false. After the next restart, the boot appears to stop at "Checking battery state". At this point Xen has actually started successfully; just switch to another virtual terminal to log in, for example with Alt + F2 (see the sketch after these notes).

4. The use of G4L will be covered in a follow-up article.
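
For note 3, a minimal sketch of the same change done from a root shell, assuming the stock Ubuntu 10.04 file layout:

# echo false > /etc/X11/default-display-manager
# reboot

After the reboot, when the console appears to hang at "Checking battery state", press Alt + F2 to switch to another virtual terminal and log in.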

On each system pushed with G4L, you need to make the following three changes:

1. Change the host name so that each node can be identified later. The procedure is as follows (a consolidated sketch follows this step):

(1) Switch to the root user:

Run the sudo su command.

(2) As the root user:

1) Edit the file /etc/hosts and replace the line

127.0.1.1 xxxxx

with

127.0.1.1 newhostname

2) Edit the /etc/hostname file, delete all of its contents, and add newhostname.

3) Run the hostname newhostname command.

(3) Exit the root shell and log back in as a normal user.

Note: xxxxx is the original host name and newhostname is the new host name you want to set.
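
The same hostname change done in one root session, as a sketch (newhostname is a placeholder for whatever name you choose):

# sed -i 's/^127\.0\.1\.1.*/127.0.1.1 newhostname/' /etc/hosts
# echo newhostname > /etc/hostname
# hostname newhostname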

2. Change the IP address of each node. Because each node's system is a copy of the same image, the first NIC is no longer named eth0 by default. For example, my machines have two NICs; after pushing the system, ifconfig shows eth2 and eth3, while eth0 and eth1 are missing. Edit the /etc/udev/rules.d/70-persistent-net.rules file; you will find entries for eth0 through eth3. Delete the stale eth0 and eth1 entries copied from the source machine, rename the eth2 and eth3 entries to eth0 and eth1, and restart (see the sketch below).
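
For reference, a sketch of what the edited 70-persistent-net.rules typically looks like on Ubuntu 10.04 (the MAC addresses are placeholders; only the two lines for this machine's real NICs remain, with their NAME values changed):

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:03", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:04", KERNEL=="eth*", NAME="eth1"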

3. After the system has been pushed and installed, the network may still not work. Do not worry: run route. If you see two default entries, delete the one on the peth interface by running sudo route del default dev peth0. The reason is that with two default routes in the routing table, the machine does not know which interface to send packets out of.

In the network topology I designed, the internal nodes cannot access the Internet directly. Therefore, run the following on the main node so that its two NICs act as a NAT router for the internal network:

iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth1 -j SNAT --to-source xxx.xxx (the Outer IP)

echo 1 > /proc/sys/net/ipv4/ip_forward
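
A quick way to verify that forwarding and the SNAT rule are in place (run as root on the main node):

# cat /proc/sys/net/ipv4/ip_forward
# iptables -t nat -L POSTROUTING -n

The first command should print 1, and the second should list the SNAT rule added above.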

At this point, the system environment of your cluster is basically set up. Next, set up shared storage for virtual machine migration. Xen migration comes in two forms, save/restore (offline) migration and live (dynamic) migration; I use live migration.

First, add the IP addresses and node names of all nodes to /etc/hosts on every machine, so that each node can reach the others by name.

192.168.100.1 MainNode

192.168.100.2 Node1

192.168.100.3 Node2

192.168.100.4 Node3

192.168.100.5 SharedStorageCentre

SharedStorageCentre:

1. Run sudo apt-get install nfs-kernel-server.

2. Run sudo mkdir /xen-storage.

3. Edit the/etc/exports file and add the following line to export the storage directory:

/xen-storage *(rw,sync,no_root_squash)
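
After editing /etc/exports, re-export the directory and restart the NFS service so the change takes effect (a sketch for Ubuntu Server 10.04):

# exportfs -ra
# /etc/init.d/nfs-kernel-server restart
# showmount -e localhost

The last command should list /xen-storage.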

Node1-Node3:

1. Run sudo mkdir /xen-storage.

2. Mount the share temporarily: sudo mount SharedStorageCentre:/xen-storage /xen-storage.

3. Add the line SharedStorageCentre:/xen-storage /xen-storage nfs defaults 0 0 to /etc/fstab so that the share is mounted automatically after a restart (a quick check is sketched below).
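
To check the fstab entry without rebooting (a sketch; the nfs-common package provides the NFS mount helper on Ubuntu and may need to be installed first):

# apt-get install nfs-common
# mount -a
# df -h /xen-storage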

4. You must modify the default Xend configuration file /etc/xen/xend-config.sxp so that domains are allowed to be relocated. For the modification to take effect, restart the host where the Xen server runs. The relevant options are described below, followed by an example excerpt:

xend-relocation-server: this flag enables or disables the relocation (migration) server. By default it is set to no, meaning the host does not accept incoming migrations; set it to yes. During migration, the domain's memory is transferred over the network in plain text, without any encryption, so be careful when enabling this option on untrusted networks.

xend-relocation-port: the port on which the xend daemon listens for migration traffic. The default value is 8002.

In addition, there are two parameters that usually do not need to be modified, but deserve attention when migrating in a production deployment environment:

xend-relocation-address: restricts domain migration to a specific interface. The specified address is the local address on which xend listens for relocation connections. This option takes effect only when xend-relocation-server is enabled. By default, connections are accepted on all interfaces.

xend-relocation-hosts-allow: defines which hosts are allowed to talk to the relocation port. The value is a sequence of regular expressions separated by spaces. If the value is empty, all connections are allowed; otherwise, the remote host must match one of the expressions, either by IP address or by fully qualified domain name.
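
Putting the four options together, an example excerpt of /etc/xen/xend-config.sxp for this cluster (the hosts-allow pattern restricting relocation to 192.168.100.x is my own assumption; adjust it to your policy, or leave it empty to allow all hosts):

(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-address '')
(xend-relocation-hosts-allow '^192\\.168\\.100\\.')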

5. Create the Xen guest domains with their disk images stored on the NFS share, and start the guest domains on Node1-Node3. Concretely, create and run the virtual machines on Node1-Node3 with their image and configuration files placed under /xen-storage, so that every node sees the same storage (an example configuration file follows).
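
A minimal example of a guest configuration file kept on the shared storage (the file names, memory size, kernel paths, and bridge name are assumptions for illustration only):

# /xen-storage/vm1.cfg -- paravirtualized guest whose disk lives on the NFS share
name    = "vm1"
memory  = 512
vcpus   = 1
kernel  = "/boot/vmlinuz-2.6.32-21-server"
ramdisk = "/boot/initrd.img-2.6.32-21-server"
disk    = ['file:/xen-storage/vm1.img,xvda,w']
root    = "/dev/xvda ro"
vif     = ['bridge=eth0']

Start it on any node with xm create /xen-storage/vm1.cfg; because the image sits on the NFS share, every node sees the same disk, which is what makes live migration possible.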

6. Live (dynamic) migration command:

# xm migrate --live DomName DestinationIP
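
For example, to move the running vm1 guest from the current node to Node2 (using the host names added to /etc/hosts earlier):

# xm migrate --live vm1 Node2

Afterwards, xm list on Node2 should show vm1, and xm list on the source node should no longer show it.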
