Mud: KVM and Kickstart Integration
This article was sponsored by Xiuyi Linfeng and first published on the Dark World blog.
I would like to explain how to integrate KVM with Kickstart here, because in my earlier article on unattended CentOS installation, I mentioned that using the NIC's PXE feature requires a DHCP server on the intranet. That is, the intranet must have a DHCP server from which the client obtains the address of the TFTP server; otherwise the server cannot be installed unattended, and CentOS cannot be installed over the network.
When CentOS is installed on a physical machine over the network, this condition is mandatory: a DHCP server must exist on the intranet, otherwise the client NIC cannot obtain an IP address.
However, when we install virtual machines through KVM, the network does not need a DHCP server. You only need to configure the IP address in the KVM installation command and in Kickstart's configuration file ks.cfg.
Why?
We covered the principles of PXE in the article "Mud: Kickstart Unattended Installation of CentOS 6.5". A PXE-based CentOS installation needs an IP address in two phases: in the first phase, to download the installation kernel over TFTP; in the second phase, to reach the installation source during system installation. If a DHCP server exists on the intranet, the IP addresses for both phases are obtained directly from it. If we instead configure a fixed IP address for the NIC, no DHCP server is required: the IP address for the first phase is passed in the KVM virtual machine installation command, and the IP address for the second phase is configured in the ks.cfg file.
Let me add a bit of networking background here. Even without a router, two machines on the same network can communicate normally as long as their IP addresses are in the same subnet.
In the rest of this article, I will cover the integration of KVM and Kickstart in two parts: with a DHCP server on the network, and without one.
I. A DHCP Server Exists on the Network
Since this involves KVM, you can refer to my previous article "Mud: Virtualization KVM Installation and Configuration" for setting up and configuring KVM. We also use LVM and raw devices here; for those topics, see "LVM Basics" and "Mud: KVM Uses Raw Devices to Configure Virtual Machines".
Since this part relies on a DHCP server, we first enable one. For DHCP server configuration, refer to the article "Mud: Installing and Configuring the DHCP Server in CentOS".
Start the DHCP server as follows:
/etc/init.d/dhcpd start
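If you have not yet configured dhcpd, a minimal configuration for this setup might look like the sketch below. The subnet, address range, and TFTP options are assumptions based on the 192.168.1.0/24 addresses used later in this article; adjust them to your own network. The next-server and filename options matter only when booting physical machines over PXE.

```conf
# /etc/dhcpd.conf (sketch; values are assumptions for a 192.168.1.0/24 network)
ddns-update-style none;
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;   # pool handed out to installing clients
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.1;
    next-server 192.168.1.11;            # TFTP server (PXE, physical machines only)
    filename "pxelinux.0";               # boot loader fetched over TFTP
}
```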
After the DHCP service is enabled, we start to create an LV logical volume as the virtual machine's hard disk. As follows:
lvcreate -L 20G -n kickstart vg1
lvs
After the Virtual Machine hard disk is created, run the following command to create a virtual machine:
virt-install -n kickstart -r 2048 --vcpus=1 --os-type=linux -l nfs:192.168.1.11:/iso -f /dev/vg1/kickstart --bridge=br0 -m 52:54:00:12:D7:5D --nographics -x "console=ttyS0 ks=nfs:192.168.1.11:/ks.cfg"
This command creates a virtual machine named kickstart with 2048 MB of memory and one vCPU. The guest OS type is linux, its hard disk is /dev/vg1/kickstart, its NIC is bridged to the physical interface br0, and the NIC's MAC address is set to 52:54:00:12:D7:5D. The installation source is nfs:192.168.1.11:/iso. KVM installs the virtual machine without a graphical interface, and the Kickstart configuration file is located at nfs:192.168.1.11:/ks.cfg. After the virtual machine is installed, you can connect to it through the serial console.
Here we will introduce the main parameters of this command:
1. Installation source and KS source location
Here we use NFS; of course we could also use FTP or HTTP. For how to specify an NFS source in virt-install, see its help documentation:
virt-install --help
You can also use man virt-install as follows:
2. Virtual machine hard disk
We use the raw device /dev/vg1/kickstart as the virtual machine's hard disk. For more information about virtual machine disk options, see the help documentation:
virt-install --help
man virt-install
3. MAC address
We specified a MAC address for the VM. Note that the MAC address must be in uppercase; otherwise KVM reports an error. In addition, the option for specifying the MAC address is documented only in man virt-install. As follows:
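Since a lowercase MAC address triggers the error, it can be convenient to normalize the address before passing it to virt-install. This is just a small sketch using the standard tr utility; the example address is the one used in this article.

```shell
# Normalize a MAC address to the uppercase form virt-install expects.
mac="52:54:00:12:d7:5d"
mac_upper=$(printf '%s' "$mac" | tr '[:lower:]' '[:upper:]')
echo "$mac_upper"    # 52:54:00:12:D7:5D
```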
man virt-install
4. KS source configuration
The KS source of this virtual machine is fetched over NFS. You can also check man virt-install for this option:
These are the key parameters of the virtual machine creation command. Why introduce them in such detail? Because they are all required when installing the VM, and walking through them helps us better understand virt-install's options and how to use the KVM help documentation.
The above covers the command to create the VM. Remember, we have not actually created it yet; we still need to prepare the Kickstart configuration file ks.cfg. As follows:
more ks.cfg
The key part of ks.cfg here is the network section: for this test, the virtual machine obtains its IP address via DHCP during installation.
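For reference, the network-related lines of ks.cfg for this DHCP case might look like the sketch below. The device name eth0 is an assumption, and the rest of the file (partitioning, package selection, and so on) is omitted.

```conf
# ks.cfg (network section only, sketch)
install
nfs --server=192.168.1.11 --dir=/iso      # installation source, matching the -l option
network --bootproto=dhcp --device=eth0 --onboot=yes
```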
After this configuration is complete, we will officially install the KVM virtual machine, as shown below:
After the system is installed, check the IP address and MAC address of the virtual machine. As follows:
ifconfig
We can see that the virtual machine has indeed obtained an IP address, and that the MAC address is the one we specified.
That covers the case where a DHCP server exists on the network. Next we will look at the case without one.
II. No DHCP Server Exists on the Network
First, disable the DHCP server, as shown below:
/etc/init.d/dhcpd stop
Next, modify the network configuration in Kickstart's ks.cfg.
In the ks.cfg file, we define the virtual machine's IP address, subnet mask, default gateway, DNS server, and host name. As follows:
IP: 192.168.1.220 DNS: 192.168.1.1 hostname: ilanni
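The corresponding network line in ks.cfg might look like the sketch below. The netmask and gateway values are assumptions consistent with the virt-install command used in this part, and the device name eth0 is also an assumption.

```conf
# ks.cfg (static network configuration, sketch)
network --bootproto=static --device=eth0 --onboot=yes \
    --ip=192.168.1.220 --netmask=255.255.255.0 --gateway=192.168.1.1 \
    --nameserver=192.168.1.1 --hostname=ilanni
```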
After the modification is complete, create a VM through KVM and run the following command:
virt-install -n kickstart -r 2048 --vcpus=1 --os-type=linux -l nfs:192.168.1.11:/iso -f /dev/vg1/kickstart --bridge=br0 -m 52:54:00:12:D7:5D --nographics -x "console=ttyS0 ip=192.168.1.220 netmask=255.255.255.0 gateway=192.168.1.1 ks=nfs:192.168.1.11:/ks.cfg ksdevice=eth0"
Note: the IP address passed on the KVM command line can be the same as or different from the one defined in ks.cfg, but the two addresses must be in the same subnet. The VM's final IP address is the one configured in ks.cfg.
From the two screenshots above, we can see that even without a DHCP server, we can install CentOS over the network by configuring the IP address both in the KVM installation command and in ks.cfg.
After the system is installed, restart it, log in to the virtual machine, and check the network configuration and host name we defined. As follows:
ifconfig
more /etc/resolv.conf
We can see that the VM's network configuration and host name were applied from the ks.cfg file.
That concludes the experiment; let me recap.
With KVM, a network installation of CentOS does not necessarily require a DHCP server. However, installing CentOS over the network on a physical machine does require DHCP. Installing CentOS over the network, whether in KVM or on a physical machine, requires the support of a TFTP server.
How are Linux cluster resources managed?
Resource manager: to allocate the proper resources to jobs, a database must be maintained for cluster resource management. This database records the attributes and status of all resources in the cluster system, all user-submitted requests, and the running jobs. The policy manager generates a priority list from this data according to the configured scheduling policy, and the resource manager schedules jobs based on that list. The resource manager should also be able to reserve resources: this way, powerful resources can be reserved for the jobs that need them, and spare resources can be reserved to cope with node failures and sudden bursts of computation in the cluster.
Job scheduling policy manager: based on the resource manager, the policy manager obtains the resource status of each node and the system's job information to generate a priority list. This list tells the resource manager when to run which job on which nodes. The policy manager not only provides a rich set of parameters for defining the computing environment and jobs, but also offers a simple and flexible way to express those definitions, allowing the system administrator to implement policy-driven resource scheduling.
2. Job Management Software in the Beowulf Cluster
There are many options to manage resources in the cluster system. PBS Resource Manager and Maui Job scheduler are most suitable for cluster systems.
2.1 PBS
PBS (Portable Batch System) is a flexible batch processing system developed by NASA. It is used on cluster systems, supercomputers, and large-scale parallel systems. PBS has the following features:
Ease of use: provides unified interfaces for all resources, is easy to configure, and can meet the needs of different systems. Its flexible job scheduler allows different systems to adopt their own scheduling policies.
Portability: complies with the POSIX 1003.2 standard and can be used in shell, batch, and other environments.
Adaptability: adapts to various management policies and provides an extensible authentication and security model. Supports dynamic load distribution over the WAN and virtual organizations built from multiple physical entities in different locations.
Flexibility: supports both interactive and batch jobs.
OpenPBS (www.OpenPBS.org/) is the open-source implementation of PBS. For the commercial version of PBS, see www.pbspro.com/.
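To make the batch-processing idea concrete, a minimal PBS job script might look like the sketch below. The job name and resource request are assumptions for illustration; PBS itself sets PBS_O_WORKDIR when it runs the job, and the fallback to "." lets the script also run outside PBS.

```shell
#!/bin/sh
# Minimal PBS job script sketch (directive values are assumptions).
#PBS -N hello              # job name
#PBS -l nodes=1:ppn=1      # request one node, one processor per node
#PBS -j oe                 # merge stdout and stderr into one output file
# PBS sets PBS_O_WORKDIR to the submission directory; fall back to "."
# so the script also runs standalone.
cd "${PBS_O_WORKDIR:-.}" || exit 1
echo "Running on $(hostname)"
```

Such a script would typically be submitted with qsub and its status checked with qstat.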
2.2 Maui
Maui is an advanced job scheduler. It uses active scheduling policies to optimize resource utilization and reduce job response time. Maui's resource and load management allows advanced parameter configuration: job priority, scheduling and allocation, fairness (fair share), and reservation policies. Maui's QoS mechanism allows direct passing of resources and services, policy exemptions, and restricted access to specified features. Maui uses an advanced resource reservation architecture to control precisely when, where, by whom, and how resources are used. Maui's reservation architecture fully supports non-intrusive metadata scheduling.
Maui's design benefits from the experience of the world's largest high-performance computing centers. Maui itself also provides test tools and simulators for estimating and tuning system performance.
Maui needs a resource manager to work with it; we can think of Maui as a plug-in for PBS.
For more Maui information, visit www.supercluster.org.
3. Cluster System Management
From the perspective of system composition, the cluster system is...