oVirt 3.4 introduced support for Hosted Engine. What is Hosted Engine? Put simply: previously the management node (the Engine) was deployed on a physical machine, while with Hosted Engine it is deployed in a virtual machine, and that virtual machine runs on the compute nodes of the oVirt virtualized environment itself. The old order was: deploy the management node, deploy the compute nodes, register the compute nodes with the management node, and then run virtual machines on the compute nodes through the web management platform. Now you deploy the compute nodes first, and then deploy the management node in a virtual machine on those compute nodes. The sequencing has changed.
As mentioned above, the management node is now deployed inside a virtual machine. Since it is a virtual machine, we can prepare a management-node VM template in advance, the engine appliance, with the operating system and the management node's required packages pre-installed. When deploying an oVirt virtualized environment, a virtual machine is created directly from this template, and running engine-setup inside it completes the management node deployment. This saves time on management node deployment and enables rapid roll-out of the virtualized environment.
How is the engine appliance made? The oVirt community's ovirt-appliance project provides a build tool. Here is how to build an engine appliance with it:
1. Environment Preparation
Prepare a machine that can reach the Internet (with CPU virtualization support enabled), with more than 5G of memory and more than 10G of free disk space, plus CentOS 7 installation media (CentOS 6 is missing the required lorax package).
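The requirements above can be checked from a shell before starting. This is a minimal sketch; the function names are illustrative (not part of any oVirt tooling), and `/tmp` is assumed as the build location:

```shell
#!/bin/bash
# Pre-flight checks mirroring the article's requirements:
# CPU virtualization extensions, >5G RAM, >10G free disk.

has_cpu_virt() {                 # Intel VT-x shows as "vmx", AMD-V as "svm"
    grep -Eq 'vmx|svm' "${1:-/proc/cpuinfo}"
}

gb_at_least() {                  # $1 = measured GB, $2 = required GB
    [ "$1" -gt "$2" ]
}

mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail /tmp | tail -n1 | tr -dc '0-9')

gb_at_least "$mem_gb" 5   || echo "need more than 5G RAM (have ${mem_gb}G)"
gb_at_least "$disk_gb" 10 || echo "need more than 10G free disk (have ${disk_gb}G)"
has_cpu_virt              || echo "enable VT-x/AMD-V in the BIOS"
```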
2. Build Environment Setup
(1) Install the CentOS 7 system
(2) Install the dependent packages (the EPEL 7 software repository must be configured)
# yum -y install lorax pykickstart virt-install libguestfs-tools imagefactory oz
# yum -y groupinstall "Virtualization Host"
(3) Set SELinux to permissive mode
# setenforce 0
# sed -i "s/^SELINUX.*/SELINUX=permissive/g" /etc/sysconfig/selinux
3. Building the Appliance
(1) Download the ovirt-appliance package and prepare the appliance build environment
# cd /tmp/
# git clone git://gerrit.ovirt.org/ovirt-appliance
# cd ovirt-appliance
# git submodule update --init
# cd engine-appliance
(2) Build the appliance
(2.1) raw format (the output file is ovirt-appliance-fedora.raw)
# make ovirt-appliance-fedora.raw
(2.2) OVA format (the output file is ovirt-appliance-fedora.ova)
# make
Note: at this point we have the engine appliance file in raw or OVA format, which can be used as the management node VM template when deploying Hosted Engine.
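A quick way to sanity-check the build outputs: an OVA is simply a tar archive bundling the OVF descriptor with the disk image, so listing its members verifies the packaging. This is a sketch; the helper name is illustrative and the filenames are the ones produced above:

```shell
#!/bin/bash
# List the members of an OVA file (expect an .ovf descriptor plus the disk image).
inspect_ova() {
    tar -tf "$1"
}

# usage (after a successful build):
#   inspect_ova ovirt-appliance-fedora.ova
#   qemu-img info ovirt-appliance-fedora.raw    # confirm format and virtual size
```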
Common Errors and Fixes:
(1) Error importing the Version module, as follows:
Traceback (most recent call last):
  File "scripts/create_ova.py", line 4, in <module>
    from imagefactory_plugins.ovfcommon.ovfcommon import RHEVOVFPackage
  File "/tmp/ovirt-appliance/engine-appliance/imagefactory/imagefactory_plugins/ovfcommon/ovfcommon.py", line 28, in <module>
    from imgfac.PersistentImageManager import PersistentImageManager
  File "/tmp/ovirt-appliance/engine-appliance/imagefactory/imgfac/PersistentImageManager.py", line 17, in <module>
    from ApplicationConfiguration import ApplicationConfiguration
  File "/tmp/ovirt-appliance/engine-appliance/imagefactory/imgfac/ApplicationConfiguration.py", line 25, in <module>
    from imgfac.Version import VERSION as version
ImportError: No module named Version
make: *** [ovirt-appliance-fedora.ova] Error 1
Workaround:
# cp /usr/lib/python2.7/site-packages/imgfac/Version.py /tmp/ovirt-appliance/engine-appliance/imagefactory/imgfac/
(2) The qemu-kvm command was not found, as follows:
kill $(cat spawned_pids)
/bin/bash: line 1: qemu-kvm: command not found
make[1]: Leaving directory `/tmp/ovirt-appliance/engine-appliance'
Workaround:
# ln -s /usr/libexec/qemu-kvm /bin/
Engine Appliance Build Tool Analysis:
From the build procedure above it is clear that the engine appliance build is fully automated. If we want to apply some custom configuration to the engine appliance, how can that be done?
First, the workflow of the tools:
(1) Running make invokes the top-level Makefile (defines variables, nests a second make).
(2) The nested make -f imgbased/data/images/poor-mans-lmc-centos7.makefile (defines variables, executes the run-install target).
(3) run-install (defines variables, invokes the qemu-kvm command, reads the kickstart file, installs a system, and produces a qcow2-format file).
(4) Control returns to the first make, which calls the scripts/create_ova.py script to produce the OVA file.
Analysis of the relevant files:
(1) Makefile
MAIN_NAME ?= ovirt-appliance-centos7                  # output file base name
VM_CPUS ?= 2                                          # VM CPU count
VM_RAM ?= 4096                                        # VM RAM size (MB)
VM_DISK ?= 8000                                       # VM disk size (MB)
OVA_RAM ?= 4096                                       # OVA template RAM size (MB)
OVA_CPUS ?= $(VM_CPUS)                                # OVA template CPU count
ARCH := x86_64                                        # architecture
RELEASEVER := 7                                       # release version
PYTHON ?= PYTHONPATH="$(PWD)/imagefactory/" python    # Python environment variable
CURL ?= curl                                          # curl tool

.SECONDARY:
.PHONY: $(MAIN_NAME).ks.tpl                           # kickstart template file name
.INTERMEDIATE: hda.qcow2                              # VM disk intermediate file name

all: $(MAIN_NAME).ova
	echo "$(MAIN_NAME)" appliance done

# Flatten the ks template into a ks file and patch some of its settings:
%.ks: %.ks.tpl
	ksflatten $< > $@
	sed -i -e "/^[-]/ d" \
	       -e "/^text/ d" \
	       -e "s/^part .*/part \/ --size $(VM_DISK) --fstype ext4 --fsoptions discard/" \
	       -e "s/^network .*/network --activate/" \
	       -e "s/^%packages.*/%packages --ignoremissing/" \
	       -e "/default\.target/ s/^/#/" \
	       -e "/run_firstboot/ s/^/#/" \
	       -e "/remove authconfig/ s/^/#/" \
	       -e "/remove linux-firmware/ s/^/#/" \
	       -e "/remove firewalld/ s/^/#/" \
	       -e "/^bootloader/ s/bootloader .*/bootloader --location=mbr --timeout=1/" \
	       -e "/rawhide/ s/^/#/" \
	       -e "/^reboot/ s/reboot/poweroff/" \
	       -e "/^services/ s/sshd/sshd,initial-setup-text/" \
	       -e "/^firstboot/ s/$$/ --reconfig/" \
	       -e "s#\$$basearch#$(ARCH)#g" \
	       -e "s#\$$releasever#$(RELEASEVER)#g" \
	       $@

# Build the qcow2-format template file via the nested second make:
%.qcow2: %.ks
	make -f imgbased/data/images/poor-mans-lmc-centos7.makefile \
	     KICKSTART="$<" RELEASEVER=$(RELEASEVER) \
	     QEMU_APPEND="cmdline $(QEMU_APPEND)" \
	     DISK_SIZE=$$(( $(VM_DISK) / 1000 ))G \
	     run-install                          # run-install produces hda.qcow2
	qemu-img convert -O qcow2 hda.qcow2 "$@"  # convert hda.qcow2 to qcow2 and rename it
	rm -f hda.qcow2                           # delete hda.qcow2 (an intermediate file, not needed afterwards)

# Call scripts/create_ova.py to build the OVA-format template file:
%.ova: %.qcow2
	$(SUDO) $(PYTHON) scripts/create_ova.py -m $(OVA_RAM) -c $(OVA_CPUS) "$*.qcow2" "$@"

clean: clean-log                              # clear the logs
	echo

clean-log:
	rm -f *.log                               # delete log files
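The heart of the `%.ks` rule is the sed pass that rewrites the flattened kickstart. A minimal standalone reproduction with a subset of the Makefile's expressions (the sample kickstart lines are invented for illustration, and the 8000 MB size is the VM_DISK default hard-coded here):

```shell
#!/bin/bash
# Apply a few of the Makefile's kickstart rewrites to stdin.
patch_ks() {
    sed -e "s/^part .*/part \/ --size 8000 --fstype ext4 --fsoptions discard/" \
        -e "s/^network .*/network --activate/" \
        -e "s/^%packages.*/%packages --ignoremissing/" \
        -e "/^reboot/ s/reboot/poweroff/"
}

printf 'part /boot --size 500\nnetwork --device eth0\nreboot\n' | patch_ks
# →
# part / --size 8000 --fstype ext4 --fsoptions discard
# network --activate
# poweroff
```

This is why customizing the appliance usually means editing either the ks template or these sed expressions: every generated kickstart passes through them.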
(2) poor-mans-lmc-centos7.makefile
KICKSTART = kickstarts/runtime-layout.ks   # whether this file is actually used needs further study
DISK_NAME = hda.qcow2                      # VM disk file name
DISK_SIZE = 10G                            # VM disk size
VM_RAM = 2048                              # VM memory (MB)
VM_SMP = 4
QEMU = qemu-kvm                            # QEMU tool
QEMU_APPEND =                              # extra QEMU kernel cmdline arguments
CURL = curl -L -O                          # curl command options
CENTOS_RELEASEVER = 7                      # CentOS version
CENTOS_ANACONDA_RELEASEVER = 7             # Anaconda version
CENTOS_URL = http://192.168.3.239/mirrors/centos/7/os/x86_64/   # CentOS DVD mirror URL
CENTOS_ANACONDA_URL = $(CENTOS_URL)        # Anaconda URL
ifneq ($(CENTOS_RELEASEVER),$(CENTOS_ANACONDA_RELEASEVER))
CENTOS_ANACONDA_URL = http://192.168.3.239/mirrors/centos/7/os/x86_64/
endif
SHELL = /bin/bash

.INTERMEDIATE: spawned_pids

vmlinuz:                                   # fetch the installer kernel
	$(CURL) $(CENTOS_ANACONDA_URL)/isolinux/vmlinuz

initrd.img:                                # fetch the installer initrd
	$(CURL) $(CENTOS_ANACONDA_URL)/isolinux/initrd.img

squashfs.img:                              # fetch the installer stage2 image
	$(CURL) $(CENTOS_ANACONDA_URL)/LiveOS/squashfs.img

# The .treeinfo contents served to the installer:
define TREEINFO
[general]
name = CentOS-$(CENTOS_RELEASEVER)
family = CentOS
variant = CentOS
version = $(CENTOS_RELEASEVER)
packagedir =
arch = x86_64

[stage2]
mainimage = squashfs.img

[images-x86_64]
kernel = vmlinuz
initrd = initrd.img
endef

.PHONY: .treeinfo
export TREEINFO
.treeinfo:
	echo -e "$$TREEINFO" > $@

run-install: PYPORT := $(shell echo $$(( 50000 + $$RANDOM % 15000 )))
run-install: vmlinuz initrd.img squashfs.img .treeinfo $(KICKSTART)
	python -m SimpleHTTPServer $(PYPORT) & echo $$! > spawned_pids
	qemu-img create -f qcow2 $(DISK_NAME) $(DISK_SIZE)
	# boot the VM with qemu-kvm; anaconda installs and configures the system per the ks file
	$(QEMU) -vnc 0.0.0.0:7 -serial stdio \
	        -smp $(VM_SMP) -m $(VM_RAM) \
	        -hda $(DISK_NAME) \
	        -kernel vmlinuz -initrd initrd.img \
	        -append "console=ttyS0 inst.repo=$(CENTOS_URL) inst.ks=http://10.0.2.2:$(PYPORT)/$(KICKSTART) inst.stage2=http://10.0.2.2:$(PYPORT)/ quiet $(QEMU_APPEND)" ; \
	kill $$(cat spawned_pids)
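A note on the run-install target: it serves the kickstart to the guest over a throwaway HTTP server, where 10.0.2.2 is the host's address as seen from QEMU's default user-mode guest network, and the port is picked at random in [50000, 65000). The same port pick in plain shell (the `${RANDOM:-$$}` fallback to the PID is an addition for shells without `$RANDOM`):

```shell
#!/bin/bash
# Pick a random port in [50000, 65000), as the makefile's PYPORT expression does.
pick_port() {
    echo $(( 50000 + ${RANDOM:-$$} % 15000 ))
}

port=$(pick_port)
echo "$port"
# the makefile then effectively runs:
#   python -m SimpleHTTPServer $port &      (background, PID saved to spawned_pids)
# and boots the guest with inst.ks=http://10.0.2.2:$port/<kickstart>
```

Randomizing the port avoids collisions when several builds run on the same host; the server is killed as soon as the install finishes.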
This article is from the "Blackart" blog; please keep this source: http://blackart.blog.51cto.com/1142352/1558647
oVirt Special Topic: Hosted Engine Engine Appliance Production