http://blog.lofyer.org/6-2-2-cloud-ha-ovirt/
12. Build a Highly Available oVirt (Hosted Engine)
I wrote this article when oVirt was at version 3.4.
Here we will use the previously created distributed-replicate Gluster storage, so that the availability of the management services is improved as well.
Here are a few things to note:
1. The host CPU must belong to one of these families: Westmere (Westmere E56xx/L56xx/X56xx), Nehalem (Intel Core i7 9xx), Penryn (Intel Core 2 Duo P9xxx), or Conroe (Intel Celeron_4x0). Otherwise the cluster CPU type will be incompatible with the host and the datacenter will fail to start. For the CPU family table, see "Intel Architecture and Processor Identification with CPUID Model and Family Numbers".
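A quick way to check which family a host's CPU belongs to is to inspect /proc/cpuinfo. This is a minimal sketch, assuming a Linux host; the exact model name varies by vendor and generation:

```shell
# Show the CPU model name, which maps to one of the families above.
grep -m1 'model name' /proc/cpuinfo
# The flags line also helps: Westmere and newer Intel CPUs expose
# the "aes" flag, which older families (Penryn, Conroe) lack.
grep -m1 '^flags' /proc/cpuinfo
```

If the model reported here is newer than the cluster CPU type you select, the host will still join; the problem described above occurs only when the host CPU is older than the cluster type.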
2. It is recommended to follow section 11 and install the engine virtual machine under oVirt management in advance, with its disk in raw format, so that when deploying the hosted engine you can import that virtual disk or use it to overwrite the OVF-deployed one, reducing downtime if the deployment fails.
Preparation
On each host, add the FQDN of the engine virtual machine, ha.lofyer.org, to /etc/hosts:

# echo -e '192.168.10.100\tha.lofyer.org' >> /etc/hosts
For storage we can use the GlusterFS volume created earlier, exposed over NFSv3; note that the brick ownership must be set to vdsm:kvm (36:36).
# gluster volume create gluster-vol1 replica 2 gs1.example.com:/gluster_brick0 gs2.example.com:/gluster_brick0 gs3.example.com:/gluster_brick0 gs4.example.com:/gluster_brick0 gs1.example.com:/gluster_brick1 gs2.example.com:/gluster_brick1 gs3.example.com:/gluster_brick1 gs4.example.com:/gluster_brick1 force
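One way to apply the vdsm:kvm (36:36) brick ownership mentioned above is through GlusterFS's standard storage.owner volume options, or a direct chown on each brick. This is a sketch, assuming the volume name gluster-vol1 and the brick paths from the create command:

```shell
# Start the volume, then tell GlusterFS to present files as uid/gid 36
# (vdsm:kvm), which is what oVirt's storage layer expects.
gluster volume start gluster-vol1
gluster volume set gluster-vol1 storage.owner-uid 36
gluster volume set gluster-vol1 storage.owner-gid 36

# Alternatively, chown the brick directories directly on each server:
chown -R 36:36 /gluster_brick0 /gluster_brick1
```

The volume-option route is usually preferable because it survives brick replacement and does not need to be repeated on every server.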
Because the engine and node network setup depends on the network service rather than NetworkManager, we need to enable the former and disable the latter. On each server, make the following changes to /etc/sysconfig/network-scripts/ifcfg-eth0:
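Switching from NetworkManager to the network service can be sketched as follows, assuming an EL6-style host (typical for the oVirt 3.4 era); the systemd equivalents are shown in the comments:

```shell
# Stop and disable NetworkManager, then enable the classic network service.
chkconfig NetworkManager off
chkconfig network on
service NetworkManager stop
service network start

# On a systemd-based host the equivalent would be:
#   systemctl disable --now NetworkManager
#   systemctl enable --now network
```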