OpenNebula has two parts: the front-end and the cluster nodes. The front-end runs OpenNebula itself and monitors the virtual machines installed on the cluster nodes, the virtual machine images, and the status of each cluster node. A cluster node hosts and runs the virtual machines (under KVM, Xen, or VMware; this article uses KVM). The relationship between front-end and cluster node is one-to-many.
Unless otherwise specified in this article, commands are run on the front-end.
This article uses OpenNebula version 2.2.0.
1. Install the dependency packages
# cd /etc/yum.repos.d/
# wget http://centos.karan.org/kbsingh-CentOS-Extras.repo
# wget http://centos.karan.org/kbsingh-CentOS-Misc.repo
Edit the kbsingh-CentOS-Extras.repo file and enable its testing repository.
The file should look like this:
[kbs-CentOS-Testing]
name=CentOS.Karan.org-EL$releasever - Testing
gpgcheck=0
gpgkey=http://centos.karan.org/RPM-GPG-KEY-karan.org.txt
enabled=1 (change this option from 0 to 1)
baseurl=http://centos.karan.org/el$releasever/extras/testing/$basearch/RPMS/
Update the repository cache:
# yum clean all
# yum makecache
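To confirm the new repositories are active before installing anything, you can list the enabled repositories (a quick sanity check; the kbs-CentOS entries should appear):
# yum repolist enabled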
# yum -y install glibc-common glibc-devel cpp glibc-headers kernel-headers libgomp libstdc++-devel nscd gcc-c++ rpm-build yum-utils pkgconfig
# yum -y install libxml2 libxml2-devel expat-devel libxslt-devel openssl-devel curl-devel
# yum -y install ruby ruby-libs ruby-devel ruby-irb ruby-docs ruby-rdoc ruby-ri rubygems cmake
2. Compile and install sqlite3 from source (version 3.6.17; no other version has been tried)
# wget http://www.sqlite.org/sqlite-amalgamation-3.6.17.tar.gz
# tar xvzf /tmp/sqlite-amalgamation-3.6.17.tar.gz
# cd sqlite-3.6.17/
# ./configure
# make
# make install
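To verify the build, you can ask the freshly installed sqlite3 shell for its version (it should report 3.6.17):
# sqlite3 -version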
3. Install xmlrpc-c (if xmlrpc* packages have been installed through yum, uninstall them first)
# wget http://centos.karan.org/el5/extras/testing/SRPMS/xmlrpc-c-1.06.18-1.el5.kb.src.rpm
# rpmbuild --rebuild xmlrpc-c-1.06.18-1.el5.kb.src.rpm
# yum -y --nogpgcheck localinstall /usr/src/redhat/RPMS/x86_64/xmlrpc-c-*.rpm
(adjust the x86_64 directory to your architecture, e.g. i386)
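As a quick check that the packages installed cleanly, you can query the version through the xmlrpc-c-config helper that ships with the devel package (it should report 1.06.18):
# xmlrpc-c-config --version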
4. Install scons
# wget http://prdownloads.sourceforge.net/scons/scons-2.0.1-1.noarch.rpm
# yum -y --nogpgcheck localinstall scons-2.0.1-1.noarch.rpm
5. Install Ruby from source (it is needed in later steps, and the version installed by yum is too old)
# wget ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.1-p0.tar.gz
# tar zxvf ruby-1.9.1-p0.tar.gz
# cd ruby-1.9.1-p0
# ./configure
# make
# make install
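To verify that the source-built Ruby took precedence over any yum-installed one, check the version (it should report 1.9.1):
# ruby -v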
6. Install gems
# gem install nokogiri rake xmlparser
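You can confirm the gems installed with a quick listing:
# gem list | grep -E 'nokogiri|rake|xmlparser'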
The official documentation calls for creating a oneadmin account. Unfortunately, doing so causes a series of permission problems later on, so we recommend installing directly as root.
7. Install OpenNebula
# mkdir -p /srv/cloud/one
# mkdir -p /srv/cloud/images
Download the latest OpenNebula release, unpack it, and cd into the source directory.
# scons
# ./install.sh -u root -g root -d /srv/cloud/one
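If the installation succeeded, the target directory should now contain the self-contained OpenNebula tree (directories such as bin, etc, lib, share, and var):
# ls /srv/cloud/one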
8. Configure the environment
# vim ~/.bashrc, add at the end:
export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$HOME/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=/srv/cloud/one/bin:$PATH
After editing, run source ~/.bashrc to make the configuration take effect.
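To confirm the variables were picked up, you can grep the environment:
# env | grep ONE_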
9. Add an OpenNebula user
# mkdir ~/.one
# vim ~/.one/one_auth, add a username:password pair, for example:
cloud:cloudpassword
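Equivalently, you can create the file from the command line; the chmod is an optional precaution since the file stores a plaintext password:
# echo "cloud:cloudpassword" > ~/.one/one_auth
# chmod 600 ~/.one/one_auth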
10. Modify the OpenNebula configuration file
# vi /srv/cloud/one/etc/oned.conf and set SCRIPTS_REMOTE_DIR=/srv/cloud/one/var
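For reference, the line in oned.conf should end up reading as below (the stock value is typically /var/tmp/one):
SCRIPTS_REMOTE_DIR=/srv/cloud/one/var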
11. Start OpenNebula
# one start
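To verify that oned came up, you can run any CLI command and check the daemon log (paths assume the self-contained install above):
# onevm list
# tail /srv/cloud/one/var/oned.log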
12. Start the NFS service
Allow the cluster nodes to mount /srv/cloud.
# yum install nfs-utils
# vim /etc/exports, add the following at the end of the file:
/srv/cloud *(rw,sync,no_root_squash)
# exportfs -a
# /etc/rc.d/init.d/nfs restart
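To confirm the export is visible, query the NFS server itself; /srv/cloud should be listed:
# showmount -e localhost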
13. Install KVM on the cluster node
# yum groupinstall KVM
# reboot
(You can use lsmod | grep kvm to check whether the KVM modules are loaded.)
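The exact module name depends on the processor; expect output along these lines (kvm_intel on Intel CPUs, kvm_amd on AMD):
# lsmod | grep kvm
kvm_intel              ...
kvm                    ...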
14. Install Ruby on the cluster node (it is needed in later steps, and the version installed by yum is too old)
# wget ftp://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.1-p0.tar.gz
# tar zxvf ruby-1.9.1-p0.tar.gz
# cd ruby-1.9.1-p0
# ./configure
# make
# make install
15. Mount NFS on the cluster node
# mount -t nfs front-end:/srv/cloud /srv/cloud (replace front-end with the domain name or IP address of the front-end server)
To mount automatically at startup:
# vi /etc/fstab, add at the end of the file:
front-end:/srv/cloud /srv/cloud nfs defaults 0 0
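After mounting, you can confirm the share is attached:
# df -h /srv/cloud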
16. Create a bridge on the cluster node
To allow the front-end host to reach the virtual machines, a bridge must be set up on the cluster node.
# brctl addbr virbr0 (if this bridge was already created during the KVM installation, the system will report that it exists)
# ifconfig eth0 0.0.0.0 up
# brctl addif virbr0 eth0
# ifconfig virbr0 <old_eth0_ip_address>
(Run these commands at the cluster node's local console, not over a remote connection: the connection will be dropped the moment eth0 loses its IP address.)
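The brctl/ifconfig commands above do not survive a reboot. A sketch of making the bridge persistent on CentOS, assuming the standard network-scripts layout (substitute your real IP address and netmask):
# vi /etc/sysconfig/network-scripts/ifcfg-virbr0 (new file)
DEVICE=virbr0
TYPE=Bridge
BOOTPROTO=static
IPADDR=<old_eth0_ip_address>
NETMASK=255.255.255.0
ONBOOT=yes
# vi /etc/sysconfig/network-scripts/ifcfg-eth0 (replace its contents)
DEVICE=eth0
BRIDGE=virbr0
ONBOOT=yes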
At this point, all configurations are complete.