OpenStack deployment summary: installing Icehouse (OVS + VLAN) with RDO on CentOS 6.5 with two compute nodes
This article describes how to install a two-compute-node Icehouse environment on CentOS 6.5 through RDO. The installation involves many packages with complex dependencies, so we recommend starting from a freshly installed operating system.
Hardware environment
Two Dell PCs, each with two NICs, each connected with a network cable. em1 serves the public and management network; em2 serves the virtual machine network.
Server | Public/Management Network | VM Network | Role
Server 1 | em1: 192.168.40.147 | em2 | Control, network, storage, and compute node
Server 2 | em1: 192.168.40.148 | em2 | Compute node
Because VLAN mode is used, the two switch ports connected to em2 must be configured as trunk ports.
A simple physical description is shown below.
A simple logical diagram, borrowed from Mr Chen's blog (ignore the IP addresses and device names):
Virtual machines communicate with each other through eth1 (em2 in my environment). They reach external networks through the L3 agent on Server 1, which forwards traffic via br-ex to eth0 (em1 in my environment).
Install the Operating System
Use a CD or image file to install the operating system.
When partitioning, you need to set aside a partition and create a volume group named cinder-volumes on it. This volume group will be used by Cinder.
The created results are similar:
For the creation process see: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-lvm-diskdruid-manual.html
You can also create the volume group from the command line after installing the operating system.
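For example, assuming the spare partition is /dev/sda3 (the device name here is an assumption; substitute your own partition), the volume group can be created with the standard LVM tools:

```
# Mark the partition as an LVM physical volume (assumed device: /dev/sda3)
pvcreate /dev/sda3
# Create the volume group under the name cinder expects
vgcreate cinder-volumes /dev/sda3
# Verify the result
vgdisplay cinder-volumes
```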
Modify /etc/fstab to remove the boot-time mount of cinder-volumes, otherwise the system may fail to boot:
sed -i '/cinder-volume/s/^/#/' /etc/fstab
Configure the network
The configuration is the same on both machines except for the IP and MAC addresses.
Edit /etc/sysconfig/network-scripts/ifcfg-em1 (for example with vi) as follows:
DEVICE=em1
HWADDR=F8:B1:56:AE:3A:84
TYPE=Ethernet
UUID=6f49b547-f1f8-4b21-a0fc-68791a5237dd
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.40.145
NETMASK=255.255.255.0
GATEWAY=192.168.40.1
DNS1=8.8.8.8
Edit /etc/sysconfig/network-scripts/ifcfg-em2 as follows:
DEVICE=em2
HWADDR=00:21:27:AE:16:A3
TYPE=Ethernet
UUID=9c5983f2-1932-4540-953f-7774a2aa5154
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.105.3
GATEWAY=192.168.105.1
NETMASK=255.255.255.0
DEFROUTE=no
After the above configuration, confirm that the network can be connected.
The NetworkManager service can be stopped and disabled; it is of no use in this setup and only consumes resources:
service NetworkManager stop
chkconfig NetworkManager off
Install related yum sources
Whether the yum sources are set up correctly directly affects whether the installation succeeds; many problems encountered during installation are source-related.
The installation process involves three sources:
Install the 163 Source
1. Back up /etc/yum.repos.d/CentOS-Base.repo:
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
2. Download the repo file for your CentOS version and put it in /etc/yum.repos.d/ (back up any existing file first).
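A sketch of step 2, assuming the usual mirrors.163.com path for CentOS 6 (the exact URL may have changed; check the mirror's help page):

```
cd /etc/yum.repos.d/
wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
```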
Install icehouse Source
Run the following command:
yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
Install epel Source
Select a 64-bit system and run the following command:
rpm -ivh http://download.fedora.redhat.com/pub/epel/6/x86_64/epel-release-6-5.noarch.rpm
Installing the sources above also generates a foreman.repo source; it can be deleted directly.
Set YUM cache
Due to network problems, the installation may fail repeatedly; enabling the yum cache makes re-installation after a failure much faster.
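The cache is controlled by the keepcache switch in /etc/yum.conf. A minimal sketch, shown here on a throwaway copy rather than the live file:

```shell
# Work on a scratch copy; on the real host, edit /etc/yum.conf itself.
conf=/tmp/yum.conf.demo
printf '[main]\ncachedir=/var/cache/yum/$basearch/$releasever\nkeepcache=0\n' > "$conf"

# Flip keepcache from 0 to 1 so downloaded RPMs are kept between runs.
sed -i 's/^keepcache=0$/keepcache=1/' "$conf"

grep '^keepcache=' "$conf"
```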
Install openstack-packstack
Run the following command:
yum install -y openstack-packstack
Some packages may repeatedly fail to install on the compute node; in that case, install them manually on the compute node and then run packstack again from the control node.
Configure and modify the packstack configuration file
To make it easy to rerun packstack with the same settings, first generate an answer file, make your modifications to it, and then run the openstack installation with that file specified.
Generate your own configuration file
packstack --gen-answer-file=vlan_2compute.txt
Modify configuration file
To try out new Icehouse features such as Heat, LBaaS, and Ceilometer, you need to install these components and make some adjustments to the network configuration.
Overwrite the following options in vlan_2compute.txt:
# install Heat in the environment
CONFIG_HEAT_INSTALL=y
CONFIG_NTP_SERVERS=0.uk.pool.ntp.org
# configure two compute nodes
CONFIG_COMPUTE_HOSTS=192.168.40.147,192.168.40.148
# dashboard login password
CONFIG_KEYSTONE_ADMIN_PW=admin
# the LVM volume group was already created in the previous step
CONFIG_CINDER_VOLUMES_CREATE=n
# network interface configuration
CONFIG_NOVA_COMPUTE_PRIVIF=em2
CONFIG_NOVA_NETWORK_PUBIF=em1
CONFIG_NOVA_NETWORK_PRIVIF=em2
CONFIG_LBAAS_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
# VLAN-mode settings
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:10:20
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:10:20
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-em2
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-em2:em2
# do not create the demo user
CONFIG_PROVISION_DEMO=n
CONFIG_HEAT_CLOUDWATCH_INSTALL=y
CONFIG_HEAT_CFN_INSTALL=y
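Overrides like these can also be applied to the answer file non-interactively with sed, which is handy when regenerating the file. A small sketch on a scratch file (on the real host the target would be vlan_2compute.txt):

```shell
# Demo on a scratch file so nothing real is touched.
f=/tmp/answer.demo
printf 'CONFIG_KEYSTONE_ADMIN_PW=changeme\nCONFIG_PROVISION_DEMO=y\n' > "$f"

# Rewrite each option in place, matching on the key name.
sed -i -e 's/^CONFIG_KEYSTONE_ADMIN_PW=.*/CONFIG_KEYSTONE_ADMIN_PW=admin/' \
       -e 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' "$f"

cat "$f"
```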
Modify Selinux Configuration
Edit /etc/selinux/config and set the following attribute (the change takes effect after a reboot, or immediately with setenforce 0):
SELINUX=permissive
Modify hosts
On each machine, add the IP address and hostname of the other node to /etc/hosts.
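With the hostnames that appear later in this article (icehouse on Server 1, icehouse1 on Server 2), the entries would look like:

```
192.168.40.147 icehouse
192.168.40.148 icehouse1
```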
Execute the configuration file
packstack --answer-file=vlan_2compute.txt
This process can take a long time and may be interrupted several times; after an interruption, simply rerun the command.
Login
Log in to the dashboard; the username and password are both admin.
OVS configuration after a successful installation:
Control node:
[root@icehouse ~]# ovs-vsctl show
fb04ef1e-278f-48d4-b20b-3eafb63de9cf
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "em1"
            Interface "em1"
        Port "qg-ea25d142-ea"
            Interface "qg-ea25d142-ea"
                type: internal
    Bridge "br-em2"
        Port "phy-br-em2"
            Interface "phy-br-em2"
        Port "em2"
            Interface "em2"
        Port "br-em2"
            Interface "br-em2"
                type: internal
    Bridge br-int
        Port "tapc07b9126-81"
            tag: 2
            Interface "tapc07b9126-81"
                type: internal
        Port "qvo6889c1b9-fb"
            tag: 1
            Interface "qvo6889c1b9-fb"
        Port "qvoe26e3b19-a4"
            tag: 1
            Interface "qvoe26e3b19-a4"
        Port "qvo8e422661-97"
            tag: 1
            Interface "qvo8e422661-97"
        Port "qr-9d77d069-84"
            tag: 1
            Interface "qr-9d77d069-84"
                type: internal
        Port "tap89c353d7-f6"
            tag: 1
            Interface "tap89c353d7-f6"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-em2"
            Interface "int-br-em2"
    ovs_version: "1.11.0"
Compute node:
[root@icehouse1 ~]# ovs-vsctl show
63be159a-193e-48d6-b472-4851d8c58af7
    Bridge br-int
        Port "qvoa7274e42-7b"
            tag: 1
            Interface "qvoa7274e42-7b"
        Port "int-br-em2"
            Interface "int-br-em2"
        Port "qvo6dfc5f97-c5"
            tag: 1
            Interface "qvo6dfc5f97-c5"
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-em2"
        Port "br-em2"
            Interface "br-em2"
                type: internal
        Port "em2"
            Interface "em2"
        Port "phy-br-em2"
            Interface "phy-br-em2"
    ovs_version: "1.11.0"