Preparations for building an OpenStack cloud system on CentOS 7
The system uses a minimal installation of CentOS 7.1 with English as the language. For installation details, see the CentOS 7.1 system installation walkthrough. After installation is complete, configure the network.
This guide uses OpenStack Networking (neutron).
Controller node network configuration:
1. The first interface is configured as the management interface.
IP address: 10.0.0.11
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
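On CentOS 7 this static addressing can be sketched as an ifcfg file, for example /etc/sysconfig/network-scripts/ifcfg-eth0 (the device name eth0 is an assumption; keep the HWADDR and UUID lines your file already has):

```
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.11
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
```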
Restart the network service:
# service network restart
Configure name resolution:
1. Set the hostname of the controller node:
# hostnamectl set-hostname controller
2. Edit the /etc/hosts file to include the following:
# Controller
10.0.0.11 controller
# Network
10.0.0.21 network
# Compute
10.0.0.31 compute
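A sketch of adding these entries from the shell (written to a scratch file here; on a real node the target would be /etc/hosts, appended as root):

```shell
# Target file; on a real node this would be /etc/hosts (run as root).
HOSTS_FILE=hosts.demo

# Append the management-network entries for all three nodes.
cat >> "$HOSTS_FILE" <<'EOF'
# controller
10.0.0.11       controller
# network
10.0.0.21       network
# compute
10.0.0.31       compute
EOF
```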
Network node network configuration:
1. The first interface is configured as a management interface.
IP address: 10.0.0.21
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
2. Configure the second interface as the instance tunnel interface.
IP address: 10.0.1.21
Network mask: 255.255.255.0 (or /24)
3. Configure the third interface as the external interface. The external interface uses a special configuration and is not assigned an IP address.
Replace INTERFACE_NAME with the actual interface name, for example eth2 or ens256.
Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following (do not change the HWADDR and UUID keys):
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
4. Restart the network service:
# service network restart
Configure name resolution:
1. Set the hostname of the network node:
# hostnamectl set-hostname network
2. Edit the /etc/hosts file to include the following:
# Network
10.0.0.21 network
# Controller
10.0.0.11 controller
# Compute
10.0.0.31 compute
Compute node network configuration:
1. The first interface is configured as the management interface.
IP address: 10.0.0.31
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
Note:
Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
2. Configure the second interface as the instance tunnel interface.
IP address: 10.0.1.31
Network mask: 255.255.255.0 (or /24)
3. Restart the system:
# reboot
Configure name resolution:
1. Set the hostname of the node to compute1:
# hostnamectl set-hostname compute1
2. Edit the /etc/hosts file to include the following:
# Compute1
10.0.0.31 compute1
# Controller
10.0.0.11 controller
# Network
10.0.0.21 network
Verify connectivity. We recommend that you verify network connectivity and Internet access between the nodes.
1. Ping an external site, for example www.baidu.com, from the controller node:
[root@controller ~]# ping -c 4 www.baidu.com
PING www.a.shifen.com (112.80.248.73) 56(84) bytes of data.
64 bytes from 112.80.248.73: icmp_seq=1 ttl=54 time=36.2 ms
64 bytes from 112.80.248.73: icmp_seq=2 ttl=54 time=36.8 ms
64 bytes from 112.80.248.73: icmp_seq=3 ttl=54 time=36.3 ms
64 bytes from 112.80.248.73: icmp_seq=4 ttl=54 time=36.2 ms
--- www.a.shifen.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 36.242/36.443/36.885/0.262 ms
2. Ping the compute node from the controller node:
[root@controller ~]# ping -c 4 compute
PING compute (10.0.0.31) 56(84) bytes of data.
64 bytes from compute (10.0.0.31): icmp_seq=1 ttl=64 time=0.915 ms
64 bytes from compute (10.0.0.31): icmp_seq=2 ttl=64 time=0.408 ms
64 bytes from compute (10.0.0.31): icmp_seq=3 ttl=64 time=0.411 ms
64 bytes from compute (10.0.0.31): icmp_seq=4 ttl=64 time=0.478 ms
--- compute ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.408/0.553/0.915/0.210 ms
3. Ping the network node from the controller node:
[root@controller ~]# ping -c 4 network
PING network (10.0.0.21) 56(84) bytes of data.
64 bytes from network (10.0.0.21): icmp_seq=1 ttl=64 time=0.731 ms
64 bytes from network (10.0.0.21): icmp_seq=2 ttl=64 time=0.389 ms
64 bytes from network (10.0.0.21): icmp_seq=3 ttl=64 time=0.351 ms
64 bytes from network (10.0.0.21): icmp_seq=4 ttl=64 time=0.382 ms
--- network ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.351/0.463/0.731/0.155 ms
4. Ping an external site from the network node.
5. Ping the controller node from the network node.
6. Ping the compute node from the network node.
7. Ping an external site from the compute node.
8. Ping the controller node from the compute node.
9. Ping the network node from the compute node.
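The per-node checks above can be sketched as a small loop; run it on each node with that node's peers as targets (the hostnames assume the /etc/hosts entries from earlier, and the external site is the one used in the sample output):

```shell
# check_node: ping a host 4 times quietly; succeed or fail by exit status.
check_node() {
    ping -c 4 -W 2 "$1" > /dev/null 2>&1
}

# Targets for one node's verification pass; adjust per node.
for target in controller network compute www.baidu.com; do
    if check_node "$target"; then
        echo "OK   $target"
    else
        echo "FAIL $target"
    fi
done
```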
Configure Network Time Protocol (NTP) on each node.
Install the NTP service:
# yum install ntp
Configure the NTP service. By default, the controller node synchronizes time through a pool of public servers; you can also configure alternative servers, such as ones your organization provides, in the /etc/ntp.conf file.
1. Edit the /etc/ntp.conf file:
server NTP_SERVER iburst   # if you have no internal NTP server, this line is unnecessary; the default pool entries suffice
restrict -4 default kod notrap nomodify
restrict -6 default kod notrap nomodify
Replace NTP_SERVER with the hostname or IP address of a suitable, more accurate NTP server.
restrict 10.0.0.0 mask 255.255.255.0
This allows hosts in the 10.0.0.0/24 management network to query the NTP server.
2. Start the NTP service and configure it to start when the system boots:
# systemctl enable ntpd.service
# systemctl start ntpd.service
Install the NTP service on the other two nodes:
# yum install ntp
Configure the NTP service on the network and compute nodes to reference the controller node.
1. Edit the /etc/ntp.conf file:
server controller iburst
2. Start the NTP service and configure it to start when the system boots:
# systemctl enable ntpd.service
# systemctl start ntpd.service
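The ntp.conf edit for the network and compute nodes can be sketched as follows (demonstrated on a scratch copy of the file; on a real node you would operate on /etc/ntp.conf as root):

```shell
# Work on a scratch copy; on a real node this would be /etc/ntp.conf.
NTP_CONF=ntp.conf.demo
printf 'server 0.centos.pool.ntp.org iburst\nserver 1.centos.pool.ntp.org iburst\n' > "$NTP_CONF"

# Comment out the distribution's default pool servers...
sed -i.bak 's/^server /#server /' "$NTP_CONF"

# ...and reference the controller node instead.
echo 'server controller iburst' >> "$NTP_CONF"
```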
Verify operation:
We recommend that you verify NTP synchronization before proceeding. Some nodes, particularly those that reference the controller node, can take several minutes to synchronize.
1. Run this command on the controller node:
# ntpq -c peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp-server1     192.0.2.11       2 u  169 1024  377    1.901   -0.611   5.483
+ntp-server2     192.0.2.12       2 u  887 1024  377    0.922   -0.246   2.864
2. Run this command on the controller node:
# ntpq -c assoc
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 20487  961a   yes   yes  none  sys.peer    sys_peer  1
  2 20488  941a   yes   yes  none candidate    sys_peer  1
The condition column should indicate sys.peer for at least one server.
3. Run the following command on the other nodes:
# ntpq -c peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*controller      192.0.2.21       3 u   47   64   37    0.308   -0.251   0.079
4. Run the following command on the other nodes:
# ntpq -c assoc
ind assid status  conf reach auth condition  last_event cnt
===========================================================
  1 21181  963a   yes   yes  none  sys.peer    sys_peer  3
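The sys.peer check above can be scripted as a small helper; this sketch reads an `ntpq -c assoc` listing on stdin and succeeds only if at least one association is in the sys.peer state:

```shell
# Succeed if the association listing on stdin contains a sys.peer entry,
# i.e. ntpd has selected a synchronization source.
ntp_synced() {
    grep -q 'sys\.peer'
}

# On a live node you would pipe real output:
#   ntpq -c assoc | ntp_synced && echo "NTP synchronized"
```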
Security
OpenStack services support various security methods, including password policies and encryption. In addition, supporting services, including database servers and message brokers, support at least password security. To simplify the installation process, this guide only covers password security where applicable. You can create secure passwords manually, generate them with a tool such as pwgen, or run the following command:
$ openssl rand -hex 10
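The command above emits 10 random bytes as 20 lowercase hex characters, which is adequate for the service passwords below; a small sketch wrapping it (the helper name is illustrative, and any strong generator works):

```shell
# gen_pass: generate one 20-character hex password (10 random bytes).
gen_pass() {
    openssl rand -hex 10
}

PASS=$(gen_pass)
echo "$PASS"
```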
For OpenStack services, this guide uses SERVICE_PASS to refer to service account passwords and SERVICE_DBPASS to refer to database passwords.
The following table lists the required passwords and the services they are associated with: