Openstack architecture-Keystone component (1)


This blog post builds the Keystone component of the OpenStack architecture; later posts will cover Glance, Nova, Neutron, Horizon, Cinder, and virtual machine management in OpenStack. Before starting the deployment, it helps to understand what OpenStack is.

What is openstack?

OpenStack is at once a community, a project, and a body of open source software. It provides the software for building public and private clouds: an operating platform, or tool set, for cloud deployment whose purpose is to help organizations run virtual compute and storage services in the cloud, offering scalable and flexible cloud computing for public clouds, private clouds, and deployments large and small.
The OpenStack open source project is maintained by the community and includes OpenStack Compute (code-named Nova), OpenStack Object Storage (code-named Swift), and OpenStack Image Service (code-named Glance). OpenStack provides an operating platform, or toolkit, for cloud orchestration.

Openstack components:

OpenStack currently has three main components: compute, storage, and image.

  • OpenStack Compute is the cloud controller, used to start virtual instances for a user or a group; it is also used to configure networking for the instances in each project.
  • OpenStack Object Storage is a system for storing objects in large-capacity clusters with built-in redundancy and fault tolerance. Object storage has many applications, such as backing up or archiving data, storing graphics or video, storing secondary or tertiary static data, and developing new applications that integrate with data storage.
  • OpenStack Image Service is a system for discovering and retrieving virtual machine images. It can be configured in three ways: using OpenStack Object Storage to store images; using Amazon S3 storage directly; or using S3 as intermediate storage, accessed through Object Storage.
OpenStack composition

The whole of OpenStack consists of control nodes, compute nodes, network nodes, and storage nodes.
Control node: controls the other nodes, covering virtual machine creation, migration, network allocation, and storage allocation.
Compute node: responsible for running virtual machines.
Network node: responsible for communication between the external network and the internal network.
Storage node: responsible for the additional storage management of virtual machines.

    Control node architecture

    Control nodes include the following services:

  • Management Support Service
  • Basic Management Service
  • Extended Management Service
1) Management support services include MySQL and Qpid.
MySQL: the database that stores the data generated by the basic and extended services.
Qpid: a message broker (also called message-oriented middleware) that provides a unified messaging service for the other services; this deployment uses RabbitMQ in the same role.
2) Basic management services include five services: Keystone, Glance, Nova, Neutron, and Horizon.
Keystone: the identity service, which creates, modifies, and manages authentication information and tokens for all other components. It uses MySQL as its unified database.
Glance: the image service, which manages the images available during virtual machine deployment, including image import, format conversion, and the corresponding templates.
Nova: the compute service, which manages the nova-compute services on the compute nodes and communicates with them through the nova-api.
Neutron: the network service, which provides network topology management for the network nodes and supplies Neutron's management panel in Horizon.
Horizon: the console service, which provides web-based management of all services on all nodes. This service is commonly called the dashboard.
3) Extended management services include five services: Cinder, Swift, Trove, Heat, and Ceilometer.
Cinder: provides Cinder-related management of the storage nodes and supplies Cinder's management panel in Horizon.
Swift: provides Swift-related management of the storage nodes and supplies Swift's management panel in Horizon.
Trove: provides Trove-related management of the database nodes and supplies Trove's management panel in Horizon.
Heat: provides template-based orchestration of resources in the cloud environment, including basic operations such as initialization, dependency handling, and deployment, as well as advanced features such as auto-scaling and load balancing.
Ceilometer: monitors physical and virtual resources, records and analyzes the data, and triggers corresponding actions when set conditions are met.

Lab environment:

This experiment requires three virtual machines: a control node (which also hosts the image service), a compute node, and a storage node. It is recommended to give each of the three virtual machines two CPUs and 4 GB of memory.

Host        System    IP address       Role
Controller  CentOS 7  192.168.37.128   Keystone, NTP, MariaDB, RabbitMQ, Memcached, etcd, Apache
Compute     CentOS 7  192.168.37.130   Nova, NTP
Cinder      CentOS 7  192.168.37.131   Cinder, NTP
Experiment process:

I. Environment preparation (three virtual machines)

1. Disable the firewall and disable SELinux.

systemctl stop firewalld.service
setenforce 0
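
Note that setenforce 0 only lasts until the next reboot; making the change persistent means editing /etc/selinux/config as well. A minimal sketch, demonstrated on a temporary copy of the file so the path here is illustrative:

```shell
# Persistently switch SELinux to permissive mode by editing its config file.
# Demonstrated on a temp copy; on a real node the target is /etc/selinux/config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"    # sample of the real file
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"       # flip the mode
grep '^SELINUX=' "$cfg"    # prints: SELINUX=permissive
```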

2. Modify the host names respectively.

hostnamectl set-hostname controller    # control node
bash
hostnamectl set-hostname compute       # compute node
bash
hostnamectl set-hostname cinder        # storage node
bash

3. Modify the hosts file

vim /etc/hosts
192.168.37.128 controller
192.168.37.130 compute
192.168.37.131 cinder
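
Rather than editing the file by hand on each node, the three entries can also be appended with a heredoc. A minimal sketch, written to a temporary file here so it can run anywhere; on the nodes the target would be /etc/hosts:

```shell
# Append the cluster's name-resolution entries with a heredoc.
# Demonstrated on a temp file; on each node the target would be /etc/hosts.
hosts=$(mktemp)
cat >> "$hosts" <<'EOF'
192.168.37.128 controller
192.168.37.130 compute
192.168.37.131 cinder
EOF
grep -c '192\.168\.37\.' "$hosts"    # prints: 3
```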

4. Test node connectivity

ping -c 4 openstack.org    # control node: send 4 packets to test external connectivity
ping -c 4 compute
ping -c 4 openstack.org    # compute node test
ping -c 4 controller
ping -c 4 openstack.org    # storage node test
ping -c 4 controller

5. Back up the default Yum Source

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

6. Download the latest Yum Source

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

7. Install the required openstack Software Package

yum install centos-release-openstack-queens -y
yum upgrade -y    # update the software repository
yum install python-openstackclient -y
yum install openstack-selinux -y

II. Configure the NTP clock service

# Controller node ##
1. Install chrony software package in yum

yum install chrony -y

2. Modify the chrony configuration file

vim /etc/chrony.conf
# Insert at the beginning of the file:
server controller iburst     # all nodes use the controller node as the time source
allow 192.168.37.0/24        # set the network segment allowed to synchronize time

3. Enable the NTP service

systemctl enable chronyd
systemctl stop chronyd
systemctl start chronyd
# chronyd starts automatically at boot, so stop it and start it again to load the new configuration.

# Other node configurations ##
1. Install chrony software package in yum

yum install chrony -y

2. Modify the chrony configuration file

vim /etc/chrony.conf
server controller iburst    # synchronize from the controller

3. enable the Service

systemctl stop chronyd
systemctl start chronyd

4. Verify the Clock Synchronization Service on the Controller

chronyc sources

III. Database deployment (controller node)

1. Install mariadb in yum

yum install mariadb-server python2-PyMySQL -y

2. Modify the mariadb configuration file

vim /etc/my.cnf.d/mariadb-server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
# The following lines are newly added:
bind-address = 192.168.37.128         # bind to the controller address
default-storage-engine = innodb       # default storage engine
innodb_file_per_table = on            # one file per table
max_connections = 4096                # maximum number of connections
collation-server = utf8_general_ci    # character set collation
character-set-server = utf8

3. Enable the mariadb service and set to enable auto-start

systemctl enable mariadb.service
systemctl start mariadb.service

4. Basic Database settings

mysql_secure_installation
# Basic setup: set the root password (abc123 here) and press Enter to accept the defaults for everything else.

IV. RabbitMQ service deployment (controller node)

1. Install the rabbitmq-server package in yum

yum install rabbitmq-server -y

2. Enable the rabbitmq service and set to enable auto-start

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

3. After the service has started, add a user and grant permissions

rabbitmqctl add_user openstack 123456                   # add a user
rabbitmqctl set_permissions openstack ".*" ".*" ".*"    # grant configure, write, and read permissions

V. Memcached service deployment (controller node)

1. Install the memcached package in yum

yum install memcached python-memcached -y

2. Modify the memcached configuration file

vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.37.128,::1"    # modify the listening IP address

3. Enable the memcached service and enable auto-start

systemctl enable memcached.service
systemctl start memcached.service

VI. Deployment of the etcd service-discovery mechanism (controller node)
  • etcd is a highly available distributed key-value database.
  • It is used for service discovery, which solves one of the most common problems in a distributed system: how can processes or services in the same distributed cluster find each other and establish connections?

1. Install the etcd package in yum

yum install etcd -y

2. Modify the etcd configuration file. The result is as follows:

vim /etc/etcd/etcd.conf
# Enable the cluster function: all URL addresses (peer and client) point at the controller.
[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"                       # data storage location
ETCD_LISTEN_PEER_URLS="http://192.168.37.128:2380"               # listen address for cluster peers
ETCD_LISTEN_CLIENT_URLS="http://192.168.37.128:2379"             # listen address for clients
ETCD_NAME="controller"                                           # member name
[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.37.128:2380"    # advertised peer address
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.37.128:2379"          # advertised client address
ETCD_INITIAL_CLUSTER="controller=http://192.168.37.128:2380"     # initial cluster membership
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"                     # cluster token
ETCD_INITIAL_CLUSTER_STATE="new"                                 # state of the new cluster


3. Enable the etcd service and set auto-start upon startup.

systemctl enable etcd.service
systemctl start etcd.service

VII. Keystone authentication (controller node)

1. Create a separate database for Keystone, then declare the user and grant privileges

mysql -uroot -p    # password abc123
create database keystone;
grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123';    # authorize the local user
grant all privileges on keystone.* to 'keystone'@'%' identified by '123';            # authorize remote users
flush privileges;

2. Yum installation package

yum install openstack-keystone httpd mod_wsgi -y

3. Edit the keystone configuration file

vim /etc/keystone/keystone.conf
[database]    # around line 737
connection = mysql+pymysql://keystone:123@controller/keystone
[token]       # around line 2922
provider = fernet    # secure message-passing algorithm

4. Synchronize Databases

su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Set a password for the administrator and register three access methods

keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

VIII. Apache service deployment

1. Edit the httpd configuration file

vim /etc/httpd/conf/httpd.conf
ServerName controller

2. Create a symbolic link so that Apache can find the Keystone WSGI configuration

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3. Enable the apache service and set Automatic startup

systemctl enable httpd.service
systemctl start httpd.service

4. Declare Environment Variables

export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
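
After exporting the credentials, a quick sanity check is to filter the shell environment for OS_* variables. A minimal sketch, re-exporting two of the values above so the snippet stands alone:

```shell
# Re-export two of the credentials so this snippet is self-contained,
# then list every OS_* variable currently set.
export OS_USERNAME=admin
export OS_IDENTITY_API_VERSION=3
env | grep '^OS_' | sort    # each exported OS_* variable should appear here
```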

IX. Create a demo project for management

1. Create domain

openstack domain create --description "Domain" example

2. Create a Project Service Project

openstack project create --domain default --description "Service Project" service

3. Create a platform Demo project

openstack project create --domain default --description "Demo Project" demo

4. Create a demo user

openstack user create --domain default --password-prompt demo
# enter password: 123456

5. Create a User Role

openstack role create user

6. Add User roles to demo projects and users

openstack role add --project demo --user demo user

X. Verify Keystone operations

1. Unset the environment variables

unset OS_AUTH_URL OS_PASSWORD

2. Request a token as the admin user

openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
# password: 123456


3. Request a token as the demo user

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue
# password: 123456

4. Create the admin-openrc script

vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

5. Create a demo-openrc script

vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

6. Use a script to return the authentication token.

source ~/admin-openrc
openstack token issue
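
`openstack token issue` prints its result as a table; in scripts it is often handier to extract just the token value. A sketch using awk on a sample of that table layout (the id value below is illustrative, not a real token):

```shell
# Extract the 'id' field from the table that `openstack token issue` prints.
# The sample output below mimics the table layout; the id is a placeholder.
sample='+------------+-----------+
| Field      | Value     |
+------------+-----------+
| expires    | ...       |
| id         | gAAAAexam |
+------------+-----------+'
token=$(printf '%s\n' "$sample" | awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /,"",$3); print $3}')
echo "$token"    # prints: gAAAAexam
```

In Queens and later, the simpler `openstack token issue -f value -c id` output-format options serve the same purpose when the client is available.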

