OpenStack Storage Module: Cinder
The basics of OpenStack were covered in the first two articles of this series and are not repeated here. This article introduces Cinder, the OpenStack block storage module.
Storage generally falls into three categories:
Block storage: raw hard drives, storage devices, disk arrays (RAID), and so on.
File storage: storage used primarily for file sharing, such as NFS and FTP.
Object storage: distributed storage such as Swift, where each object carries descriptive metadata.
Cinder can be backed by a variety of the storage technologies above.
Cinder components
cinder-api: accepts API requests and routes them to cinder-volume for execution.
cinder-volume: responds to requests, reads and writes the database to maintain state, interacts with other processes (such as cinder-scheduler) through the message queue, and works directly with the block storage hardware or software it manages.
cinder-scheduler: a daemon. Similar to nova-scheduler, it selects the optimal block storage node on which to create a volume.
cinder-backup: a daemon. It backs up volumes of any type to a backup storage provider. Like cinder-volume, it can interact with a variety of storage providers through its driver architecture.
Cinder database configuration and service registration
http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder-controller-install.html
Create the database and grant access:
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
To create the service credentials, complete these steps:
# source admin-openstack.sh
# openstack user create --domain default --password-prompt cinder
Add the admin role to the cinder user:
# openstack role add --project service --user cinder admin
Create the cinder and cinderv2 service entities:
# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 27b797388aaa479ea5542048df32b3d8 |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 85f9890df5444a5d9a989c96b630c7a7 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
Create the API endpoints for the Block Storage service; both versions must be registered:
# openstack endpoint create --region RegionOne volume public http://172.16.10.50:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume internal http://172.16.10.50:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volume admin http://172.16.10.50:8776/v1/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 public http://172.16.10.50:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 internal http://172.16.10.50:8776/v2/%\(tenant_id\)s
# openstack endpoint create --region RegionOne volumev2 admin http://172.16.10.50:8776/v2/%\(tenant_id\)s
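The trailing %\(tenant_id\)s in each URL is a substitution placeholder (the backslashes only shield the parentheses from the shell); Keystone expands it to the requesting project's id at runtime. A minimal Python sketch of that expansion, using a made-up project id:

```python
# Illustrative only: how the %(tenant_id)s placeholder in a registered
# endpoint URL is expanded per project. The tenant id is a made-up example.
template = "http://172.16.10.50:8776/v2/%(tenant_id)s"
url = template % {"tenant_id": "b1e5485c3602aaaa479ea5542048df32"}
print(url)
```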
Cinder installation and configuration
Install the Cinder packages on the control node:
# yum install -y openstack-cinder
Edit /etc/cinder/cinder.conf and complete the following actions:
In the [database] section, configure database access (the password is cinder):
connection = mysql+pymysql://cinder:cinder@172.16.10.50/cinder
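The connection option is a SQLAlchemy database URL of the form mysql+pymysql://user:password@host/dbname. As an illustrative, standalone sketch (assuming the cinder user, cinder password, and the 172.16.10.50 controller used throughout this article), the standard library can decompose such a URL:

```python
from urllib.parse import urlsplit

# Illustrative only: pull apart the pieces of a SQLAlchemy connection URL.
conn = "mysql+pymysql://cinder:cinder@172.16.10.50/cinder"
parts = urlsplit(conn)
print(parts.scheme)                    # dialect+driver: mysql+pymysql
print(parts.username, parts.password)  # database credentials
print(parts.hostname)                  # database host
print(parts.path.lstrip("/"))          # database name
```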
Synchronize the database for the Block Storage service:
# su -s /bin/sh -c "cinder-manage db sync" cinder
Confirm that the database synchronization succeeded:
# mysql -h 172.16.10.50 -ucinder -pcinder -e "use cinder; show tables;"
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
[DEFAULT]
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = 172.16.10.50
rabbit_userid = openstack
rabbit_password = openstack
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://172.16.10.50:5000
auth_url = http://172.16.10.50:35357
memcached_servers = 172.16.10.50:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
In the [oslo_concurrency] section, configure the lock path:
lock_path = /var/lib/cinder/tmp
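Since cinder.conf is a standard INI file, edits like those above can be sanity-checked with an ordinary INI parser. A minimal Python sketch using a fragment of the options set in this section (illustrative only, not part of the install procedure):

```python
import configparser

# Illustrative only: confirm cinder.conf fragments parse as INI and the
# expected options come back with the expected values.
sample = """
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
"""
cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg.get("DEFAULT", "auth_strategy"))
print(cfg.get("oslo_concurrency", "lock_path"))
```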
Edit /etc/nova/nova.conf and add the following to it:
[cinder]
os_region_name = RegionOne
Restart nova-api:
# systemctl restart openstack-nova-api.service
Start cinder-api (listening on port 8776) and cinder-scheduler:
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Installing and configuring the storage node
The storage node can be a dedicated storage server, or it can be co-located with a compute node; here a compute node is used to provide the storage service.
An additional disk is required on the compute node.
Create an LVM physical volume on /dev/sdb, then create the volume group cinder-volumes on it:
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
Restrict the block storage volume group so that only instances can access it:
By default, the LVM volume scanning tool scans the /dev directory for block devices that contain volumes. If projects also use LVM on their own volumes, the scanning tool detects those volumes and attempts to cache them, which can cause a variety of problems on both the underlying operating system and the project volumes. LVM must therefore be reconfigured to scan only the device that contains the cinder-volumes volume group. Edit /etc/lvm/lvm.conf and complete the following:
In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:
devices {
...
filter = [ "a/sdb/", "r/.*/" ]
}
Each element in the filter array begins with a (accept) or r (reject), followed by a regular expression matching the device name. The array must end with r/.*/ to reject any remaining devices. You can run vgs -vvvv to test the filter.
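The filter's first-match-wins behavior can be sketched as follows (illustrative only; real LVM applies these regexes to its own list of device paths):

```python
import re

# Mimic lvm.conf: filter = [ "a/sdb/", "r/.*/" ]
# Each rule is (action, regex); the first regex that matches decides.
RULES = [("a", "sdb"), ("r", ".*")]

def lvm_accepts(device, rules=RULES):
    for action, pattern in rules:
        if re.search(pattern, device):
            return action == "a"
    return True  # devices no rule matches are accepted by default

print(lvm_accepts("/dev/sdb"))  # accepted by a/sdb/
print(lvm_accepts("/dev/sda"))  # rejected by the trailing r/.*/
```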
Installing and configuring cinder on the storage node
Install the packages:
# yum install -y openstack-cinder targetcli python-keystone
Configure cinder on the storage node:
The cinder configuration on the storage node differs very little from the control node's, so you can copy the file directly from the control node and then adjust its ownership and permissions:
# scp /etc/cinder/cinder.conf 172.16.10.51:/etc/cinder/
In the [lvm] section of cinder.conf, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service. If the section does not exist, add it manually:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
In the [DEFAULT] section, enable the LVM backend:
enabled_backends = lvm
Start the block storage volume service and its dependencies:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
On the control node, verify that the configuration succeeded: the State column must show up for each service. If a service is not up, cloud drives cannot be created or attached:
# source admin-openstack.sh
# cinder service-list
+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |        Host       | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | node1             | nova | enabled |   up  | 2016-11-02T09:16:34.000000 |        -        |
| cinder-volume    | [email protected] | nova | enabled |   up  | 2016-11-02T09:16:39.000000 |        -        |
+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
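As an illustrative sketch (not part of the install), the "is every service up?" check can be automated by parsing the table rows; the host names below are hypothetical examples, not real output:

```python
# Illustrative only: parse `cinder service-list`-style rows and collect the
# State column per service binary. Host names are hypothetical examples.
rows = """\
| cinder-scheduler | node1     | nova | enabled | up | 2016-11-02T09:16:34.000000 | - |
| cinder-volume    | node2@lvm | nova | enabled | up | 2016-11-02T09:16:39.000000 | - |
"""

def service_states(table):
    states = {}
    for line in table.strip().splitlines():
        cols = [c.strip() for c in line.strip().strip("|").split("|")]
        states[cols[0]] = cols[4]  # Binary -> State
    return states

print(service_states(rows))
```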
Add a volume to a virtual machine
http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/launch-instance-cinder.html
Horizon was installed earlier, so cloud drives can be added directly from the dashboard. You can also add one from the command line by following the official documentation.
If the cloud drive appears in the web management interface, it was added successfully.
Check that the virtual hard disk is visible inside the virtual machine:
$ sudo fdisk -l

Disk /dev/vdb: 1073 MB, 1073741824 bytes
16 heads, 63 sectors/track, 2080 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Format the hard disk and mount it:
$ sudo fdisk /dev/vdb    (create a new primary partition: n, p, then w to write)
$ sudo mkfs.ext4 /dev/vdb1
$ sudo mount /dev/vdb1 /data/
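Note that this mount does not persist across reboots. As a sketch, assuming the /data mount point from the example above, a line like the following in /etc/fstab would make it permanent:

```
/dev/vdb1  /data  ext4  defaults  0  0
```

In practice, mounting by filesystem UUID (shown by blkid) is more robust than a /dev/vdb1 path, since virtio device names can change when disks are added or removed.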
A cloud drive can be attached to a running virtual machine. However, dynamically growing or shrinking a disk is not recommended, as it can lead to data loss. In real production environments it is also best to avoid Cinder's more complex features.
This article is from the "Trying" blog; please keep the source: http://tryingstuff.blog.51cto.com/4603492/1868674
OpenStack Build (iii)