Starting from this section, we learn about OpenStack's Block Storage service, Cinder.
Understanding Block Storage
There are generally two ways for an operating system to obtain storage space:
Mount a raw hard drive via some protocol (SAS, SCSI, SAN, iSCSI, etc.), then partition and format it to create a file system, or use the raw disk directly to store data (as a database might)
Mount a remote file system via NFS, CIFS, or other protocols
The first type, the raw hard drive, is called block storage, and each raw disk is usually referred to as a volume. The second type is called file system storage. NAS and NFS servers, as well as various distributed file systems, provide this kind of storage.
Understanding Block Storage Service
The Block Storage Service manages volumes throughout their life cycle, from creation to deletion.
From the instance's point of view, each mounted volume is a hard disk.
OpenStack's Block Storage Service is Cinder. Its specific functions are:
Provide a REST API so that users can query and manage volumes, volume snapshots, and volume types
Provide a scheduler that dispatches volume-creation requests and allocates storage resources sensibly
Support multiple back-end storage options through a driver architecture, including LVM, NFS, Ceph, and commercial storage products from vendors such as EMC and IBM
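The driver architecture mentioned above can be sketched as follows. This is a hypothetical illustration, not Cinder's real driver API: the point is that cinder-volume programs against one abstract interface, and each back end (LVM, NFS, Ceph, a commercial array) plugs in its own implementation.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a driver architecture like Cinder's.
# Class and method names here are invented for explanation only.

class VolumeDriver(ABC):
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str:
        """Allocate storage on the back end and return a volume id."""

class LVMDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        # A real LVM driver would run something like:
        #   lvcreate -L <size>G -n <name> <volume-group>
        return f"lvm:{name}:{size_gb}G"

class NFSDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        # A real NFS driver would create a file on an NFS share
        # and expose it to the instance as a block device.
        return f"nfs:{name}:{size_gb}G"

def create(driver: VolumeDriver, name: str, size_gb: int) -> str:
    # The managing service only talks to the abstract interface,
    # so new back ends can be added without changing this code.
    return driver.create_volume(name, size_gb)

print(create(LVMDriver(), "vol1", 1))
print(create(NFSDriver(), "vol1", 1))
```

Swapping the back end is then just a matter of configuration: choose a different driver class, and the rest of the service is unchanged.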
Cinder Architecture
Below is the logical architecture diagram of Cinder.
Cinder contains the following components:
Cinder-api
Receives API requests and invokes cinder-volume to perform the operations.
Cinder-volume
Manages volumes: it coordinates with the volume provider and manages the volume life cycle. The node that runs the cinder-volume service is called a storage node.
Cinder-scheduler
The scheduler uses a scheduling algorithm to select the most suitable storage node on which to create the volume.
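The scheduler's job can be sketched as a filter-then-weigh decision. The node list and the free-capacity weigher below are invented for illustration; they are not Cinder's actual scheduler code, but they capture the idea of picking the most suitable storage node.

```python
# Hypothetical sketch of scheduling a volume-creation request:
# first filter out nodes that cannot hold the volume, then pick
# the best remaining node (here: the one with the most free space).

nodes = [
    {"host": "storage1", "free_gb": 50},
    {"host": "storage2", "free_gb": 200},
    {"host": "storage3", "free_gb": 10},
]

def schedule(nodes, requested_gb):
    # Filter phase: keep only nodes with enough free capacity.
    candidates = [n for n in nodes if n["free_gb"] >= requested_gb]
    if not candidates:
        raise RuntimeError("no valid host found")
    # Weigh phase: prefer the node with the most free capacity.
    return max(candidates, key=lambda n: n["free_gb"])["host"]

print(schedule(nodes, 20))
```

Here a 20 GB request lands on storage2, which has the most free space among the nodes that passed the filter.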
Volume provider
The data storage device that provides the physical storage space for volumes. cinder-volume supports a variety of volume providers and coordinates with each one through that provider's driver.
Message Queue
Cinder's sub-services communicate and collaborate with each other through the message queue. Because the message queue decouples the sub-service implementations, this loosely coupled structure is an important characteristic of a distributed system.
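The decoupling effect of the message queue can be shown with a minimal sketch. The function names are invented for illustration: the API side only publishes a message and returns, while a worker picks the message up whenever it is ready; neither side calls the other directly.

```python
import queue
import threading

# Minimal sketch of message-queue decoupling between two services.
# "cinder_api" and "cinder_volume" here are illustrative stand-ins.

mq = queue.Queue()
results = []

def cinder_api_create_volume(name, size_gb):
    # The API side does not call the volume service directly;
    # it just publishes a request message and can return at once.
    mq.put({"action": "create", "name": name, "size_gb": size_gb})

def cinder_volume_worker():
    # The volume side consumes requests at its own pace.
    msg = mq.get()
    results.append(f"created {msg['name']} ({msg['size_gb']} GB)")
    mq.task_done()

worker = threading.Thread(target=cinder_volume_worker)
worker.start()
cinder_api_create_volume("vol1", 1)
worker.join()
print(results[0])
```

Because the two sides only share the queue, either one can be restarted, scaled out, or moved to another node without the other noticing.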
Database
Cinder needs to store some data in a database, generally MySQL. The database is installed on the control node; in our experimental environment, for example, it is a database named "cinder".
Physical Deployment Scenarios
Cinder services are deployed on two types of nodes: control nodes and storage nodes. Let's see which cinder-* sub-services are running on the control node devstack-controller.
It is reasonable that cinder-api and cinder-scheduler are deployed on the control node.
As for cinder-volume also being on the control node, some readers may be confused: shouldn't cinder-volume be deployed on a storage node?
To answer this question, first be clear about one fact: OpenStack is a distributed system in which each sub-service can be deployed anywhere, as long as the network can reach it.
Any node running cinder-volume is a storage node, and of course other OpenStack services can also run on that node.
cinder-volume is the storage node's hat and cinder-api is the control node's hat. In our environment, devstack-controller wears both hats at the same time, so it is both a control node and a storage node. Of course, we can also run cinder-volume on a dedicated node.
This once again demonstrates the deployment flexibility of OpenStack's distributed architecture: all services can be placed on a single physical machine as an all-in-one test environment, or deployed across multiple physical machines in production for better performance and high availability.
RabbitMQ and MySQL are usually placed on the control node.
Alternatively, you can run cinder service-list to see which nodes the cinder-* sub-services are distributed on.
One more question: where should the volume provider go?
Generally speaking, the volume provider is independent. cinder-volume communicates with the volume provider through a driver and coordinates the work, so only the driver needs to live alongside cinder-volume. The cinder-volume source code directory contains many drivers, supporting different volume providers.
In the following sections we will discuss using cinder-volume with two volume providers, LVM and NFS; for other volume providers, see the OpenStack configuration documentation.
In the next section we will discuss how these Cinder components work together.
Understanding the Cinder Architecture - 5 Minutes a Day to Play with OpenStack (45)