In this section we will explain in detail the various sub-services of Cinder.
Cinder-api
cinder-api is the entry point of the whole Cinder component; all Cinder requests are first handled by cinder-api. cinder-api exposes a set of HTTP REST API endpoints to the outside world. In Keystone we can query the endpoints of cinder-api.
Clients send requests to the addresses given by those endpoints and ask cinder-api to perform operations. Of course, as end users we do not send REST API requests directly; these APIs are used by the OpenStack CLI, the dashboard, and other components that need to interact with Cinder.
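For example, assuming the OpenStack command-line client is installed and admin credentials are loaded, the cinder-api endpoints registered in Keystone can be looked up roughly like this (the service name may appear as cinder, cinderv2 or cinderv3 depending on the API version deployed):

    # List the endpoints Keystone has registered for the Block Storage service
    openstack endpoint list --service cinder
    # Or inspect the whole service catalog
    openstack catalog list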
cinder-api handles the HTTP API requests it receives as follows:
Check whether the parameters passed in by the client are valid
Invoke other Cinder sub-services to process the client's request
Serialize the results returned by the other Cinder sub-services and send them back to the client
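Purely as an illustration, a raw REST call to cinder-api could look like the sketch below; the token, host name and project ID are placeholders, and in practice the CLI or SDK builds such requests for us:

    # Hypothetical example: list a project's volumes through the cinder-api REST API.
    # $TOKEN, "controller" and <project_id> are placeholders.
    curl -s -H "X-Auth-Token: $TOKEN" \
         http://controller:8776/v2/<project_id>/volumes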
What requests does cinder-api accept? Simply put, cinder-api can handle any operation related to the volume life cycle. Most of these operations can be seen in the dashboard.
Open the volume management page.
Click the drop-down arrow; the list shows the operations that cinder-api can perform.
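The same operations are also available from the command line; for instance, if the cinder CLI is installed, it lists every sub-command it supports, each of which ends up as a request to cinder-api:

    # Show the operations the cinder CLI, and hence cinder-api, supports
    cinder help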
Cinder-scheduler
When a volume is created, cinder-scheduler selects the most suitable storage node based on criteria such as capacity and volume type, and then has that node create the volume.
There is quite a lot to say about the scheduler, so we will discuss it separately later.
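As a quick preview, the scheduler's behaviour is controlled by options in /etc/cinder/cinder.conf; on a default installation the relevant lines typically look roughly like the following (exact values depend on the Cinder release and deployment):

    # Scheduler-related options in /etc/cinder/cinder.conf (typical defaults)
    scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
    scheduler_default_weighers = CapacityWeigher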
Cinder-volume
cinder-volume runs on the storage node. OpenStack operations on volumes are ultimately handed to cinder-volume. cinder-volume itself does not manage the actual storage devices; those are managed by the volume provider. cinder-volume works together with the volume provider to implement volume life-cycle management.
supports multiple volume providers through the Driver architecture
The question is: there are many block storage products and solutions (volume providers) on the market. How does cinder-volume work with all of them?
The answer is the Driver architecture we discussed earlier. cinder-volume defines a unified interface for volume providers; a volume provider only needs to implement this interface to be plugged into the OpenStack system as a Driver. Below is the architecture of the Cinder Driver:
In the directory /opt/stack/cinder/cinder/volume/drivers/ of the OpenStack source code we can see that drivers for many volume providers are already included:
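For example, listing that directory on a devstack node shows drivers for LVM, NFS, Ceph RBD, GlusterFS and a number of vendor arrays (the exact set varies by Cinder release):

    # List the volume drivers shipped with the Cinder source tree
    ls /opt/stack/cinder/cinder/volume/drivers/
    # Typical entries (varies by release): lvm.py  nfs.py  rbd.py  glusterfs.py  emc/  netapp/ ...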
The storage node specifies which driver to use with the volume_driver option in the configuration file /etc/cinder/cinder.conf:
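In an LVM-based devstack environment like the one used in this series, the setting typically looks roughly like this (the exact driver class and volume group name depend on the release, and in multi-backend setups these options live under a backend section):

    # /etc/cinder/cinder.conf (excerpt, typical devstack LVM settings)
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = stack-volumes-lvmdriver-1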
Here LVM is the volume provider we use.
periodically reports the storage node's status to OpenStack
cinder-scheduler uses CapacityFilter and CapacityWeigher, both of which rely on the storage node's free capacity. This raises a question: how does Cinder know the free capacity of each storage node?
The answer is: cinder-volume reports it to Cinder periodically.
From the cinder-volume log /opt/stack/logs/c-vol.log we can see that cinder-volume regularly reports the current storage node's resource usage.
Because our experimental environment uses LVM on the storage node, the log above shows the storage node obtaining the LVM capacity usage through the vgs and lvs commands.
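We can run the same commands by hand to see what cinder-volume reports (the volume group shown will be whatever devstack created and may differ in your environment):

    # Show the size and free space of the LVM volume groups on the storage node
    sudo vgs
    # Show the logical volumes (i.e. the Cinder volumes) carved out of them
    sudo lvs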
implements volume life-cycle management
Cinder's management of the volume life cycle is ultimately carried out by cinder-volume, including operations such as create, extend, attach, snapshot and delete, which we will discuss in detail later.
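As a taste of what is coming, the classic cinder CLI walks a volume through this life cycle with commands along these lines (the names and sizes are made up, and some flag names vary slightly between client versions):

    # Create a 1 GB volume
    cinder create --name vol1 1
    # Extend it to 2 GB (the volume must be detached)
    cinder extend vol1 2
    # Take a snapshot
    cinder snapshot-create --name snap1 vol1
    # Delete the volume when it is no longer needed
    cinder delete vol1
    # Attaching to an instance is done through Nova, e.g.:
    #   nova volume-attach <server> <volume-id>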
In the next section we will discuss in detail how cinder-scheduler selects the appropriate cinder-volume (storage node).