The previous section described the architecture of Cinder: how the Cinder components are designed and how they work together.
How the cinder-* Sub-services Work Together: the Volume Creation Process
Volume creation is a very good scenario for learning Cinder because it involves every cinder-* sub-service. The flowchart below shows the process.
The client (an OpenStack end user or another program) sends a request to the API (cinder-api): "Create a volume for me."
After cinder-api does some necessary processing of the request, it sends a message to Messaging (RabbitMQ): "Let the Scheduler create a volume."
The Scheduler (cinder-scheduler) gets the API's message from Messaging, runs its scheduling algorithm, and selects node A from among several storage nodes.
The Scheduler sends a message to Messaging: "Let storage node A create this volume."
On storage node A, Volume (cinder-volume) gets the Scheduler's message from Messaging and then creates the volume on the volume provider through its driver.
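The steps above can be sketched as a simplified message flow. This is a toy model, not real Cinder code: the function names and the queue standing in for RabbitMQ are all illustrative.

```python
# A highly simplified sketch of the create-volume message flow.
# All names here (api_create_volume, scheduler_run, ...) are made up
# for illustration; a queue.Queue stands in for RabbitMQ.
import queue

messaging = queue.Queue()  # stands in for Messaging (RabbitMQ)

def api_create_volume(size_gb):
    """cinder-api: validate the request, then hand it to the Scheduler via Messaging."""
    assert size_gb > 0, "invalid size"
    messaging.put({"to": "scheduler", "action": "create_volume", "size": size_gb})

def scheduler_run(nodes):
    """cinder-scheduler: pick a storage node and forward the task to it."""
    msg = messaging.get()
    node = min(nodes, key=lambda n: n["used_gb"])  # toy scheduling policy
    messaging.put({"to": node["name"], "action": "create_volume", "size": msg["size"]})

def volume_run():
    """cinder-volume on the chosen node: create the volume via its driver."""
    msg = messaging.get()
    return f"volume of {msg['size']}GB created on {msg['to']}"

api_create_volume(1)
scheduler_run([{"name": "node-A", "used_gb": 10}, {"name": "node-B", "used_gb": 50}])
result = volume_run()
print(result)  # node-A is less used, so it gets the task
```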
These are the core steps of creating a volume. Many details are omitted here; we will discuss them in later chapters.
The Design Philosophy of Cinder
Cinder carries on the design philosophy of Nova and the other OpenStack components.
API Front-end Service
cinder-api is the sole entry point to the Cinder component, exposing to clients the functionality Cinder provides. When a client needs to perform any volume-related operation, it can only send a REST request to cinder-api. Clients here include end users, command-line tools, and other OpenStack components.
The benefits of designing an API front-end service are:
It provides a unified external interface and hides implementation details
The API offers standard REST services, making integration with third-party systems easy
High availability is easy to achieve by running multiple API service instances, for example multiple cinder-api processes
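As a minimal sketch of the front-end idea, the WSGI handler below exposes one uniform REST endpoint while hiding how the volume actually gets created. This is purely illustrative and not real cinder-api code; the path and response body are assumptions.

```python
# Minimal WSGI sketch of an API front-end: a single uniform REST entry
# point that hides the backend details. Not real cinder-api code.
import json

def volume_api_app(environ, start_response):
    if environ["REQUEST_METHOD"] == "POST" and environ["PATH_INFO"] == "/volumes":
        # Real cinder-api would publish a message to RabbitMQ here;
        # this sketch just acknowledges the request.
        body = json.dumps({"status": "creating"}).encode()
        start_response("202 Accepted", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

# Exercise the app directly, without going over the network:
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

resp = volume_api_app({"REQUEST_METHOD": "POST", "PATH_INFO": "/volumes"},
                      fake_start_response)
print(captured["status"])  # 202 Accepted
```

Because the interface is plain REST, any client (CLI, dashboard, third-party system) can talk to it the same way, and running several such processes behind a load balancer gives high availability.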
Scheduler Scheduling Service
Cinder can have multiple storage nodes. When a volume needs to be created, cinder-scheduler selects the most suitable node based on each storage node's properties and resource usage, and creates the volume there.
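The general idea can be sketched as a filter-and-weigh pass: first drop nodes that cannot serve the request, then rank the rest. The node attributes and the weighing rule below are made up for illustration, not the actual cinder-scheduler algorithm.

```python
# Sketch of filter-and-weigh scheduling, the general idea behind
# cinder-scheduler. Node attributes and the policy are illustrative.
def schedule(nodes, requested_gb):
    # Filter: drop storage nodes that cannot hold the volume.
    candidates = [n for n in nodes if n["free_gb"] >= requested_gb]
    if not candidates:
        raise RuntimeError("no valid storage node")
    # Weigh: prefer the node with the most free capacity.
    return max(candidates, key=lambda n: n["free_gb"])

nodes = [
    {"name": "node-A", "free_gb": 500},
    {"name": "node-B", "free_gb": 80},
    {"name": "node-C", "free_gb": 20},
]
best = schedule(nodes, 100)
print(best["name"])  # node-A
```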
The scheduling service works like a project manager in a development team: when a new development task comes in, the project manager assigns it to the most suitable engineer based on the task's difficulty and each team member's current workload and skill level.
Worker Service
The scheduling service only assigns tasks; the Worker service is what actually carries them out. In Cinder, this Worker is cinder-volume. This division of labor between Scheduler and Worker makes OpenStack very easy to scale:
When storage resources are insufficient, you can add storage nodes (more Workers); when client requests are too numerous to schedule in time, you can add Schedulers.
Driver Framework
OpenStack is an open Infrastructure-as-a-Service cloud operating system that supports the industry's leading technologies, whether open source and free or commercial and paid. This open architecture keeps OpenStack technologically advanced and competitive while avoiding vendor lock-in. Where does this openness of OpenStack show itself? One important aspect is its Driver-based framework.
Take Cinder as an example: storage nodes support a variety of volume providers, including LVM, NFS, Ceph, and GlusterFS, as well as commercial storage systems from vendors such as EMC and IBM. cinder-volume defines a unified driver interface for these volume providers; a volume provider only needs to implement this interface to plug into OpenStack in the form of a driver. The following shows the architecture of the Cinder driver framework:
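The pattern can be sketched as a common base class plus pluggable implementations selected by name. The class names and commands below are illustrative stand-ins, not the real Cinder driver classes.

```python
# Sketch of a driver-based framework: one unified interface, multiple
# pluggable implementations. Class names and commands are illustrative,
# not real Cinder drivers.
from abc import ABC, abstractmethod

class VolumeDriver(ABC):
    """The unified interface every volume provider must implement."""
    @abstractmethod
    def create_volume(self, name, size_gb):
        ...

class LVMDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        return f"lvcreate -L {size_gb}G -n {name} cinder-volumes"

class NFSDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        return f"truncate -s {size_gb}G /mnt/nfs/{name}"

# A registry stands in for the volume_driver config option: swapping
# providers is just choosing a different driver, the caller is unchanged.
DRIVERS = {"lvm": LVMDriver, "nfs": NFSDriver}

driver = DRIVERS["lvm"]()
print(driver.create_volume("vol-1", 1))
```

Because cinder-volume only ever talks to the `VolumeDriver` interface, a new storage backend can be added without touching the rest of Cinder.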
In the cinder-volume configuration file /etc/cinder/cinder.conf, the volume_driver configuration item sets which volume provider the storage node uses. The following example indicates that LVM is used.
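Such an entry might look like the fragment below. The driver class path shown is the LVM driver shipped with Cinder (the exact path can vary across releases), and `cinder-volumes` is a commonly used volume group name, not a requirement:

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
```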
In the next section we will discuss each component of Cinder in detail.
Mastering Cinder's Design Philosophy - 5 Minutes a Day to Play OpenStack (46)