Understanding Ceph: an open source distributed storage system for OpenStack cloud computing environments
Keywords: cloud computing, open source, OpenStack, Ceph, distributed storage systems
Ceph is an open source, unified, distributed storage system that provides an easy way to deploy low-cost, scalable storage platforms on commodity hardware. Learn how to create a Ceph cluster that serves object, block, and file storage from a single point, how Ceph's placement algorithm and replication mechanisms work, and how to integrate the cluster with your cloud data architecture. The author proposes a simple and powerful method for integrating a Ceph cluster into the OpenStack ecosystem.
Ceph is a POSIX-compliant (Portable Operating System Interface for UNIX®), open source distributed storage system released under the GNU Lesser General Public License. Originally developed by Sage Weil in 2007, the project was founded on the idea of building a cluster without any single point of failure, ensuring permanent data replication across cluster nodes.
As in any classic distributed file system, files placed in the cluster are striped into objects, which a pseudo-random data-distribution algorithm called Controlled Replication Under Scalable Hashing (CRUSH) places across the cluster nodes.
Ceph is an interesting storage alternative thanks to some of the concepts it implements, such as metadata partitioning and placement groups (aggregating a series of objects into a group and then mapping the group to a set of object storage daemons (OSDs)).
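The placement-group idea can be sketched in a few lines: an object name is hashed to a placement group (PG), and each PG is deterministically mapped to a set of OSDs that hold its replicas. The sketch below is a toy stand-in, not the real CRUSH algorithm (which walks a hierarchical cluster map); the PG_NUM, OSDS, and REPLICAS values are assumptions for illustration.

```python
import hashlib

# Toy sketch of Ceph-style placement (NOT the real CRUSH algorithm):
# object name -> placement group (PG) -> replica set of OSDs.

PG_NUM = 8               # placement groups in the pool (assumed)
OSDS = [0, 1, 2, 3, 4]   # hypothetical OSD ids in the cluster
REPLICAS = 3             # copies kept of each object

def object_to_pg(name: str) -> int:
    """Hash an object name to a placement group id."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % PG_NUM

def pg_to_osds(pg: int) -> list:
    """Deterministically pick REPLICAS distinct OSDs for a PG."""
    start = pg % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(REPLICAS)]

pg = object_to_pg("my-object")
print("pg:", pg, "osds:", pg_to_osds(pg))
```

Because both mappings are deterministic functions of the object name and the cluster layout, any client can compute an object's location locally, with no central lookup table — this is the property that lets Ceph avoid a single point of failure on the data path.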
These features enable automatic scaling, recovery, and self-management of the cluster. Ceph provides the following bindings (at different levels) for interacting with the cluster:
- The Reliable Autonomic Distributed Object Store (RADOS) gateway is a RESTful interface through which your applications can store objects directly in the cluster.
- The librados library is a convenient way to access RADOS, with support for the PHP, Ruby, Java™, Python, and C++ programming languages.
- The Ceph RADOS Block Device (RBD) is a fully distributed block device, available via a Linux® kernel module and a Quick EMUlator (QEMU)/Kernel-based Virtual Machine (KVM) driver.
- Native CephFS is a distributed file system with full support for Filesystem in Userspace (FUSE).
As shown in Figure 1, the Ceph ecosystem can be decomposed into five components:
- The librados library
- The RADOS gateway
- RBD
- CephFS
- The cluster of nodes
Figure 1. Ceph Ecosystem
The Ceph ecosystem natively supports many ways of interacting with it, making it easy and convenient to integrate into a running infrastructure, even though it performs the complex task of providing block, object, and file storage functionality in a single unified system.
Let's take a look at the parts that make up Ceph and the role each plays.
RADOS Object Storage
Figure 1 shows that RADOS object storage is the foundation of the storage cluster. For every operation performed through the various clients or gateways (RADOSGW, RBD, or CephFS), data enters RADOS and can be read back from it. Figure 2 shows the RADOS cluster, which contains two kinds of daemons: the Ceph object storage daemon (OSD) and the Ceph monitor, which maintains the master copy of the cluster map.
Figure 2. RADOS object storage
The cluster map describes the physical location of object blocks and a list of buckets that aggregate devices into physical locations. The map is driven by Ceph's placement algorithm, CRUSH, which models logical positions onto physical locations. Figure 3 depicts the "pools" within the cluster: the logical partitions where objects are stored. Each pool is dynamically mapped to OSDs.
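The bucket idea can be illustrated with a small simulation. The layout below is an assumption (three hypothetical hosts, two OSDs each, hosts as the failure domain); real CRUSH maps are hierarchical (root, rack, host, device) and configurable, but the principle is the same: replica selection walks the bucket tree so that copies land in different failure domains.

```python
import hashlib

# Toy cluster map (assumed layout): buckets aggregate devices into
# physical locations. Here the failure domain is the host, so the
# three replicas of an object must land on three different hosts.

CLUSTER_MAP = {
    "host-a": ["osd.0", "osd.1"],
    "host-b": ["osd.2", "osd.3"],
    "host-c": ["osd.4", "osd.5"],
}

def choose_replicas(obj: str, count: int = 3) -> list:
    """Deterministically pick one OSD per host, so replicas never share a host."""
    h = int(hashlib.sha1(obj.encode()).hexdigest(), 16)
    hosts = sorted(CLUSTER_MAP)
    picked = []
    for i in range(count):
        host = hosts[(h + i) % len(hosts)]   # rotate across failure domains
        osds = CLUSTER_MAP[host]
        picked.append(osds[h % len(osds)])   # pick a device within the host
    return picked

print(choose_replicas("vm-disk-0001"))
```

Spreading replicas across buckets is what makes a whole-host (or, in a deeper map, whole-rack) failure survivable: at most one copy of any object is lost.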
Figure 3. RADOS placement grouping
Now let's look first at the OSD daemons, then at the monitors, and finally at the Ceph metadata server, which belongs to the CephFS distributed file system.