How to integrate the Ceph storage cluster into the OpenStack cloud

Learn about Ceph, an open source distributed storage system that enhances your OpenStack environment

Ceph is an open-source, POSIX-compliant (Portable Operating System Interface) distributed storage system released under the GNU Lesser General Public License (LGPL). Originally developed by Sage Weil in 2007, the project was founded on the idea of building a cluster with no single point of failure and permanent data replication across cluster nodes.

As in any classic distributed file system, files placed in the cluster are striped, and a pseudo-random data distribution algorithm called Controlled Replication Under Scalable Hashing (CRUSH) places the resulting pieces across the cluster nodes.

Ceph is an interesting storage alternative thanks to some of the concepts it implements, such as metadata partitioning and placement groups (a placement group aggregates a series of objects, and the group is then mapped to a set of Object Storage Daemons (OSDs)).
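
To make the placement-group idea concrete, here is a deliberately simplified Python sketch of the concept (this is not Ceph's actual CRUSH code, and the pool size, OSD count, and replica count are made-up values): an object name is hashed to a placement group, and the placement group is then deterministically mapped to a small set of OSDs.

    # A toy model of placement groups -- NOT Ceph's real CRUSH algorithm.
    import hashlib

    PG_NUM = 64             # placement groups in the pool (illustrative)
    OSDS = list(range(12))  # pretend the cluster has 12 OSDs, numbered 0..11
    REPLICAS = 3            # copies kept of every object

    def object_to_pg(name):
        # Hash the object name and reduce it to one of the pool's placement groups.
        return int(hashlib.md5(name.encode()).hexdigest(), 16) % PG_NUM

    def pg_to_osds(pg):
        # Stand-in for CRUSH: rank OSDs by a deterministic pseudo-random score
        # derived from the placement-group id and keep the first REPLICAS of them.
        ranked = sorted(OSDS, key=lambda osd: hashlib.md5(f"{pg}-{osd}".encode()).hexdigest())
        return ranked[:REPLICAS]

    pg = object_to_pg("hello_object")
    print("object -> PG", pg, "-> OSDs", pg_to_osds(pg))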

These features enable automatic scaling, recovery, and self-management of the cluster. Ceph provides the following bindings (at different levels) for interacting with your Ceph cluster:

The Reliable Autonomic Distributed Object Store (RADOS) gateway is a RESTful interface through which your applications can store objects directly in the cluster.

The librados library is a convenient way to access RADOS, with support for the PHP, Ruby, Java, Python, and C++ programming languages (a minimal Python sketch follows this list).

The Ceph RADOS Block Device (RBD) is a fully distributed block device accessed through a Linux kernel module or a Quick Emulator (QEMU)/Kernel-based Virtual Machine (KVM) driver.

CephFS is a native distributed file system that fully supports Filesystem in Userspace (FUSE).
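
As a minimal illustration of the librados binding, the sketch below uses the official Python bindings (the python3-rados package). It assumes a reachable cluster, a readable /etc/ceph/ceph.conf with a valid keyring, and an existing pool; the pool and object names here are placeholders.

    # Store and read back one object through librados (python3-rados assumed).
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('data')  # I/O context on the placeholder pool "data"
        try:
            ioctx.write_full('hello_object', b'Hello from librados')  # store an object
            print(ioctx.read('hello_object'))                         # read it back
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()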

As shown in Figure 1, the Ceph ecosystem can be decomposed into 5 components:

The librados library

The RADOS gateway

RBD

CephFS

Various nodes in the cluster

Figure 1. The Ceph ecosystem

The Ceph ecosystem natively supports many ways of interacting with it, which makes it easy and convenient to integrate into a running infrastructure, even though it performs the complex task of providing block and object storage functionality within a single unified project.
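
To show the block-storage side in practice, here is a short sketch using the Python rbd bindings (python3-rbd) together with rados. It assumes the same cluster access as in the previous sketch plus an existing pool named rbd; the image name and size are placeholders.

    # Create a small RBD image, write to it, and read the data back.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')                        # placeholder pool
        try:
            rbd.RBD().create(ioctx, 'demo-image', 1 * 1024**3)   # 1 GiB image
            image = rbd.Image(ioctx, 'demo-image')
            try:
                image.write(b'hello block device', 0)            # write at offset 0
                print(image.read(0, 18))                         # read those 18 bytes back
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()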

Let's take a look at the components that make up Ceph and the role each one plays.

RADOS Object Storage

Figure 1 shows that RADOS object storage is the foundation of the storage cluster. Every operation performed through any of the clients or gateways (RADOSGW, RBD, or CephFS) writes data to, or reads data from, RADOS. Figure 2 shows the RADOS cluster, which contains two kinds of daemons: the Ceph Object Storage Daemon (OSD) and the Ceph Monitor, which maintains the master copy of the cluster map.

Figure 2. RADOS object storage
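
The same Python rados bindings can also query the monitors for overall cluster state. A small sketch, assuming admin access to the cluster; the mon_command call sends the JSON form of the status query that the ceph status command uses:

    # Ask the monitors for usage counters and the cluster status.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()                  # overall usage counters
        print(stats['kb_used'], 'KB used of', stats['kb'], 'KB')
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({'prefix': 'status', 'format': 'json'}), b'')
        if ret == 0:
            print(json.loads(outbuf).get('health'))          # health section of the status
    finally:
        cluster.shutdown()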

The cluster map describes the physical locations of object chunks and a list of "buckets" that aggregate devices into physical locations. The map is driven by Ceph's CRUSH placement algorithm, which maps logical placement onto physical locations. Figure 3 depicts the "pools" within the cluster, the logical partitions in which objects are stored; each pool is dynamically mapped to a set of OSDs.

Figure 3. RADOS placement grouping
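
To see pools and placement in action, the sketch below lists the pools and then asks the monitors where a given object lands, roughly what the ceph osd map command reports; the pool and object names are placeholders and admin access is assumed.

    # List the pools, then map one object to its placement group and OSDs.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        print(cluster.list_pools())                          # the cluster's logical partitions
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd map', 'pool': 'data',
                        'object': 'hello_object', 'format': 'json'}), b'')
        if ret == 0:
            placement = json.loads(outbuf)
            print(placement.get('pgid'), placement.get('up'), placement.get('acting'))
    finally:
        cluster.shutdown()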

Now let's look at the first of these daemons, the OSD, then the monitor, and finally the Ceph Metadata Server, which belongs to the CephFS distributed file system.
