Ceph provides three storage interfaces: object storage, block storage, and a file system. The following figure shows the architecture of the Ceph storage cluster. We are mainly concerned with block storage: in the second half of the year we will gradually migrate our virtual machine backend storage from SAN to Ceph, even though Ceph is still at version 0.94.
1. Pre-installation preparation
1.1 Introduction to the installation environment
To learn Ceph, it is recommended to set up a ceph-deploy management node and a three-node Ceph storage cluster, as shown in the figure.
I installed ceph-deploy on node1.
First, three machines were prepared.
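As a sketch of what this looks like from the ceph-deploy node (the hostnames node1, node2, and node3 are assumptions based on the figure; adjust them to your own environment):

# Define a new cluster with node1 as the initial monitor
ceph-deploy new node1
# Install Ceph packages on all three nodes
ceph-deploy install node1 node2 node3
# Create the initial monitor(s) and gather the keys
ceph-deploy mon create-initial
# Prepare and activate one OSD each on node2 and node3 (directories illustrative)
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
# Distribute the config file and admin keyring to every node
ceph-deploy admin node1 node2 node3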
Contents
1. Design a Ceph cluster
3. Configure the Ceph cluster
4. Enable Ceph to work
5. Problems encountered during setup
Appendix 1: Modifying the hostname
Appendix 2: Password-less SSH access
Ceph is a relatively new distributed file system developed by the UCSC storage team. It is a network file system
Deployment and installation
This section records the problems I ran into during the whole Ceph installation, along with solutions that proved reliable in my own testing; they do not necessarily reflect everyone else's experience. I was working directly on the servers as root, so I did not deal with any user-permission issues. The machines run CentOS 7.3, the Ceph version installed is Jewel, and only 3 nodes are used for now. Node IP, name, and role: 10.0.1.92 e10
Ceph in Docker! Principle: Running Ceph in Docker is a controversial topic, and many people question the point of doing so. While the monitors, metadata servers, and RADOS gateway are not much of a problem to containerize, things become tricky for the OSD (Object Storage Daemon). The Ceph OSD is optimized for physical machines and has many ties to the underlying
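For the daemons that containerize easily, a single docker run per daemon is roughly all it takes. A minimal monitor sketch using the community ceph/daemon image; the IP and network values are placeholders for illustration:

docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.20 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon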
In the article "using Ceph RBD to provide storage volumes for kubernetes clusters," we learned that one step in the integration process for kubernetes and Ceph is to manually create the RBD image below the Ceph OSD pool. We need to find a way to remove this manual step. The first th
Monitors: A Ceph Monitor maintains the cluster map state, including the monitor map, OSD map, Placement Group (PG) map, and CRUSH map. Ceph also keeps a history of every state change in the Ceph Monitors, Ceph OSD Daemons, and PGs (each version of this history is called an "epoch").
MDSs: Metadata is stored by the Ceph Metadata Server (MDS) on behalf of the Ceph file system (block devices and object storage do not use the MDS).
Ceph Clients: Most Ceph users do not store objects directly in the Ceph storage cluster; they typically use one or more of the Ceph Block Device, the Ceph File System, and Ceph Object Storage. Block device:
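To make the block-device path concrete, a minimal sketch, assuming a pool named rbd and a client holding the admin keyring:

rbd create rbd/test-image --size 4096   # 4 GiB image in the rbd pool
rbd map rbd/test-image                  # kernel client exposes it, e.g. as /dev/rbd0
mkfs.ext4 /dev/rbd0                     # put a file system on it
mount /dev/rbd0 /mnt                    # use it like any other disk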
Ceph monitoring: installing ceph-dash
There are many Ceph monitoring tools, such as Calamari or Inkscope. When I first tried to install those, they all failed, and then ceph-dash caught my eye. Based on the official description of ceph-dash, I personally think it is
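ceph-dash is a small Flask application that reads the cluster state through librados. A minimal sketch of trying it out, assuming the node already has /etc/ceph/ceph.conf and a usable keyring:

git clone https://github.com/Crapworks/ceph-dash.git
cd ceph-dash
# Needs Flask and the Python rados bindings installed
./ceph-dash.py
# The built-in server should then answer on port 5000 (Flask's default)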
Today I configured Ceph, drawing on several documents, including the official configuration guide: http://docs.ceph.com/docs/master/rados/configuration/ceph-conf/#the-configuration-file
as well as another expert's blog: http://my.oschina.net/oscfox/blog/217798
and http://www.kissthink.com/archive/c-e-p-h-2.html, among others.
Overall, the single-node configuration did not hit any pitfalls; the multi-node setup, however, did.
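For reference, a minimal ceph.conf along the lines of the official guide cited above; the fsid, hostname, and addresses are placeholders:

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.11
public network = 192.168.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 2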
…so as to simplify deployment and O&M while meeting the needs of different applications. In Ceph, "distributed" means that the system has a truly decentralized structure and no theoretical limit to its scalability.
The first three articles cover the background; starting from the fourth, Zhang Yu introduces the structure of Ceph.
The core of Ceph
Ceph Cluster Expansion
The previous article described how to create a cluster with the following structure; this article describes how to expand it.
IP               Hostname       Description
192.168.40.106   dataprovider   Deployment management node
192.168.40.107   mdsnode        MON node
192.168.40.108   osdnode1       OSD node
192.168.40.14
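Driving the expansion from the dataprovider node with ceph-deploy might look like the following sketch (osdnode2 is a hypothetical new host, since the last table row is truncated, and the OSD data directory is illustrative):

# Install Ceph on the new host and add one OSD there
ceph-deploy install osdnode2
ceph-deploy osd prepare osdnode2:/var/local/osd1
ceph-deploy osd activate osdnode2:/var/local/osd1
# A monitor can be added to an existing host the same way
ceph-deploy mon add osdnode1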
1. Environment and description
Deploy ceph-0.87 on Ubuntu 14.04 server, set up rbdmap to map and unmap RBD block devices automatically, and export the RBD blocks over iSCSI with a tgt build that has RBD support.
2. Installing Ceph
1) Configure hostnames and password-less login
[email protected]:/etc/ceph# cat /etc/hosts
127.0.0.1 localhost
192.168.108.4 osd2.osd2 osd2
192.168.108.3 osd1.osd1 osd1
192.168.108.2 mon0.mon0 mon0
# An example follows
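A sketch of the two pieces just mentioned, with illustrative pool, image, and target names: an entry in /etc/ceph/rbdmap gets the image mapped automatically at boot, and a tgt target with the rbd backing-store type exports it over iSCSI (this requires tgt compiled with RBD support):

# /etc/ceph/rbdmap -- pool/image followed by map options
rbd/iscsi-image01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/tgt/conf.d/ceph.conf
<target iqn.2015-04.com.example:rbd-iscsi>
    driver iscsi
    bs-type rbd
    backing-store rbd/iscsi-image01
</target>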
If you reprint this, please credit the author and the original address: http://xiaoquqi.github.io/blog/2015/06/28/ceph-performance-optimization-summary/. I have been busy with Ceph storage optimization and testing recently and have read a lot of material, but there does not seem to be an article that explains the methodology, so I would like to summarize it here. Much of the content is not my own original work; it is a summary.
…objects from the replicated pool to the erasure-coded pool); of course, it can also be adjusted in the opposite direction. 5. The erasure-coded pool is designed for cold data: it suits slow hardware and infrequently accessed data, while the replicated pool is designed for fast devices and fast access.
3.4.2 Inexpensive multi-data-center storage
Ten data centers linked by a dedicated network, each with the same amount of storage space, but with no power backup and no air cooling. Create such
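The replicated-to-erasure-coded movement described above is what Ceph's cache tiering automates; a minimal sketch with illustrative pool names and PG counts:

ceph osd pool create ecpool 128 128 erasure      # cold, erasure-coded base pool
ceph osd pool create cachepool 128 128           # fast, replicated cache pool
ceph osd tier add ecpool cachepool               # attach the cache tier to the base pool
ceph osd tier cache-mode cachepool writeback     # let objects migrate in both directions
ceph osd tier set-overlay ecpool cachepool       # send client I/O through the cache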
Ceph environment setup (2)
1. The layout has three hosts: node1, node2, and node3. Each host has three OSDs, as shown in the figure; OSD 1 and OSD 8 are SSD disks, and OSD 4 is a SATA disk. Each of the three hosts runs a monitor and an MDS. We use OSDs 1, 3, and 4 to create a pool named ssd with three replicas, and OSDs 0, 2, and 4 to build a pool named sata using erasure coding with k = 2, m = 1, that is, two OSDs store the data chunks and one OSD stores the coding chunk.
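Creating the two pools described here could look roughly like this (PG counts are illustrative, and the CRUSH rules that actually pin each pool to the SSD or SATA OSDs are omitted):

# Replicated pool with three copies for the SSD OSDs
ceph osd pool create ssd 128 128 replicated
ceph osd pool set ssd size 3
# Erasure-code profile: 2 data chunks, 1 coding chunk
ceph osd erasure-code-profile set ec21 k=2 m=1
ceph osd pool create sata 128 128 erasure ec21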
As an architect in the storage industry, I have a special liking for file systems. These systems are the user interfaces to storage systems. Although they all tend to offer a similar set of features, they can also differ significantly. Ceph is no exception: it offers some of the most interesting features you will find in a file system.
Ceph began as a PhD research project on storage systems by Sage Weil at the University of California, Santa Cruz (UCSC).