Ceph provides three storage methods: object storage, block storage, and file system. The following figure shows the architecture of a Ceph storage cluster (figure omitted). We are mainly concerned with block storage: in the second half of the year, we will gradually migrate the virtual machine backend storage from SAN to Ceph.
The new monitor must be added to the monitor map (the runtime list of the cluster's monitors), which allows the node to be found when other nodes start. As root, run ceph mon add with the monitor's name and address, for example: ceph mon add osd1 192.168.2.21:6789. Then start the new monitor and it will automatically join the cluster.
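The two steps above (register the monitor in the map, then start its daemon) can be sketched as a short shell sequence. Since ceph mon add needs a live admin node with quorum, the sketch only assembles and prints the commands; the monitor name and address come from the text, while the systemd unit name is an assumption based on how Jewel-era packages name their services.

```shell
# Assemble the monitor-addition steps described above. Printed rather than
# executed: they require a running cluster and the admin keyring.
MON_NAME="osd1"
MON_ADDR="192.168.2.21:6789"
{
  echo "ceph mon add ${MON_NAME} ${MON_ADDR}"   # register in the monitor map
  echo "systemctl start ceph-mon@${MON_NAME}"   # start the daemon; it joins automatically
} | tee /tmp/add_mon_steps.txt
```

On a real admin node you would run the two printed commands directly instead of echoing them.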
Deployment Installation
This section records the problems I ran into while installing Ceph, together with solutions that worked for me (personally tested; they do not necessarily represent everyone's experience). I worked as root on the servers, so I did not deal with any user-permission issues. The machines run CentOS 7.3 and the Ceph release installed is Jewel.
If you already have Ceph installed, you can skip this part. I had not installed it before, or I would not be writing this article... If the installation fails, follow these steps.
Because ceph-dash is written in Python, it failed for me at first due to a missing package: Flask. After installing Flask, running ceph-dash again should be fine; if it is still not OK, then I can't help you.
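The Flask fix above amounts to two commands. A minimal sketch, printed rather than executed because it needs network access, and assuming the usual ceph-dash.py entry point from the ceph-dash repository (an assumption, check your checkout):

```shell
# ceph-dash is a Flask app; install the missing dependency, then re-run it.
{
  echo "pip install Flask"     # pull in the dependency ceph-dash needs
  echo "python ceph-dash.py"   # re-run the dashboard from its checkout (path is an assumption)
} | tee /tmp/ceph_dash_fix.txt
```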
First, pre-installation preparation
1.1 Introduction to installation Environment
It is recommended to install a ceph-deploy management node and a three-node Ceph storage cluster to learn Ceph, as shown in the figure.
I installed ceph-deploy on node1.
First, three machines were prepared; as the commands below show, they were named mon0, osd1, and osd2.
The OSDs use the following directory-and-device pairs: mon0:/cephmp1:/dev/sdf1, mon0:/cephmp2:/dev/sdf2, osd1:/cephmp1:/dev/sdf1, osd1:/cephmp2:/dev/sdf2, osd2:/cephmp1:/dev/sde1, osd2:/cephmp2:/dev/sde2. Then create the metadata servers: ceph-deploy mds create mon0 osd1 osd2. Once installed, you can modify the /etc/ceph/ceph.conf file as needed, and then use the ceph-deploy --overwrite-conf config push osd1 osd2 command to push the modified configuration file to the other hosts. Then restart the daemons for the change to take effect.
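The edit-then-push cycle described above can be sketched as follows. The ceph.conf values here are placeholders (only the mon0 name comes from the text), and the file is written to a scratch path so the sketch can run anywhere; on a real admin node you would edit /etc/ceph/ceph.conf itself and run the push command on the deploy node.

```shell
# Write a minimal example ceph.conf to a scratch path (fsid and addresses
# are placeholders), then record the push command from the article.
CONF=/tmp/ceph.conf
cat > "${CONF}" <<'EOF'
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_initial_members = mon0
mon_host = 192.168.2.21
osd_pool_default_size = 2
EOF
echo "ceph-deploy --overwrite-conf config push osd1 osd2" > /tmp/push_cmd.txt
cat "${CONF}" /tmp/push_cmd.txt
```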
Document directory
1. Design a Ceph Cluster
3. Configure the Ceph Cluster
4. Enable Ceph to work
5. Problems Encountered during setup
Appendix 1 modify hostname
Appendix 2 password-less SSH access
Ceph is a relatively new distributed file system developed by the storage team at UCSC (University of California, Santa Cruz). It is a network file system
Ceph installation and deployment in CentOS7 Environment
Ceph Introduction
Ceph is designed to provide high performance, high scalability, and high availability on low-cost storage media, offering unified storage: file storage, block storage, and object storage. I recently read the relevant documentation and found it interesting. It already provides block storage for OpenStack, which fits the mainstream.
their software. In this process, they also use a variety of tools to build and manage their environments. I wouldn't be surprised to see someone using Kubernetes as a management tool. Some people like to apply the latest technology in production, otherwise the work feels boring. So when they see that their favorite open source storage solution is being containerized, they are happy to go "all in" on containers. Unlike a traditional yum or apt-get installation, container-based delivery works differently.
In the article "Using Ceph RBD to provide storage volumes for Kubernetes clusters," we learned that one step in integrating Kubernetes with Ceph is manually creating the RBD image in the Ceph OSD pool. We need to find a way to remove this manual step.
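One common way to remove that manual step (an assumption here, not necessarily the approach the cited article takes) is a StorageClass backed by the in-tree kubernetes.io/rbd provisioner, which creates the RBD image on demand when a PersistentVolumeClaim appears. The monitor address, pool, and secret names below are illustrative placeholders; the sketch just writes the manifest to a file.

```shell
# A StorageClass using the in-tree RBD provisioner lets Kubernetes create
# the RBD image itself instead of an admin doing it by hand.
cat > /tmp/rbd-storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.2.21:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: rbd
  userId: admin
  userSecretName: ceph-admin-secret
EOF
cat /tmp/rbd-storageclass.yaml
```

Apply it with kubectl apply -f on a cluster that has the Ceph admin secret installed.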
Please indicate the source when reprinting: http://www.cnblogs.com/chenxianpao/p/5878159.html (trot). This article only sketches the general process; I have not yet understood all the details. I will add more when I have time, and if there are errors, please correct me, thank you. One of Ceph's main features is strong consistency, which here mainly refers to end-to-end consistency. As we all know, the traditional end-to-end solution is based on the data
Ceph clients: most Ceph users do not store objects directly in the Ceph storage cluster; they typically choose one or more of the Ceph block device, the Ceph file system, and Ceph object storage. Block device: to practice using the block device...
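As a concrete way to practice the block-device path, the usual RBD workflow is: create an image, map it, make a file system on it, and mount it. The sketch below only records the commands to a file, since they need a live cluster; the pool and image names are made up for illustration.

```shell
# Typical RBD client workflow (requires a live cluster and the rbd CLI);
# recorded to a file rather than executed here.
cat > /tmp/rbd_steps.txt <<'EOF'
rbd create mypool/myimage --size 4096
rbd map mypool/myimage
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/rbd
EOF
cat /tmp/rbd_steps.txt
```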
Today I configured Ceph, referencing several documents, including the official configuration guide: http://docs.ceph.com/docs/master/rados/configuration/ceph-conf/#the-configuration-file
Another expert's blog: http://my.oschina.net/oscfox/blog/217798
http://www.kissthink.com/archive/c-e-p-h-2.html, and so on.
Overall, the single-node configuration did not hit any pitfalls, but the multi-node configuration did.
Ceph Cluster Expansion. The previous article described how to create a cluster with the following structure; this article describes how to expand that cluster.
IP               Hostname       Description
192.168.40.106   dataprovider   Deployment management node
192.168.40.107   mdsnode        MON node
192.168.40.108   osdnode1       OSD node
192.168.40.14
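Expanding the cluster described above usually means driving ceph-deploy from the management node (dataprovider). The sketch below records the steps for a hypothetical new OSD host; the hostname osdnode2 and the /dev/sdb device are assumptions, not values from the article.

```shell
# Expansion steps, recorded to a file (running them requires the deploy node
# and the new host to be reachable). Hostname and disk are assumptions.
NEW_NODE="osdnode2"
cat > /tmp/expand_steps.txt <<EOF
ceph-deploy install ${NEW_NODE}
ceph-deploy osd prepare ${NEW_NODE}:/dev/sdb
ceph-deploy osd activate ${NEW_NODE}:/dev/sdb1
EOF
cat /tmp/expand_steps.txt
```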
From /etc/ceph/ceph.client.admin.keyring, take the admin key (aqc8yihw2gslebaawum3nqi6h8x0veciakld1w== in this cluster) and save it into a secret file, e.g. /etc/ceph/admin.secret. Then edit /etc/fstab (vim /etc/fstab) and add:

172.16.66.142:6789:/  /mnt/mycephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret  0 2

Restart the machine; df -hT then shows that the file system has been recognized:

Filesystem            Type  Size  Used  Avail  Use%  Mounted on
172.16.66.142:6789:/  ceph  2.9T  195G  2.7T   7%    /mnt/mycephfs
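The mount wiring above can be reproduced as follows. The sketch writes under /tmp so it can run anywhere; on a real client the files live under /etc, and the key must be the real one from ceph.client.admin.keyring (the value below is a placeholder, not a usable secret).

```shell
# Recreate the secret file and fstab entry from the text under /tmp.
mkdir -p /tmp/etc/ceph
echo 'AQDPLACEHOLDERKEYxxxxxxxxxxxxxxxxxxxx==' > /tmp/etc/ceph/admin.secret
chmod 600 /tmp/etc/ceph/admin.secret   # the secret must not be world-readable
echo '172.16.66.142:6789:/ /mnt/mycephfs ceph name=admin,secretfile=/etc/ceph/admin.secret 0 2' >> /tmp/etc/fstab
cat /tmp/etc/fstab
```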
, so as to simplify deployment and O&M while meeting the needs of different applications. In Ceph systems, "distributed" means that the system has a truly decentralized structure and no theoretical limit on scalability.
The first three articles are about the background. Starting from the fourth article, Zhang Yu introduced the Ceph structure.
The core of Ceph