Today we configure Ceph, referencing the official documentation at http://docs.ceph.com/docs/master/rados/configuration/ceph-conf/#the-configuration-file
as well as other blog posts, such as http://my.oschina.net/oscfox/blog/217798
and http://www.kissthink.com/archive/c-e-p-h-2.html.
To be changed on the MDS side:

[global]
fsid = 3734cac3-4553-4c39-89ce-e64accd5a043
mon_initial_members = ceph-osd1, ceph-osd2
mon_host = 192.168.2.242,192.168.2.243
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2
public network = 192.168.2.0/24

Then the configuration and key were issued:
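The command after that colon is cut off in the source; assuming ceph-deploy is used for management, pushing the edited configuration and the admin key out to the nodes named in mon_initial_members is typically done along these lines (a sketch, not necessarily the author's exact commands):

# push the edited ceph.conf to the cluster nodes
ceph-deploy --overwrite-conf config push ceph-osd1 ceph-osd2
# distribute the admin keyring so those nodes can run ceph CLI commands
ceph-deploy admin ceph-osd1 ceph-osd2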
First, pre-installation preparation
1.1 Introduction to the installation environment
It is recommended to install a ceph-deploy management node and a three-node Ceph storage cluster to learn Ceph, as shown in the figure.
I installed ceph-deploy on node1.
First, three machines were prepared, the names of which wer
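The usual preparation on the management node is password-less SSH from node1 to the other machines so that ceph-deploy can drive them; a minimal sketch, with node2 and node3 as placeholder names for the other two prepared hosts:

# generate a key on node1 and copy it to the other nodes (placeholder hostnames)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@node2
ssh-copy-id root@node3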
Document directory
1. Design a Ceph Cluster
3. Configure the Ceph Cluster
4. Enable Ceph to work
5. Problems Encountered during setup
Appendix 1: Modify hostname
Appendix 2: Password-less SSH access
Ceph is a relatively new distributed file system developed by the UCSC storage team. It is a network file system
= agent01
The following actions are done on the new OSD node.
Generate a key:
ceph-authtool --create-keyring --gen-key -n mds.1 /etc/ceph/keyring.mds.1
Add the key to the cluster's authentication:
ceph auth add mds.1 osd 'allow *' mon 'allow rwx' mds 'allow' -i /etc/ceph/keyring.mds.1
Start the new MDS:
/etc/init.d/ceph -a start mds.1
Add a mon
Add the agent01 MDS to the node.
Add the following to the node:
echo '10.57.1.111 ceph-mon1' >> /etc/hosts
Add the following configuration to the configuration file and synchronize it to the node:
[mon.1]
host = ceph-mon1
mon addr = 10.57.1.111:6789
Export the key and the mon map:
mkdir /tmp/ceph
ceph auth get mon. -o /tmp/ceph/keyring.mon
ceph mon getmap -o /tmp/ceph/monmap
Initialize a new
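The snippet breaks off at "Initialize a new"; on releases of this vintage, initializing and starting the new monitor from the exported monmap and keyring usually looks like the following sketch (not necessarily the author's exact commands):

# build the new monitor's data directory from the exported monmap and key
ceph-mon -i 1 --mkfs --monmap /tmp/ceph/monmap --keyring /tmp/ceph/keyring.mon
# start it so it can join the quorum
/etc/init.d/ceph start mon.1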
Deployment Installation
This covers the problems encountered during the whole Ceph installation process, along with solutions that worked in my own testing; they do not necessarily represent everyone else's view. I worked directly as root on the servers, so I did not deal with creating a separate deployment user. The machines run CentOS 7.3, the Ceph version installed is Jewel, and only 3 nodes are used for now.
Node IP, name, role:
10.0.1.92 e10
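On CentOS 7.3, rolling out the Jewel release is usually driven from the ceph-deploy admin node; a minimal sketch, assuming ceph-deploy is already installed and using placeholder hostnames node1/node2/node3 for the three nodes:

ceph-deploy new node1 node2 node3                        # write ceph.conf with the initial monitors
ceph-deploy install --release jewel node1 node2 node3    # install the Jewel packages on each node
ceph-deploy mon create-initial                           # bring up the monitors and gather keys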
Ceph provides three storage interfaces: object storage, block storage, and a file system. The following figure shows the architecture of the Ceph storage cluster. We are mainly concerned with block storage; in the second half of the year we will gradually transition the virtual machine backend storage from SAN to Ceph, although it is still version 0.94,
In the previous blog post, we completed the SonarQube deployment through a Kubernetes Deployment and Service. It seems to work, but there is still a big problem: a database like MySQL needs to keep its data and not lose it, yet a container loses all of its data the moment it exits. Once our mysql-sonar container is restarted, any settings we have made in SonarQube will be lost. So we have to find a way to keep the MySQL data.
In the article "Using Ceph RBD to provide storage volumes for Kubernetes clusters," we learned that one step in the integration of Kubernetes and Ceph is to manually create the RBD image in the Ceph OSD pool. We need to find a way to remove this manual step. The first th
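For reference, the manual step in question is an rbd create against the pool used by Kubernetes; a sketch with hypothetical pool and image names, since the article's own values are not shown here:

# create the image that the Kubernetes rbd volume will later map
rbd create rbd/k8s-pv-0001 --size 10240 --image-format 2 --image-feature layering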
device depends on the file system, and Ceph's client-side and server-side object correctness checking can only rely largely on the read-verify mechanism; in the case of data migration, the object information of the different replicas needs to be compared synchronously to ensure correctness. The current asynchronous mode leaves open the possibility that erroneous data is returned during that period.
Reference documentation: The test in this article is referenced in
Ceph clients: Most Ceph users do not store objects directly in the Ceph storage cluster; they typically use one or more of the Ceph block device, the Ceph file system, and Ceph object storage.
Block device: To practice t
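A quick way to exercise the block device from a client is sketched below, with a hypothetical image name; it assumes the client has the cluster's ceph.conf and an admin keyring:

rbd create test-img --size 1024           # 1 GiB image in the default rbd pool
sudo rbd map test-img                     # expose it as a local /dev/rbd* device
sudo mkfs.ext4 /dev/rbd/rbd/test-img      # format it
sudo mount /dev/rbd/rbd/test-img /mnt     # and mount it like any block device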
Ceph monitoring: ceph-dash installation
There are a lot of Ceph monitoring tools, such as Calamari or Inkscope. When I first tried to install those, they all failed, and then ceph-dash caught my eye. According to the official description of ceph-dash, I personally think it is
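Getting ceph-dash running is usually just cloning the project and starting its bundled Flask app; a sketch, assuming the Flask and rados Python bindings are available and a readable ceph.conf/keyring is present on the node:

git clone https://github.com/Crapworks/ceph-dash.git
cd ceph-dash
./ceph-dash.py        # serves the dashboard with Flask's built-in server (port 5000 by default)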
Ceph Cluster Expansion
The previous article described how to create a cluster with the following structure; this article describes how to expand it.
IP               Hostname       Description
192.168.40.106   Dataprovider   Deployment management node
192.168.40.107   Mdsnode        MON node
192.168.40.108   Osdnode1       OSD node
192.168.40.14
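Expanding a cluster like this is normally driven from the Dataprovider management node with ceph-deploy; the sketch below uses a hypothetical new OSD node name and data path, not values from the original article:

ceph-deploy install osdnode2                        # install Ceph on the new node
ceph-deploy osd prepare osdnode2:/var/local/osd2    # prepare an OSD on it
ceph-deploy osd activate osdnode2:/var/local/osd2   # activate it so it joins the cluster
ceph-deploy mds create mdsnode                      # optionally add an MDS for CephFS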
[root@bgw-os-node153 ~]# umount /var/lib/ceph/osd/ceph-11
3. Remove the MDS
1. Directly stop the MDS process on this node:
[root@bgw-os-node153 ~]# /etc/init.d/ceph stop mds
=== mds.bgw-os-node153 ===
Stopping Ceph mds.bgw-os-node153 on bgw-os-node153...kill 4981...done
[root@bgw-os-node153 ~]#
2. Remove this MDS's authentication:
[root@bgw-os-node153 ~]#
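The snippet is cut off at step 2; deleting an MDS's cephx entry is typically done with ceph auth del, using the daemon name seen in the output above (a sketch, not necessarily the author's exact command):

ceph auth del mds.bgw-os-node153     # remove the MDS's authentication key from the cluster
# the corresponding [mds.bgw-os-node153] section is then removed from ceph.conf as well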
Ceph file system getting started
Zhang Yu (@Yi Ling Yan), an open-source technology expert, shared Ceph at the C3 salon and recently wrote a series of blog posts analyzing Ceph in one go. There are 8 articles in total:
"Ceph Analysis" series, part 1 -- Preface
...:/home/user1/cephfs# cd
...:~# umount /home/user1/cephfs
== Mount automatically at boot ==
172.16.66.142:6789 -- the IP and port of any mon node
/home/user1/cephfs -- the mount point
fuse.ceph -- the filesystem type when mounting with ceph-fuse
name=admin -- the client name; the admin key is in /etc/ceph/ceph.client.admin.keyring
conf -- the ceph configuration file for the cluster
vim /etc/fstab
id=admin,conf=/etc/ceph/ceph.conf
1. Environment and description
Deploy ceph-0.87 on Ubuntu 14.04 server, set up rbdmap to map/unmap RBD block devices automatically, and export the RBD blocks over iSCSI with a tgt build that has RBD support.
2. Installing Ceph
1) Configure hostnames and password-less login
...:/etc/ceph# cat /etc/hosts
127.0.0.1 localhost
192.168.108.4 osd2.osd2 osd2
192.168.108.3 osd1.osd1 osd1
192.168.108.2 mon0.mon0 mon0
# An example follows:
ss
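The rbdmap and tgt pieces described in the environment section typically come together along these lines; the pool, image, and IQN names here are hypothetical placeholders, not values from the original post:

# /etc/ceph/rbdmap -- images listed here are mapped automatically by the rbdmap service
rbd/iscsi-img1  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
# export the mapped device through tgt
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2015-01.com.example:rbd
tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/rbd/rbd/iscsi-img1
tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL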