ceph auth del osd.3    # remove the OSD from authentication
ceph osd rm 3          # delete the OSD
5.5. Copy the configuration file and admin key to each node so that you do not need to specify the monitor address and ceph.client.admin.keyring each time you run a Ceph command:
ceph-deploy admin admin node1 node2 node3
5.6. View the cluster health status:
ceph health
6. Configure the block device (client node)
6.1. Create an image:
rbd create foo --
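The rbd create command above is cut off in the source. As a sketch of what the full step typically looks like (only the image name foo comes from the text; the 4096 MB size and the default rbd pool are assumptions):
rbd create foo --size 4096    # create a 4096 MB image in the default rbd pool
rbd ls                        # verify the image exists
rbd info foo                  # show its size, order and features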
1. Current status
2. Add a Mon (mon.node2): SSH to node2 (172.10.2.172)
vim /etc/ceph/ceph.conf    # add the mon.node2 related configuration
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
monmaptool --create --add node1 172.10.2.172 --fsid
mkdir -p /var/lib/ceph/mon/
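The text stops before the new monitor is actually initialized and started. A minimal sketch of the usual remaining steps, assuming the monmap produced above was written to /tmp/monmap and the default data directory /var/lib/ceph/mon/ceph-node2 is used (both paths are assumptions):
mkdir -p /var/lib/ceph/mon/ceph-node2
ceph-mon --mkfs -i node2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring    # initialize the monitor's data directory
service ceph start mon.node2    # start the new monitor
ceph mon stat                   # confirm mon.node2 has joined the quorum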
objects based on the cluster map. This map is replicated to all nodes (both storage and client nodes) and is updated by lazily propagating incremental changes. By giving the storage nodes complete information about data distribution in the system, the devices are able to self-manage data replication, perform consistent and safe updates, participate in failure detection, respond to failures, and handle the data-distribution changes that result from replicating and migrating data objects
The OpenStack Ceph series is a collection of notes based on the Ceph Cookbook, divided into the following sections:
1. "Ceph Profile"
2. "Ceph Cluster Operations"
3. "Ceph Block Device Management and OpenStack Configuration"
4. "In-depth
, which splits files into 2M~4M objects and stores them in RADOS; both small files and large files are supported.
Ceph has two important daemon processes: OSDs and Monitors.
OSD (Object Storage Device): the process responsible for responding to client requests and returning the requested data. A Ceph cluster typically has many OSDs, which support automatic backup
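A quick way to see both daemon types in a running cluster (a sketch; it assumes the admin keyring is available on the node where you run these commands):
ceph osd stat    # number of OSDs and how many are up/in
ceph osd tree    # OSD layout across hosts
ceph mon stat    # monitors and the current quorum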
problem emerged: error: expat.h: No such file or directory
error: 'XML_Parser' does not name a type
In this case the package expat-devel is missing; run yum install expat-devel to fix it.
OK, this time the compilation passes. It was not easy.
Then execute make install.
No problem. Now you can configure the ceph.conf file to form a small Ceph cluster for testing.
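A minimal sketch of what such a test ceph.conf might contain (the fsid, monitor name and IP are placeholders, not values from the original text):
[global]
fsid = <cluster-fsid>
mon initial members = node1
mon host = <mon-ip>
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 2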
Based on the quick setup of the Ceph storage cluster environment, you can perform the related object operations:
1. Set the OSD pool min_size
First look at the pools with the rados command, as follows:
# rados lspools
data
metadata
rbd
The default min_size is configured to 2; since only one OSD instance is available here, it needs to be set to 1.
ceph osd pool get {pool-name} {key}
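A sketch of checking and changing min_size on the rbd pool listed above, using the standard ceph CLI:
ceph osd pool get rbd min_size    # shows the current value (2 by default)
ceph osd pool set rbd min_size 1  # allow I/O with a single replica available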
Application and fault handling of Ceph in KVM virtualization
In a distributed cluster, Ceph provides users with object storage, block storage and file storage.
Benefits: unified storage, no single point of failure, multi-copy data redundancy, scalable storage capacity, automatic fault tolerance and self-healing.
The three major role components of Ceph and their roles
placement group is in the clean state, the primary OSD and the replica OSDs have successfully peered, and no placement group has strayed. Ceph has replicated every object in the placement group the specified number of times.
5. Degraded
When a client writes data to the primary OSD, the primary OSD is responsible for writing the replicas to the remaining replica OSDs. Until the primary OSD has written the object to the replica OSDs, the placement group remains in the degraded state
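A quick way to see these placement-group states on a live cluster (standard ceph CLI; a sketch):
ceph pg stat          # summary of PG states (active+clean, degraded, ...)
ceph health detail    # lists degraded or stuck PGs if any exist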
In planning a Ceph distributed storage cluster environment, the choice of hardware is very important, since it determines the performance of the entire Ceph cluster. The following combs through some hardware selection criteria, for reference:
1) CPU selection
The Ceph metadata server dynamically redistributes its load,
cluster, it is obvious that when the physical hard disks behind all three OSDs are damaged, the data cannot be recovered. Therefore, the reliability of the cluster is directly tied to the reliability of the hard disks themselves. Now assume a larger Ceph environment: 30 OSD nodes, 3 racks with 10 OSD nodes per rack, each OSD node still corresponding to
The Ceph RADOSGW object storage interface: after researching the configuration for a long time, I now share it below. The prerequisite for configuring RADOSGW is that you have already successfully configured the Ceph cluster; you can view the Ceph cluster through
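The sentence above is cut off; as a sketch, the cluster is usually verified before configuring RADOSGW with:
ceph -s        # overall cluster status (monitors, OSDs, PGs, usage)
ceph health    # should report HEALTH_OK before adding the gateway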
1. Stop the Ceph OSD process.
service ceph stop osd
2. Rebalance the data in the Ceph cluster.
3. Delete the OSD node once all PGs have rebalanced and are active+clean.
Ceph cluster status before deletion:
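The status output itself is missing from the source. For completeness, a minimal sketch of the usual removal sequence once the cluster is back to active+clean (osd.3 is the ID used earlier on this page; adjust to your own OSD):
ceph osd out osd.3             # mark the OSD out so data migrates off it
# wait until ceph -s shows all PGs active+clean again
service ceph stop osd.3        # stop the daemon on its node
ceph osd crush remove osd.3    # remove it from the CRUSH map
ceph auth del osd.3            # remove its authentication key
ceph osd rm 3                  # remove the OSD from the cluster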
Ceph practice summary: configuration of the RBD block device client in CentOS
Before proceeding with this chapter, you need to complete the basic cluster build; please refer to http://blog.csdn.net/eric_sunah/article/details/40862215
Ceph block devices are also called RBD, or RADOS Block Device.
During the experiment, a virtual machine can be used as the client.
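A minimal sketch of using the block device from the client once the cluster is reachable (the image name foo, the device path /dev/rbd0 and the mount point /mnt/rbd are assumptions):
modprobe rbd                # load the kernel RBD module on the client
rbd map foo                 # map the image; a device such as /dev/rbd0 appears
mkfs.ext4 /dev/rbd0         # create a filesystem on the mapped device
mkdir -p /mnt/rbd && mount /dev/rbd0 /mnt/rbd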
A pool is a logical partition in which Ceph stores data, and it acts as a namespace. Other distributed storage systems, such as MogileFS, Couchbase, and Swift, also have the concept of a pool, although they call it by different names. Each pool contains a certain number of PGs; the objects in a PG are mapped to different OSDs, so a pool is distributed throughout the cluster.
Apart from isolating data, we can also set different optimization strategies for different pools, such as the number of replicas, the number of PGs, or the CRUSH rules used for data placement.
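A small sketch of creating a pool and tuning it per pool (the pool name mypool and the PG count 128 are placeholders):
ceph osd pool create mypool 128      # create a pool with 128 placement groups
ceph osd pool set mypool size 3      # keep 3 replicas of every object in this pool
ceph osd pool set mypool min_size 2  # allow I/O while at least 2 replicas are available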