Before starting, I would like to thank UnitedStack engineer Zhu Rongze for his great help and careful advice on this blog post. This article gives a more explicit analysis and elaboration of the Ceph reliability calculation method (https://www.ustack.com/blog/build-block-storage-service/) that UnitedStack presented at the Paris summit, for readers interested in this topic to discuss and study; corrections are welcome if anything here is inappropriate.
Ceph practice summary: configuring the RBD block device client on CentOS
Before proceeding with this chapter, you need to complete the basic cluster setup; see http://blog.csdn.net/eric_sunah/article/details/40862215
Ceph block devices are also called RBD, i.e. the RADOS Block Device.
During the experiment, a virtual machine can be used as the ceph-c
First, COSBench installation. COSBench is the Intel team's benchmarking tool for cloud storage, developed in Java; the full name is Cloud Object Storage Benchmark. Oddly, although the tool appears to have been developed by Intel's Shanghai team, there is no Chinese-language documentation for it. Like most performance-testing tools, COSBench is split into a controller (console) and drivers that originate the requests, and the drivers can be distributed. It supports Swift, S3, OpenSt
According to http://ovirt-china.org/mediawiki/index.php/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2Ceph_Calamari
The original article is as follows:
Calamari is a tool for managing and monitoring Ceph clusters, and it provides a REST API.
The recommended deployment platform is Ubuntu, but this article uses CentOS 6.5. Installation and deployment:
Obtain the Calamari code:
# git clone https://github.com/ceph/calamari.git
# g
First of all, I have to apologize: the test environment was torn down and I no longer have machines to test on, so the earlier articles in this series stop at part three for lack of time and environment. Here is a brief discussion of Fuel networking. The most complex part of an OpenStack deployment is the network; Fuel simplifies deploying OpenStack, but its network types are still confusing for beginners, so let m
When planning a Ceph distributed storage cluster environment, hardware selection is very important, since it determines the performance of the whole Ceph cluster. The following summarizes some hardware selection criteria for reference. 1) CPU selection: the Ceph metadata server dynamically redistributes its load and is CPU-sensitive, so the metadata server should have strong processor performance (such
1. Stop the Ceph OSD process:
service ceph stop osd
2. Rebalance the data in the Ceph cluster.
3. Remove the OSD node once all PGs are active+clean.
Ceph cluster status before removal:
[[email protected] ~]# ceph osd tree
# ID weight type name up/down reweight
-1
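The three steps above can be sketched as a shell session. This is a minimal sketch, assuming the OSD to be removed is osd.3 (the ID is an example; the SysV-init `service` syntax matches the era of this article):

```shell
# 1. Stop the OSD daemon (substitute your OSD id)
service ceph stop osd.3

# 2. Rebalance: mark the OSD out and wait for recovery to finish
ceph osd out 3
ceph -w            # watch until all PGs report active+clean

# 3. Remove the OSD from CRUSH, auth, and the OSD map
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3
```

These commands must run on a node with admin keyring access to a live cluster.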
environment on its own. OpenStack has a component called Cinder that provides the block storage service, but OpenStack itself has no ability to store and read data; it relies on the support of an actual block storage backend, which can be a distributed storage system such as Ceph, or a storage device such as an EMC SAN, or a local hard disk on a sto
virtual machines during the restart process; the problem arises mainly from the evacuate cleanup mechanism. This bug has been fixed in the L release. d. OpenStack's ease of use is not good enough. With Fuel, OpenStack can be installed quickly, but many configuration operations still require the command line, which is still some distance from one-click automated deployment. For another example, the more extensive
This core OpenStack tutorial has now come to its final summary.
OpenStack currently has dozens of modules; this tutorial covered the most important core modules: Keystone, Nova, Glance, Cinder, and Neutron. Please look at the following figure:
This figure is taken from https://www.openstack.org/software/project-navigator/, which shows the official 6 Core Services defined by
1. rbd ls: list the images in Ceph's default pool, rbd
2. rbd info xxx.img: show details of xxx.img
3. rbd rm xxx.img: delete xxx.img
4. rbd cp aaa.img bbb.img: copy image aaa.img to bbb.img
5. rbd rename aaa.img bbb.img: rename aaa.img to bbb.img
6. rbd import aaa.img: import the local aaa.img into the Ceph cluster
7. rbd export aaa.img: export aaa.img from the Ceph cluster to a l
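The commands above can be sketched as one session. A minimal sketch, assuming the image names `aaa.img`/`xxx.img` from the list (they are examples, not real images) and admin access to the cluster:

```shell
rbd ls                                # list images in the default pool (rbd)
rbd info xxx.img                      # details of one image
rbd cp aaa.img bbb.img                # copy an image
rbd rename bbb.img ccc.img            # rename an image
rbd import ./aaa.img aaa.img          # import a local file as an image
rbd export aaa.img ./aaa-backup.img   # export an image to a local file
rbd rm xxx.img                        # delete an image
```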
0. Introduction
This article describes how to configure cache pool tiering. A cache pool provides a scalable cache for Ceph hotspot data, or can be used directly as a high-speed pool. To create a cache pool: first build a virtual bucket tree from SSD disks, then create the cache pool and set its CRUSH mapping rule and related configuration, and finally associate the pool to be accelerated with the cache pool.
1. Build SSD bucket
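The final association step can be sketched with the standard tiering commands. A minimal sketch, assuming a data pool `datapool` and an SSD-backed pool `cachepool` (both names are examples, and the SSD CRUSH rule is assumed to exist already):

```shell
# Attach cachepool as a writeback cache tier in front of datapool
ceph osd tier add datapool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay datapool cachepool

# Basic cache-pool configuration: hit-set tracking and a size target
ceph osd pool set cachepool hit_set_type bloom
ceph osd pool set cachepool target_max_bytes 1099511627776   # 1 TiB, example value
```

With the overlay set, client I/O to datapool is transparently routed through cachepool.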
This section covers the following: adding a monitor node, adding OSD nodes, and removing an OSD node.
1: Adding a monitor node
Here we reuse the previous environment; adding a monitor node is very simple. First get the monitor node's environment ready: change its hosts file and hostname, and update the hosts file on the deploy node. On the deployment node:
cd first-ceph/
ceph-deploy new mon2 mon3  // here refers only to w
Tags: Ceph storage umount error
Phenomenon:
[email protected]:~# umount /mnt/ceph-zhangbo
umount: /mnt/ceph-zhangbo: the device is busy.
(In some cases useful information about processes that use the device can be found by lsof(8) or fuser(1))
Workaround:
1. Following the hint above, use fuser to check who is using the mount:
[email protected]:~# fuser -m /mnt/
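The workaround can be sketched end to end. A minimal sketch, using the article's mount point `/mnt/ceph-zhangbo` (requires root; `fuser -k` kills processes, so use it with care):

```shell
fuser -m /mnt/ceph-zhangbo     # list PIDs holding the mount busy
lsof +D /mnt/ceph-zhangbo      # alternative: show which files are open

fuser -km /mnt/ceph-zhangbo    # forcibly kill the holders (careful!)
umount /mnt/ceph-zhangbo

# Last resort: lazy unmount detaches now and cleans up once no longer busy
umount -l /mnt/ceph-zhangbo
```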
As Ceph is used in more and more storage scenarios, its performance and tuning strategy has become a topic of close attention for users, and one of the key factors affecting performance is the OSD storage engine implementation. The Ceph base component RADOS is a strongly consistent object storage system, and the storage engines supported by its OSD are as follows: the ObjectStore layer encapsul
Editing a CRUSH map:
1. Get the CRUSH map;
2. Decompile the CRUSH map;
3. Edit at least one device, bucket, or rule;
4. Recompile the CRUSH map;
5. Re-inject the CRUSH map.
Get the CRUSH map
To get the cluster's CRUSH map, execute:
ceph osd getcrushmap -o {compiled-crushmap-filename}
Ceph writes (-o) the CRUSH map to the file you specify; because the CRUSH map is in compiled form, it must be decompiled before it can be edited.
Decompile the CRUSH map
To decompile the CRUSH map, execute the c
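The five steps above can be sketched as a complete edit cycle. A minimal sketch, with example filenames:

```shell
ceph osd getcrushmap -o crushmap.compiled        # 1. get the compiled map
crushtool -d crushmap.compiled -o crushmap.txt   # 2. decompile to text
vi crushmap.txt                                  # 3. edit devices/buckets/rules
crushtool -c crushmap.txt -o crushmap.new        # 4. recompile
ceph osd setcrushmap -i crushmap.new             # 5. inject the new map
```

Injecting a new CRUSH map can trigger data movement, so do this during a maintenance window on a healthy cluster.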
CRUSH algorithm
1. The purpose of CRUSH
To optimize data distribution, reorganize data efficiently, flexibly constrain object replica placement, and maximize data safety when hardware fails.
2. The process
In the Ceph architecture, the Ceph client reads and writes directly to the RADOS objects stored on the OSDs, so Ceph must map (pool, object) → (pool, PG) → OSD set → OSD
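The mapping chain above can be observed directly. A minimal sketch, assuming a pool named `rbd` and an object name `myobject` (both are examples):

```shell
# Ask the cluster where an object maps: object -> PG -> OSD set
ceph osd map rbd myobject
```

Typical output has the form `osdmap eNN pool 'rbd' (0) object 'myobject' -> pg 0.xxxx -> up [1,2] acting [1,2]`, showing each step of the CRUSH mapping.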
Compiling ceph-dokan
The following was compiled and run on a 64-bit Win7 machine.
1. Download the source code; for compilation, refer to the README.md inside:
https://github.com/ketor/ceph-dokan
2. Download and install TDM-GCC, selecting 32-bit (the default) during installation:
https://sourceforge.net/projects/tdm-gcc/files/TDM-GCC%20Installer/tdm-gcc-5.1.0-3.exe/download
3. Download and install Dokan, selecting v
1. Environment
Ceph was deployed with Kolla. Because OSD 0 occupied the SATA 0 channel, the system disk had to be swapped with OSD 0 via the jumpers; after the swap, OSD 0 no longer started properly.
2. Cause analysis
Before the jumper swap, OSD 0's device file was /dev/sda2; after the swap it became /dev/sdc2. At startup the OSD specifies its journal device with --osd-journal /dev/sda2, and because the journal partition's device name changed to /dev/s
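A fix consistent with this analysis can be sketched as repointing the OSD's journal symlink at the renamed partition. This is a sketch under assumptions: osd.0 and /dev/sdc2 follow the article, and the journal symlink is assumed to live at the conventional /var/lib/ceph/osd/ceph-0/journal path (your deployment may differ):

```shell
service ceph stop osd.0

# Repoint the journal at the renamed device
ln -sf /dev/sdc2 /var/lib/ceph/osd/ceph-0/journal

# More robust: reference the partition by a stable identifier instead,
# so future device-name changes do not break the OSD again
ls -l /dev/disk/by-partuuid/    # find the journal partition's UUID
# ln -sf /dev/disk/by-partuuid/<uuid> /var/lib/ceph/osd/ceph-0/journal

service ceph start osd.0
```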