1. Introduction
Ceph is a unified, distributed storage system designed for outstanding performance, reliability, and scalability. It provides three interfaces, for object storage, block storage, and file system storage, to simplify deployment and operations while meeting the needs of different applications.
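To make the three interfaces concrete, here is a minimal sketch that touches each of them once; the pool and image names (data, vol1) are made-up examples, not from this text:
rados put obj.txt ./obj.txt --pool=data     # object storage: store a file as a RADOS object
rbd create data/vol1 --size 1024            # block storage: create a 1 GiB RBD image
ceph fs ls                                  # file system storage: list CephFS file systems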
2. Installation Preparation
Note: the following commands may appear garbled when copied and pasted, if the command prompt ca…
Extended development of the Ceph management platform Calamari
I haven't written a post for nearly half a year; maybe I am getting lazy. But writing things down helps you consolidate them, so let me record this. In the more than half a year since I joined the company I have become familiar with some related work; I am currently mainly engaged in the research and development of distributed systems, and the current development is mainly at the management level and ha…
Ceph performance tuning: journal and tcmalloc
Recently I ran a simple performance test on Ceph and found that the journal and the version of tcmalloc both have a large impact on performance.
Test results
# rados -p tmppool -b 4096 bench 120 write -t 32 --run-name test1
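# -b 4096: 4 KiB objects; bench 120 write: a 120-second write test; -t 32: 32 concurrent ops; --run-name labels the run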
(Table columns: Object size | BW (MB/s) | Latency (s) | Pool size | Journal | tcmalloc version | Max thre…)
[email protected]:~# ceph osd tree
# id    weight     type name       up/down  reweight
-1      0.05997    root default
-2      0.02998        host osd0
1       0.009995           osd.1   up       1
2       0.009995           osd.2   up       1
3       0.009995           osd.3   up       1
-3      0.02998        host osd1
5       0.009995           osd.5   up       1
6       0.009995           osd.6   up       1
7       0.009995           osd.7   up       1
Storage node
Before you go any further, consider this: Ceph is a distributed storage system, regardless of the details…
Solution for a Ceph cluster disk with no available space
Fault description
While an OpenStack + Ceph cluster was in use, virtual machines wrote a large amount of new data, the cluster's disks were consumed very quickly, and there was no free space left; the virtual machines could not operate, and no operation on the Ceph cluster could be performed.
Fault symptom
An erro…
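Even with the error text cut off here, the condition is easy to confirm; two standard commands (my addition, not from the original write-up):
ceph health detail    # reports "full osd(s)" and lists the OSDs past the full ratio
ceph df               # global and per-pool usage, to see how much space is actually left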
If you want to reprint this, please credit the author; original address: http://xiaoquqi.github.io/blog/2015/06/28/ceph-performance-optimization-summary/
I've been busy with the optimization and testing of Ceph storage and have looked at all kinds of material, but there doesn't seem to be a single article laying out the methodology, so I would like to summarize it here. A lot of the content is not my own original work; it is a summary…
The use case is simple: I want to use both SSD disks and SATA disks within the same machine, and ultimately create pools pointing at either the SSD or the SATA disks. In order to achieve our goal, we need to modify the CRUSH map. My example has 2 SATA disks and 2 SSD disks on each host, and I have 3 hosts in total.
The intended strategy is illustrated in the following picture:
[Figure: intended SSD/SATA pool layout]
I. CRUSH Map
CRUSH is very flexible and topology-aware, which is extremely useful in our scenario. We are about to create two different root…
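To make that concrete, here is a minimal sketch of the two roots and one matching rule, in decompiled CRUSH map form; the bucket names, IDs, and weights below are illustrative assumptions, not taken from the post:
root ssd {
        id -10
        alg straw
        hash 0  # rjenkins1
        item node1-ssd weight 2.000   # per-host SSD bucket, assumed defined elsewhere
        item node2-ssd weight 2.000
        item node3-ssd weight 2.000
}
root sata {
        id -11
        alg straw
        hash 0  # rjenkins1
        item node1-sata weight 2.000
        item node2-sata weight 2.000
        item node3-sata weight 2.000
}
rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
A pool is then pointed at the rule, for example with ceph osd pool create ssd-pool 128 128 followed by ceph osd pool set ssd-pool crush_ruleset 1 (the crush_ruleset spelling matches pre-Luminous releases such as the ones these posts cover).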
About Ceph
Whether you want to provide Ceph object storage and/or Ceph block devices for a cloud platform, deploy a Ceph file system, or use Ceph for some other purpose, all Ceph storage cluster deployments start with deploying a…
1. Ceph integration with OpenStack (providing the cloud-disk feature for cloud hosts)
Created by: Linhaifeng
To deploy a cinder-volume node, a possible error during deployment (please refer to the official documentation for the deployment process itself):
Error content: 2016-05-25 08:49:54.917 24148 TRACE cinder RuntimeError: Could not bind to 0.0.0.0:8776 after trying for seconds
Problem analysis: runtim…
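The trace shows that the bind to port 8776 failed, which usually means another process, typically an already-running cinder-api, is holding the port. A quick way to check which process owns it (my suggestion, not part of the original analysis):
ss -tlnp | grep 8776      # or: netstat -tlnp | grep 8776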
The simplest ceph.conf configuration is as follows (the key names follow the standard template, and some values are truncated):
fsid = 798ed076-8094-429e-9e27-…
mon initial members = ceph-…
mon host = 192.168.1.112
public network = 192.168.1.0/2…
The command is as follows:
ps -aux | grep ceph
Output on ceph-admin:
ceph      2108  0.2  2.2  873932 43060 ?  Ssl  /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser c…
This article was also published on the Grand Game G-Cloud public account; it is pasted here for easy reference.
Ceph: I believe many IT friends have heard of it. Riding on the coattails of OpenStack, Ceph has caught fire and keeps getting hotter. However, it is not easy to use Ceph well; in QQ groups you often hear beginners complain that Ceph's performance is too poor to be usable. I…
[TOC]
k8s mounting Ceph RBD
There are two ways for k8s to mount a Ceph RBD. One is the traditional PV/PVC way: the administrator pre-creates the related PV and PVC, and the corresponding Deployment or ReplicationController then mounts the PVC. Since k8s 1.4, Kubernetes offers a more convenient way to create PVs dynamically, namely the StorageClass. With a StorageClass, you do not have to create a fi…
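As a minimal sketch of the StorageClass path, using the in-tree RBD provisioner; the monitor address, pool, and secret names below are illustrative assumptions:
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.112:6789        # assumed monitor address
  adminId: admin
  adminSecretName: ceph-admin-secret  # assumed pre-created secrets
  adminSecretNamespace: kube-system
  pool: rbd
  userId: kube
  userSecretName: ceph-user-secret
EOF
Any PVC that requests storageClassName: ceph-rbd will then get its PV provisioned on demand.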
CentOS is the community build of Red Hat Enterprise Linux, and it fully supports RPM installation; this Ceph installation uses RPM packages. However, although RPM packages are easy to install, they pull in a lot of dependencies. The yum tool makes installation convenient, but it is strongly affected by network bandwidth. In practice, if the bandwidth is poor, downloading and installing takes a very long time, which is unacceptable…
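One common workaround, offered as a sketch rather than something from the original post, is to point yum at a nearby Ceph mirror; the mirror URL and the jewel release below are assumptions:
cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=Ceph packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
EOF
yum makecache
yum install -y ceph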
1. Installation Environment
Admin -----|----- node1 (MON, OSD): sda is the system disk, sdb and sdc are OSD disks
           |----- node2 (MON, OSD): sda is the system disk, sdb and sdc are OSD disks
           |----- node3 (MON, OSD): sda is the system disk, sdb and sdc are OSD disks
           |----- Client
Ceph monitors communicate on port 6789 by default, and OSDs communicate with each other on ports in the 6800:7300 range by default.
2. Preparation work (all nodes)
2.1. Modify th…
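If a firewall is running on the nodes, those default ports must be reachable; a firewalld sketch (my addition, not from the original post):
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload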
1. Current status
2. Add a mon (mon.node2): ssh to node2 at 172.10.2.172
vim /etc/ceph/ceph.conf    # add the mon.node2 related configuration
ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
monmaptool --create --add node1 172.10.2.172 --fsid …
mkdir -p /var/lib/ceph/mon/…
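The remaining steps, sketched from the standard monitor-addition flow; this assumes the monmap built above was written to /tmp/monmap and that the data directory follows the default $cluster-$id layout:
ceph-mon -i node2 --mkfs --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring   # populate /var/lib/ceph/mon/ceph-node2
ceph-mon -i node2 --public-addr 172.10.2.172                                    # start the new monitor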
Here you will hit an error, because the jewel release of Ceph requires the journal to be owned by ceph:ceph; the error looks like this:
journalctl -xeu ceph-osd@9.service
… 09:54:05 k8s-master ceph-osd[2848]: starting osd.9 at :/0, osd_data /var/lib/ceph/osd/ceph-9 /var/lib/ce…
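The usual fix, under the assumption of the default on-disk layout; adjust if the journal lives on a separate partition:
chown -R ceph:ceph /var/lib/ceph/osd/ceph-9
# a journal on its own raw partition needs the same ownership on its device node,
# e.g. chown ceph:ceph /dev/sdb1 (device name assumed)
systemctl restart ceph-osd@9.service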