1. On the management node, go to the directory where you just created the deployment configuration, and use ceph-deploy to perform the following steps:
mkdir /opt/cluster-ceph
cd /opt/cluster-ceph
ceph-deploy new master1 master2 master3
2. Install Ceph:
yum install --downloadonly --downloaddir=/tmp/ceph ...
yum localinstall -C -y --disablerepo=* /tmp/ceph/*.rpm
3. Configure the initial...
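The commands above can be sketched as a single script. This is only a sketch: the /opt/cluster-ceph path and the master1/master2/master3 hostnames are taken from the (garbled) original text, a scratch directory is used so the script runs without root, and the cluster-side commands need a real management node with ceph-deploy installed, so they are shown as comments.

```shell
# Working directory for ceph-deploy; /tmp stand-in for /opt/cluster-ceph
# so this sketch runs unprivileged.
CLUSTER_DIR="${CLUSTER_DIR:-/tmp/cluster-ceph}"
mkdir -p "$CLUSTER_DIR"
cd "$CLUSTER_DIR"

# On a real management node you would then run (requires ceph-deploy and
# reachable monitor hosts, so left as comments here):
#   ceph-deploy new master1 master2 master3
#   yum install --downloadonly --downloaddir=/tmp/ceph ceph
#   yum localinstall -C -y --disablerepo='*' /tmp/ceph/*.rpm
echo "working dir: $(pwd)"
```

The `--downloadonly`/`localinstall` split lets you fetch the RPMs once on a node with good bandwidth and install them offline everywhere else.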
dominates OpenStack. OpenStack users tend to use open-source components: 93% of OpenStack clouds run KVM (the Kernel-based Virtual Machine hypervisor), and the second most common hypervisor is QEMU (16%). While VMware is committed to incorporating its own tools into the OpenStack ecosystem, only 8% of users run ESX. Open-source networking...
Ceph performance tuning: Journal and tcmalloc
Recently I ran a simple performance test against Ceph and found that both the journal configuration and the tcmalloc version have a large impact on performance. Test results:
# rados -p tmppool -b 4096 bench 120 write -t 32 --run-name test1
Object size | BW (MB/s) | Latency (s) | Pool size | Journal | tcmalloc version | Max thre...
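The table columns relate in a simple way: with fixed-size objects, bandwidth equals object size times operations per second, and with `-t 32` concurrent operations, ops/s is roughly 32 divided by the average latency. A quick sanity check of that arithmetic (the 0.01 s latency below is a made-up illustration, not a number from the article):

```shell
# Hypothetical inputs for illustration only:
OBJ_SIZE=4096      # -b 4096, bytes per object
THREADS=32         # -t 32 concurrent operations
AVG_LAT=0.01       # seconds; assumed, not a measured result

# ops/s = threads / latency; bandwidth MB/s = ops/s * object size / 1e6
BW=$(awk -v t="$THREADS" -v l="$AVG_LAT" -v s="$OBJ_SIZE" \
     'BEGIN { printf "%.1f", (t / l) * s / 1000000 }')
echo "estimated bandwidth: $BW MB/s"
```

If a measured bandwidth is far below this estimate for the observed latency, the bottleneck is usually the journal device or tcmalloc thread-cache contention, which is what the article's table is comparing.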
Extended development of the Ceph management platform Calamari. I haven't written a log for nearly half a year; maybe I am getting lazy. Still, writing things down helps knowledge accumulate, so let me record the extended development of the Ceph management platform Calamari.
About Ceph: whether you want to provide Ceph object storage and/or Ceph block devices for a cloud platform, deploy a Ceph file system, or use Ceph for some other purpose, every Ceph storage cluster deployment starts with deploying a...
The simplest ceph.conf configuration is as follows:
fsid = 798ed076-8094-429e-9e27-...
mon initial members = ceph-...
mon host = 192.168.1.112
public network = 192.168.1.0/24
To verify the daemons are running, the command is:
ps -aux | grep ceph
Output on ceph-admin:
ceph  2108  0.2  2.2  873932  43060 ?  Ssl  /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser c...
- High availability (HA)
  - Concept
  - Levels
  - Cost
  - How to achieve it
  - Classification
- HA in OpenStack
  - Virtual machine HA
  - Comparison
  - Application-level HA: Heat HA templates
  - Component HA
    - MySQL HA: three approaches (master-slave synchronization, active/standby mode, ...)
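For the active/standby option in the outline above, a common pattern is a floating virtual IP managed by keepalived in front of the two MySQL servers. A minimal sketch of the keepalived configuration; the interface name, router id, priorities, and the 192.168.1.100 VIP are assumptions, not values from the original:

```
vrrp_instance VI_MYSQL {
    state MASTER             # BACKUP on the standby node
    interface eth0           # assumed NIC name
    virtual_router_id 51
    priority 100             # set lower on the standby
    virtual_ipaddress {
        192.168.1.100        # clients connect to this VIP
    }
}
```

When the active node fails, VRRP moves the VIP to the standby, so clients reconnect to the same address without any application-side changes.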
Extended development of the Ceph management platform Calamari. I haven't written a log for nearly half a year; maybe I am getting lazy. But sometimes writing things down helps knowledge accumulate, so let me record it. In the half year since I joined the company I have become familiar with some related work. Currently I am mainly engaged in the research and development of distributed systems, and the development so far mainly stays at the management level and ha...
[email protected]:~# ceph osd tree
# id    weight    type name       up/down  reweight
-1      0.05997   root default
-2      0.02998       host osd0
1       0.009995          osd.1   up       1
2       0.009995          osd.2   up       1
3       0.009995          osd.3   up       1
-3      0.02998       host osd1
5       0.009995          osd.5   up       1
6       0.009995          osd.6   up       1
7       0.009995          osd.7   up       1
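When scripting against this output, a quick way to spot problem OSDs is to filter the tree for entries whose status column is not "up". The sketch below runs against an embedded copy of the listing (osd.3 is marked down here purely for illustration; in the listing above everything is up); on a live cluster you would pipe `ceph osd tree` directly into the same awk filter.

```shell
# Sample OSD lines; on a real cluster use: ceph osd tree | awk ...
TREE='1 0.009995 osd.1 up 1
2 0.009995 osd.2 up 1
3 0.009995 osd.3 down 0
5 0.009995 osd.5 up 1'

# Print the ids of OSDs whose status column ($4) is not "up".
DOWN=$(printf '%s\n' "$TREE" | awk '$3 ~ /^osd\./ && $4 != "up" { print $3 }')
echo "down osds: $DOWN"
```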
Storage node. Before you go any further, consider this: Ceph is a distributed storage system; regardless of the details...
If you want to reprint this, please credit the author. Original address: http://xiaoquqi.github.io/blog/2015/06/28/ceph-performance-optimization-summary/ I've been busy with Ceph storage optimization and testing lately and have read various materials, but there doesn't seem to be a single article explaining the methodology, so I'd like to summarize it here. Much of the content is not original to me, but rather a sum...
1. Modify /etc/hosts so that the hostname maps to the machine's IP address (if you use the loopback address 127.0.0.1, domain name resolution apparently fails). Note: the hostname below is monster; change it to your own hostname.
10.10.105.78 monster
127.0.0.1 localhost
2. Create a directory named ceph and enter it.
3. Prepare two block devices (hard disks or LVM volumes); here we use LVM: dd if=/dev/zero of=...
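The dd command in step 3 is cut off above. A runnable sketch of preparing file-backed stand-in disks; the file names and the 100 MB size are assumptions (the original may have used real disks), and turning the files into loop devices or LVM physical volumes requires root, so that part is left as a comment.

```shell
# Create two 100 MB zero-filled backing files to stand in for block devices.
dd if=/dev/zero of=/tmp/ceph-disk0.img bs=1M count=100 2>/dev/null
dd if=/dev/zero of=/tmp/ceph-disk1.img bs=1M count=100 2>/dev/null

# With root you could then turn them into LVM volumes, e.g.:
#   losetup /dev/loop0 /tmp/ceph-disk0.img
#   pvcreate /dev/loop0 && vgcreate ceph-vg /dev/loop0

# Sanity-check the size of the first backing file (bytes).
SIZE0=$(wc -c < /tmp/ceph-disk0.img | tr -d ' \t')
echo "disk0 size: $SIZE0 bytes"
```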
Extended development of the Ceph management platform Calamari
I haven't written a log for nearly half a year; perhaps I am getting lazier and lazier. But sometimes writing things down lets knowledge accumulate, so let me come back and record it. In the half year since joining the company I have become familiar with some related work. Currently I am mainly engaged in the research and development of distributed systems, and the development so far mainly stays at the management level.
The use case is simple: I want to use both SSD disks and SATA disks within the same machine, and ultimately create pools that point to either the SSD or the SATA disks. To achieve this we need to modify the CRUSH map. My example has 2 SATA disks and 2 SSD disks on each host, and I have 3 hosts in total.
The strategy is illustrated in the following picture:
I. CRUSH Map
CRUSH is very flexible and topology-aware, which is extremely useful in our scenario. We are about to create two different roots...
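In a decompiled CRUSH map, the two roots end up looking roughly like the fragment below. This is a sketch modeled on the scenario above, not the author's actual map: the bucket id, the weights, and the hostN-ssd naming are assumptions, and the `ruleset` keyword reflects older Ceph releases.

```
root ssd {
    id -10                  # assumed bucket id; must be unique and negative
    alg straw
    hash 0                  # rjenkins1
    item host1-ssd weight 2.000
    item host2-ssd weight 2.000
    item host3-ssd weight 2.000
}

rule ssd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd           # start placement from the ssd root only
    step chooseleaf firstn 0 type host
    step emit
}
```

A mirror-image `root sata` / `rule sata` pair covers the SATA disks; a pool is then bound to a rule with something like `ceph osd pool set <pool> crush_ruleset 1` (on newer releases the option is named crush_rule).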
Ceph installation in CentOS (rpm package depends on installation)
CentOS is the community version of Red Hat Enterprise Linux and fully supports RPM installation; this Ceph installation uses RPM packages. However, although RPM packages are easy to install, they pull in many dependencies. The yum tool makes installation convenient, but it is heavily affected by network bandwidth.
Ceph deadlock failure under high IO
On a high-performance PC server, Ceph is used for VM image storage. Under stress testing, all virtual machines on the server became inaccessible.
Cause:
1. A website service is installed on the virtual machine, with Redis as the cache server. When the load is high (around 8,000 requests per second), all the VMs on the host machine...
In practice, if the network bandwidth is poor, downloading and installing the packages takes a long time, which is unacce...