Block devices: install, create, map, mount, show details, resize, unmount, unmap, delete. Make sure your Ceph storage cluster is active + clean before working with Ceph block devices. Add the client node to /etc/hosts (vim /etc/hosts): 172.16.66.144 ceph-client. Perform this quick start on the admin node. 1. On the admin node, use ceph-deploy to install Ceph on the ceph-client node.
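A minimal sketch of the whole sequence, assuming the client is named ceph-client, the default rbd pool is used, and the image is called foo (the image and mount point names are mine):

# From the admin node: install Ceph and push the admin keyring to the client
ceph-deploy install ceph-client
ceph-deploy admin ceph-client

# On ceph-client: create, map, mount, inspect, resize, unmount, unmap, delete
rbd create foo --size 4096                   # 4 GB image in the default pool
rbd map foo                                  # appears as /dev/rbd/rbd/foo (or /dev/rbd0)
mkfs.ext4 /dev/rbd/rbd/foo
mkdir -p /mnt/ceph-block
mount /dev/rbd/rbd/foo /mnt/ceph-block
rbd info foo                                 # show details
rbd resize --size 8192 foo                   # grow to 8 GB, then run resize2fs on the device
umount /mnt/ceph-block
rbd unmap /dev/rbd/rbd/foo
rbd rm foo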
In the article "using Ceph RBD to provide storage volumes for kubernetes clusters", we mentioned that: with the integration of Kubernetes and Ceph, kubernetes can use Ceph RBD to provide persistent Volume for pods within a cluster. Howeve
About Ceph: whether you want to provide Ceph object storage and/or Ceph block devices for a cloud platform, deploy a Ceph file system, or use Ceph for some other purpose, every Ceph storage cluster deployment starts with deploying the individual Ceph nodes, the network, and the Ceph storage cluster itself.
functions of the file system (you can add a processing module on both the client and the server). Although the server and client share a single codebase, the code is clear overall and relatively small. Ceph is developed in C++, and the system itself consists of multiple processes; these processes form a large cluster, with smaller clusters nested inside it. Compared with GlusterFS, the code is considerably more complex.
apt-get install ceph-deploy. 5. Create a working directory, enter it, and create the cluster: mkdir ceph-cluster; cd ceph-cluster; ceph-deploy new {mon-hostname} // create a fresh cluster
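For context, a sketch of the usual ceph-deploy sequence around that "new" step, assuming placeholder hostnames mon0, osd0, osd1 and a spare /dev/sdb on each OSD host (the older prepare/activate syntax of this ceph-deploy generation):

mkdir ceph-cluster && cd ceph-cluster
ceph-deploy new mon0                                   # writes ceph.conf and the initial monitor keyring
ceph-deploy install mon0 osd0 osd1                     # install Ceph packages on each node
ceph-deploy mon create-initial                         # create the initial monitor(s) and gather keys
ceph-deploy osd prepare osd0:/dev/sdb osd1:/dev/sdb    # prepare OSD disks
ceph-deploy osd activate osd0:/dev/sdb1 osd1:/dev/sdb1 # activate the prepared partitions
ceph-deploy admin mon0 osd0 osd1                       # push the admin keyring so "ceph" works everywhere
ceph health                                            # should eventually report HEALTH_OK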
Ceph monitoring: ceph-dash installation
There are many Ceph monitoring tools, such as Calamari or Inkscope. When I first tried to install them, they all failed, and then ceph-dash caught my eye. Going by the official description of ceph-dash, I personally think it is...
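A minimal install sketch, assuming the host already has /etc/ceph/ceph.conf and an admin keyring, and using the Crapworks GitHub repository (the dependency package names may differ by release, e.g. python-ceph vs. python-rados on older Ubuntu):

apt-get install -y git python-pip python-rados   # rados Python bindings; may be python-ceph on older releases
pip install Flask
git clone https://github.com/Crapworks/ceph-dash.git
cd ceph-dash
./ceph-dash.py        # starts the Flask app, serving the dashboard on port 5000 by default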
does not have high-level concepts such as accounts and containers. On the other hand, the librados API exposes a large amount of RADOS status information and many configuration parameters to developers, allowing them to observe the state of the RADOS system and of the objects stored in it, and to control the system's storage policies. In other words, by calling the librados API, applications can not only operate on data objects but also manage and configure the RADOS system. This is unimaginable and...
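The rados CLI, which is built on librados, exposes the same kind of object-level access and status reporting from the shell; a small sketch (the pool and object names are mine):

rados lspools                           # list pools in the RADOS cluster
rados df                                # per-pool usage and object counts
rados -p rbd put myobject /etc/hosts    # write a data object directly into a pool
rados -p rbd stat myobject              # inspect the stored object
rados -p rbd rm myobject                # remove it again
ceph osd dump | head                    # observe OSD map and configuration state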
# ceph osd tree
# id    weight    type name        up/down  reweight
-1      0.05997   root default
-2      0.02998       host osd0
1       0.009995          osd.1    up       1
2       0.009995          osd.2    up       1
3       0.009995          osd.3    up       1
-3      0.02998       host osd1
5       0.009995          osd.5    up       1
6       0.009995          osd.6    up       1
7       0.009995          osd.7    up       1
Storage nodes. Before you go any further, consider this: Ceph is a distributed storage system; regardless of the details...
Because the OSD writes its journal first and then writes the data asynchronously, journal write speed is crucial, so the choice of journal storage medium deserves attention; the fio test below illustrates why.
SSD: Intel S3500, result:
# fio --filename=/data/fio.dat --size=5G --direct=1 --sync=1 --bs=4k --iodepth=1 --numjobs=32 --thread --rw=write --runtime=120 --group_reporting --time_based --name=test_write
  write: io=3462.8MB, bw=29547KB/s, iops=7386, runt=120005msec
  clat (usec): min=99, max=51201, avg=43...
nodes that need to use the pool. Send the configuration file only to the cinder-volume node (the compute nodes obtain Ceph cluster information from the cinder-volume node, so they do not need the configuration file).
Create the storage pool volume-pool and remember its name; both the cinder-volume and compute nodes need to specify this pool in their configuration files.
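A hedged sketch of that step, assuming a PG count of 128 and a dedicated cephx user named client.cinder (both are my choices, not mandated by the text):

# Create the pool used by cinder-volume (PG count is only an example)
ceph osd pool create volume-pool 128 128

# A dedicated cephx user for cinder is common practice
ceph auth get-or-create client.cinder \
  mon 'allow r' \
  osd 'allow rwx pool=volume-pool' \
  -o /etc/ceph/ceph.client.cinder.keyring

# ceph.conf and this keyring are then copied to the cinder-volume node only
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.cinder.keyring cinder-volume-node:/etc/ceph/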
process consumes CPU resources while running, so it is common to bind each ceph-osd process to a CPU core. Of course, if you use EC (erasure coding) pools, you may need more CPU resources. The ceph-mon process does not consume much CPU, so there is no need to reserve excessive CPU for it. ceph-mds is also very CPU intensive, so it needs...
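Coming back to the OSD-to-core binding mentioned above, one way to do it is with taskset; the PID lookup pattern and core number below are only examples and depend on how the OSD was started:

# Find the PID of one ceph-osd daemon (osd.1 here) and pin it to core 2
OSD_PID=$(pgrep -f 'ceph-osd .*-i 1')
taskset -cp 2 "$OSD_PID"

# Verify the new CPU affinity
taskset -cp "$OSD_PID"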
1. Environment and description: deploy ceph-0.87 on Ubuntu 14.04 Server, set up rbdmap to map/unmap RBD block devices automatically, and export RBD images over iSCSI using a tgt build with RBD support. 2. Installing Ceph. 1) Configure hostnames and passwordless login. Example /etc/hosts (cat /etc/hosts):
127.0.0.1      localhost
192.168.108.4  osd2.osd2  osd2
192.168.108.3  osd1.osd1  osd1
192.168.108.2  mon0.mon0  mon0
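Jumping ahead to the rbdmap and iSCSI goals stated in the environment summary, a hedged sketch (the pool, image, target IQN, and keyring path are my assumptions; tgt must be compiled with the rbd backing store):

# /etc/ceph/rbdmap: images listed here are mapped at boot by the rbdmap service
# format: <pool>/<image>  id=<client>,keyring=<path>
echo 'rbd/iscsi-disk1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring' >> /etc/ceph/rbdmap
service rbdmap restart

# Export the mapped RBD image through tgt
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2014-12.com.example:rbd.iscsi-disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --bstype rbd --backing-store rbd/iscsi-disk1
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL    # allow all initiators (lab use only)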
This article was also published on the Shanda Games G-Cloud WeChat public account; it is reposted here for convenience. Ceph: I believe many IT friends have heard of it. Riding on the popularity of OpenStack, Ceph has become more and more popular. However, it is not easy to use Ceph well; in QQ groups you often hear beginners complain that Ceph's performance is too poor to be usable. I...
Summary of Ceph practice: CephFS client configuration. Because CephFS is not very stable at present, it is mostly used in experimental scenarios.
Before proceeding with this chapter, you need to complete the basic cluster setup; please refer to http://blog.csdn.net/eric_sunah/article/details/40862215
You can mount the file system in a VM or on an independent physical machine. Do not perform the following...
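Regardless of where the client runs, a typical mount looks like the sketch below, reusing the mon0 address from the hosts example above; the secret file path and the use of client.admin are my assumptions:

# Kernel client: mount CephFS from the monitor
mkdir -p /mnt/cephfs
ceph auth get-key client.admin > /etc/ceph/admin.secret
mount -t ceph 192.168.108.2:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

# Alternative: the FUSE client, which reads ceph.conf and the keyring from /etc/ceph
ceph-fuse -m 192.168.108.2:6789 /mnt/cephfs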
The ls command could not be executed and reported an input/output error; this kind of error points to a file system fault.
So I began to suspect that there was a problem with the file system.
The file system here is Ceph. Checking the Ceph logs, I found that Ceph reported a large number of fault messages when the failure occurred:
16:36:28.493424 osd.0 172.23123123:6800/96711 9195:
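When clients hit I/O errors like this, a few standard checks usually narrow down whether the problem is on the cluster side or the client side; a minimal sketch:

ceph health detail     # shows which PGs/OSDs are unhealthy and why
ceph -s                # overall cluster status, monitor quorum, PG states
ceph osd tree          # confirm whether any OSD is down or out
dmesg | tail -n 50     # kernel-side ceph/rbd client errors on the machine reporting EIO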