ceph

Read about Ceph: the latest news, videos, and discussion topics about Ceph from alibabacloud.com.

Ceph: Introduction to the RBD Implementation Principle

RBD is the block device interface provided by Ceph; this article briefly introduces its implementation principle. The official Ceph documentation tells us that Ceph is essentially an object store, which also means that Ceph block storage is actually split into a number of objects, with the mapping handled on the client side. In other words, for
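
As a concrete illustration of that object mapping, here is a minimal sketch (pool and image names are examples, not from the article) that creates an image and lists the RADOS objects backing it:

    ceph osd pool create rbdpool 64
    rbd create rbdpool/testimg --size 1024      # 1 GiB image
    rbd info rbdpool/testimg                    # note the block_name_prefix (rbd_data.<id>)
    # once data has been written to the image, it shows up as many fixed-size
    # RADOS objects sharing that prefix:
    rados -p rbdpool ls | grep rbd_data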

Ceph Calamari Installation (Ubuntu 14.04)

1. Overview: The entire Calamari deployment architecture can be simplified to the following diagram, comprising the client and the Calamari system. The Calamari system consists of the Calamari server and the agents running on the Ceph cluster. The agents keep sending data to the Calamari server, which stores the data in a database. The client can connect to the Calamari server over HTTP and display the state and information of the

Detailed steps to install Calamari on the Ceph admin-node

Ceph system: 1. Linux version: CentOS Linux release 7.1.1503. 2. Kernel version: Linux 3.10.0-229.20.1.el7.x86_64. Preliminary preparation: 1. A complete Ceph platform (including admin-node, monitor, OSD). On the admin-node, shut down the firewall and SELinux: 1. Turn off the firewall: #systemctl stop firewalld; #systemctl disable firewalld. 2. Turn off SELinux: #setenforce 0; #vim /etc/selinux/config selinu
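
A cleaned-up sketch of the preparation commands quoted above (CentOS 7; the sed edit is an assumed way to persist what the article does interactively with vim):

    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0                                                   # disable SELinux for the running system
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # persist the change across reboots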

Ceph Translation: "RADOS: A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters"

write commands, although they have the potential to encapsulate significant intelligence. When the storage cluster grows to thousands of nodes or more, the consistent management of data migration, failure detection, and failure recovery puts great pressure on clients, controllers, and metadata directory nodes, and limits scalability. We have designed and implemented RADOS, a reliable, automated distributed object store that seeks to distribute device intelligence across complex, thousands-of-node-scale c

Ceph Cache Tier

Cache Tier is a Ceph server-side caching scheme. Simply put, a cache layer is added so that clients interact directly with the cache layer, which improves access speed, while a storage layer at the back end actually stores the large volume of data. The premise of tiered storage is that access to stored data is not uniform: some data is hot and most is not. A common rule of thumb is the 80/20 principle, i.e. 80% of an application's accesses touch only 20% of its data; this 20
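
A minimal sketch of wiring a cache tier in front of a storage pool; the pool names coldpool and hotpool are examples, not from the article:

    ceph osd pool create coldpool 128
    ceph osd pool create hotpool 64
    ceph osd tier add coldpool hotpool            # attach hotpool as a cache tier of coldpool
    ceph osd tier cache-mode hotpool writeback    # clients read and write through the cache tier
    ceph osd tier set-overlay coldpool hotpool    # redirect client traffic to the cache tier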

Ceph Basic Operations Summary

I. Ceph drive replacement process: 1. Delete the OSD: (a) stop the OSD daemon; (b) mark the OSD out; (c) remove the OSD from the CRUSH map; (d) delete the Ceph authentication keys; (e) remove the OSD from the Ceph cluster (the commands are reproduced below). 2. Add an OSD (warning: add after deletion, OSD
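
The removal commands behind steps (a) through (e), with X standing for the OSD id; the Upstart stop syntax is the one the article quotes, with the systemd form noted as an alternative:

    stop ceph-osd id=X             # Upstart; on systemd hosts: systemctl stop ceph-osd@X
    ceph osd out osd.X             # mark the OSD out so data rebalances away from it
    ceph osd crush remove osd.X    # remove it from the CRUSH map
    ceph auth del osd.X            # delete its authentication key
    ceph osd rm osd.X              # remove the OSD from the cluster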

Ceph configuration parameters (ii)

Ceph configuration parameters (i). 6. KeyValueStore config reference: http://ceph.com/docs/master/rados/configuration/keyvaluestore-config-ref/ KeyValueStore is an alternative OSD backend compared to FileStore. Currently, it uses LevelDB as the backend. KeyValueStore doesn't need a journal device; each operation is flushed into the backend directly. The backend (LevelDB) used by KeyValueStore: keyvaluestore backend. (1) Queue maximum num
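
A hedged ceph.conf sketch of the options this reference covers; the option names follow the linked documentation page, and the values are placeholders rather than recommendations from the article:

    [osd]
    # switch the OSD object store from the default FileStore to KeyValueStore
    osd objectstore = keyvaluestore
    # backend used by KeyValueStore, per the reference above (LevelDB)
    keyvaluestore backend = leveldb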

Ceph Cache Tiering

The basic idea of cache tiering is the separation of hot and cold data: relatively fast/expensive storage devices such as SSDs form a pool used as the cache layer, while relatively slow/inexpensive devices at the back end form the cold-data storage pool. The Ceph cache tiering agent handles the automatic migration of data between the cache layer and the storage layer, transparently to clients. The cache layer has two typical mod
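
The two typical modes hinted at above are writeback and read-only; a hedged sketch of selecting a mode and tuning when data migrates back to the storage layer (hotpool is an example cache-pool name, not from the article):

    # mode 1: writeback, a read-write cache; dirty objects are flushed to the base pool
    ceph osd tier cache-mode hotpool writeback
    # mode 2: read-only cache (writes bypass the cache tier), shown for comparison
    # ceph osd tier cache-mode hotpool readonly
    # hit tracking and flush/evict thresholds
    ceph osd pool set hotpool hit_set_type bloom
    ceph osd pool set hotpool target_max_bytes 100000000000    # ~100 GB cache target (placeholder value)
    ceph osd pool set hotpool cache_target_dirty_ratio 0.4     # start flushing when 40% of the cache is dirty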

Ceph OSD Batch Creation

No time to write while on a business trip... I had to create 150 OSDs today and found writing ceph.conf by hand a bit much, so I looked into vim's increment feature. It is a single command: :let i=0 | g/reg/s//\=i/ | let i=i+1. It matches 'reg' in your text and then increments by your chosen step on each pass. The command above finds the string 'reg' in the text, replaces the first occurrence with 0, and then adds 1 for each subsequent match. So in ceph.conf we can first copy out 150 [osd.gggg] sections and then use the above co
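
As an alternative to the vim trick (not the article's method), the same 150 section headers can be generated with a shell loop; the ceph.conf path and the OSD id range are assumptions:

    for i in $(seq 0 149); do
        printf '[osd.%d]\n\n' "$i"
    done >> /etc/ceph/ceph.conf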

A temporary workaround for Kubernetes pods that cannot mount Ceph RBD storage volumes

This is a user-contributed article, and the information may have evolved or changed. Everything involving storage is prone to pitfalls, and Kubernetes is no exception. I. The cause of the problem: The problem began yesterday with the upgrade of a stateful service. The pods under the service mount a persistent volume provided by Ceph RBD. The pods are deployed with a normal Deployment and do not use the alpha-state PetSet. T

K8s uses Ceph for persistent storage

I. Overview: CephFS is a POSIX-compatible file system built on top of a Ceph cluster. When creating a CephFS file system, you must add the MDS service to the Ceph cluster; this service handles the metadata part of the POSIX file system, while the actual data is handled by the OSDs in the Ceph cluster. CephFS supports mounting via the kernel module
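
A minimal sketch of the two common ways to mount CephFS; the monitor address, user name, and secret-file path are placeholders:

    mkdir -p /mnt/cephfs
    # kernel-module mount, as referenced above
    mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # userspace alternative
    # ceph-fuse -m 192.168.0.10:6789 /mnt/cephfs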

Ceph file system installation

Ceph file system installation: install wget, download and install pip 1.5.6 from PyPI, generate SSH keys, and set the hostnames (ceph-admin, ceph-node1, ...); the command sequence is reproduced below.
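
The excerpt's command string, unpacked into readable form (the final echo is truncated in the source and left incomplete):

    yum install -y wget
    wget https://pypi.python.org/packages/source/p/pip/pip-1.5.6.tar.gz#md5=01026f87978932060cc86c1dc527903e
    tar zxvf pip-1.5.6.tar.gz
    cd pip-1.5.6
    python setup.py build
    python setup.py install
    ssh-keygen
    echo "ceph-admin" > /etc/hostname     # on the admin node
    # echo "ceph-node1" > /etc/hostname   # on the first ceph node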

RBD mounting steps for Ceph on Kubernetes

K8s cluster: install the Ceph client on each of the nodes above: ceph-deploy install <k8s node IP address>. Create a k8s operation user: ceph auth add client.k8s mon 'allow rwx' osd 'allow rwx'. Export the new user's key: ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring, and place the exported keyring under /etc/ceph/ on each k8s node. Check permissions: ceph auth list
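
A sketch of exporting and distributing the keyring as described above; the node names in the loop are placeholders:

    ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring    # export the new user's key
    for node in k8s-node1 k8s-node2; do                              # placeholder node names
        scp /etc/ceph/ceph.client.k8s.keyring root@"$node":/etc/ceph/
    done
    ceph auth list                                                   # verify the client.k8s capabilities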

Problems with Ceph Crush

I have read the Ceph CRUSH material over and over; the relevant chapters of the Ceph source-code analysis book are summarized as follows: 4.2.1 Hierarchical Cluster Map. Example 4-1: cluster map definition. The hierarchical cluster map defines the static topology of the OSD cluster in terms of hierarchical relationships. The OSD hierarchy enables the CRUSH algorithm to implement rack awareness (rack-awareness)
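
A small CLI sketch of building such a hierarchy so CRUSH becomes rack-aware; the bucket names are examples, not from the book excerpt:

    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket node1 host
    ceph osd crush move node1 rack=rack1     # place the host under the rack
    ceph osd crush move rack1 root=default   # place the rack under the default root
    ceph osd tree                            # inspect the resulting hierarchy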

Ceph configuration parameters (1)

Ceph configuration parameters (1). 1. Pool, PG and CRUSH config reference. Configuration section: [global]; format: osd pool default pg num = 250. Maximum PG count per storage pool: mon max pool pg num. Number of seconds to wait between creating PGs in the same Ceph OSD Daemon: mon pg create interval. Number of seconds after which a PG can be considered stuck: mon pg stuck threshold. Ceph OSD Daemon PG flag bits: osd pg bits. Ceph OSD Daemon PGP bits: os
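
An illustrative [global] fragment for the options named above; apart from the format example quoted from the excerpt, the values are placeholders, not recommendations from the article:

    [global]
    # format example from the excerpt
    osd pool default pg num = 250
    # placeholder values for the other options named above
    mon max pool pg num = 65536
    mon pg create interval = 30
    mon pg stuck threshold = 300
    osd pg bits = 6
    osd pgp bits = 6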

Deploy Ceph manually

1. Manually format each disk, e.g. /dev/sdb1 as the data partition and /dev/sdb2 as the journal partition. 2. Make an XFS file system on all of them (mkfs.xfs). 3. Modify the /etc/ceph/ceph.conf file with the [global] options reproduced below: authentication disabled, a default pool size of 2, CRUSH chooseleaf type 0, objecter in-flight limits, and FileStore queue/sync tuning (the excerpt cuts off at #osd op num t).
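
The excerpt's squashed [global] section, unpacked into readable ceph.conf form (the final option, cut off at "#osd op num t", is omitted):

    [global]
    auth supported = none
    osd pool default size = 2
    osd crush chooseleaf type = 0
    objecter_inflight_op_bytes = 4294967296
    objecter_inflight_ops = 1024
    #debug filestore = 100
    #debug osd = 10
    debug journal = 1
    filestore blackhole = false
    filestore queue max ops = 1024
    filestore queue max bytes = 1073741824
    filestore max sync interval = 5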

Ceph Paxos-related Code Analysis

Tp, ready to restart the election process after the timer has been triggered. The leader election in Ceph can be divided into three steps: the proposer makes a proposal and sends a propose message to all monitor nodes; each monitor node receives the message and accepts or rejects the propose; the proposer receives the ack messages and counts the number of supporters; if more than ha

VSM (Virtual Storage Manager for Ceph) installation tutorial

Reprinted; please credit the source: chenxianpao, http://www.cnblogs.com/chenxianpao/p/5770271.html. I. Installation environment: OS: CentOS 7.2; VSM: v2.1 released. II. Installation notes: The VSM system has two roles: the vsm-controller and the vsm-agent. The vsm-agent is deployed on the Ceph nodes, and the vsm-controller is deployed on a separate node. The vsm-controller should also be deployed on Ceph nodes wi

Ceph Simple Operations

In the previous article we introduced using ceph-deploy to deploy the Ceph cluster. Next we briefly introduce basic Ceph operations. Block device usage (RBD): A. Create a user ID and a keyring: ceph auth get-or-create client.node01 osd 'allow *' mon 'allow *' > node01.keyring. B. Copy the keyring to node01: scp node01.keyring [email protected]:/root
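
The same two commands in runnable form; the scp destination is redacted in the source ("[email protected]"), so a placeholder variable stands in for it:

    NODE01=node01.example.com    # placeholder address, not from the article
    ceph auth get-or-create client.node01 osd 'allow *' mon 'allow *' > node01.keyring
    scp node01.keyring root@"$NODE01":/root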
