ceph cluster

Alibabacloud.com offers a wide variety of articles about Ceph clusters; you can easily find your Ceph cluster information here online.


RBD mounting steps for Kubernetes ceph

K8s cluster: install the Ceph client on each of the above nodes: ceph-deploy install <k8s node IP>. Create a k8s user for access: ceph auth add client.k8s mon 'allow rwx' osd 'allow rwx'. Export the newly created user's key (ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring) and place the exported key under /etc/
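A hedged sketch of the user-creation steps described in the snippet (the mon/osd capabilities are taken verbatim from it; restricting the osd cap to a single pool, e.g. `pool=rbd`, is a common hardening step but an assumption here):

```shell
# Create the client.k8s user with the capabilities from the snippet
ceph auth add client.k8s mon 'allow rwx' osd 'allow rwx'

# Export the new user's keyring for the Kubernetes nodes
ceph auth get client.k8s -o /etc/ceph/ceph.client.k8s.keyring

# Print just the key, e.g. to embed in a Kubernetes Secret
ceph auth get-key client.k8s
```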

CEPH Cache tiering

The client writes directly to the back-end cold-data pool; on reads, Ceph promotes the data from the back end into the cache tier. This model is suitable for immutable data, such as photos and videos on Weibo, DNA data, X-ray images, and so on. The CRUSH algorithm is the core of a Ceph cluster; building on a deep understanding of CRUSH, the high performance of SSDs can be exploited
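The read-only tiering mode the snippet describes can be set up roughly as follows (the pool names `cold-storage` and `hot-cache` are placeholders for your own backing pool and SSD-backed cache pool):

```shell
# Attach the cache pool to the backing (cold) pool
ceph osd tier add cold-storage hot-cache

# readonly mode: writes go straight to the backing pool,
# reads are served and promoted via the cache tier
ceph osd tier cache-mode hot-cache readonly

# Direct client traffic at the cache tier
ceph osd tier set-overlay cold-storage hot-cache
```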

Talking about Ceph Erasure code

Directory: Chapter 1, Introduction (1.1 Document description; 1.2 Reference documents); Chapter 2, Concepts and principles of erasure codes (2.1 Concepts; 2.2 Principles); Chapter 3, Introduction to Ceph erasure code (3.1 Ceph erasure code usage; 3.2 Ceph erasure code library; 3.3 Ceph erasure code data storage; 3.3.1 Encoding block read
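As a minimal sketch of putting erasure coding to use (the profile name `myprofile`, the pool name `ecpool`, the PG count, and the k=4/m=2 split are all illustrative choices, not values from the article):

```shell
# Define an erasure-code profile: k=4 data chunks, m=2 coding chunks,
# so any 2 OSDs can fail without data loss
ceph osd erasure-code-profile set myprofile k=4 m=2

# Create an erasure-coded pool (64 placement groups) using that profile
ceph osd pool create ecpool 64 64 erasure myprofile
```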

Ceph Source code parsing: PG Peering

A device exception in the cluster (the addition or removal of an abnormal OSD) can leave the replicas of a PG inconsistent, so data recovery is required to bring all replicas back to a consistent state. First, OSD faults and how they are handled. 1. Types of OSD failure: Fault A: a normal OSD stops working because its device is abnormal; once the OSD has been down longer than the configured time it will be
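A few standard commands for diagnosing the failure and peering states discussed above (the PG id `0.1a` is a placeholder; use an id reported by your own cluster):

```shell
# Which OSDs are up/down, and where they sit in the CRUSH hierarchy
ceph osd tree

# PGs that are degraded, peering, or inconsistent
ceph health detail
ceph pg dump_stuck

# Inspect the peering state and recovery info of one placement group
ceph pg 0.1a query
```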

Kubernetes pod cannot mount a temporary workaround for Ceph RBD storage volumes

attach/mount for pod "index-api-3362878852-pzxm8"/"default". List of unattached/unmounted volumes=[index-api-pv]; skipping pod E0216 13:59:32.696223 1159 pod_workers.go:183] Error syncing pod 7e6c415a-f40c-11e6-ad11-00163e1625a9, skipping: timeout expired waiting for volumes to attach/mount for pod "index-api-3362878852-pzxm8"/"default". List of unattached/unmounted volumes=[index-api-pv] ... From the kubelet log we can see that the index-api pod on node 10.46.181.146 is unable to mount

Ceph: mix SATA and SSD within the same box

The use case is simple: I want to use both SSD disks and SATA disks within the same machine, and ultimately create pools pointing to either the SSD or the SATA disks. In order to achieve our goal, we need to modify the CRUSH map. My example has 2 SATA disks and 2 SSD disks on each host, and I have 3 hosts in total. To illustrate, please refer to the following picture. I. CRUSH map. CRUSH is very flexible and topology-aware, which is extremely useful in our scenario. We are about to create two different roots
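The standard edit cycle for the CRUSH map change described above looks roughly like this (the file names are arbitrary; the actual `root ssd` / `root sata` hierarchies and rules are written by hand in the decompiled text file):

```shell
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt: add separate "root ssd" and "root sata"
# hierarchies, plus one rule selecting each root

# Recompile and inject the modified map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```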

Ceph Paxos Related Code parsing

Leader election: in Ceph, leader election is a Paxos Lease process, whose purpose differs from Basic Paxos. The latter is used to resolve data-consistency issues, while Paxos Lease elects a leader to carry out monmap synchronization tasks, and to choose a new leader after the current one goes offline. A Ceph cluster has only one monitor acting as leader, namely the one with the lowest rank among all current monitors. The election process produces a leader and

Build a ceph Deb installation package

First, compile the Ceph package. 1.1 Clone the Ceph code and switch branches: git clone --recursive https://github.com/ceph/ceph.git; cd ceph; git checkout -f v0.94.3 (note: --recursive clones the submodules as well). 1.2 Install the dependency packages: ./install-deps.sh; ./autogen.sh. 1.3 Pre-compilation configuration.

Ceph installation in CentOS (rpm package depends on installation)

Ceph installation in CentOS (rpm package with dependencies). CentOS is a community version of Red Hat Enterprise Linux and fully supports rpm installation; this Ceph installation uses rpm packages. However, although rpm packages are easy to install, they have many dependencies. The yum tool makes installation convenient, but it is heavily affected by network bandwidth.

VSM (Virtual Storage Manager for Ceph) installation tutorial

. Enter file in which to save the key (/home/cephuser/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: [email protected]:~$ ssh-copy-id vsm-node1 [email protected]:~$ ssh-copy-id vsm-node2 [email protected]:~$ ssh-copy-id vsm-node3 9. After completion, execute ./install.sh -u root -v 2.2. The installation process downloads the dependency packages on the controller node before copying them to the agent nodes

Ceph simple operation

In the previous article, we introduced using ceph-deploy to deploy a Ceph cluster. Next we will briefly introduce Ceph operations. Block device usage (RBD). A. Create a user ID and a keyring: ceph auth get-or-create client.node01 osd 'allow *' mon 'allow *' > node01.keyring. B. Copy the keyring to node
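Continuing the RBD steps above as a hedged sketch (the image name `node01-disk` and the 1 GiB size are illustrative; the auth command is the one from the snippet):

```shell
# A. Create the restricted user and keyring, as in the snippet
ceph auth get-or-create client.node01 osd 'allow *' mon 'allow *' > node01.keyring

# Create a 1 GiB block device image (size is in MiB)
rbd create node01-disk --size 1024

# Map it on the client node using the new identity
rbd map node01-disk --id node01 --keyring node01.keyring
```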

Ceph installation in CentOS (rpm package depends on installation)

CentOS is the community version of Red Hat Enterprise Linux and fully supports rpm installation; this Ceph installation uses rpm packages. However, although rpm packages are easy to install, they have many dependencies. The yum tool makes installation convenient, but it is heavily affected by network bandwidth. In practice, if the bandwidth is poor, it takes a long time to download and install, which is unacce

Ceph Source Code Analysis-keyvaluestore

KeyValueStore is another storage engine supported by Ceph (the first is FileStore). It goes back to the "Add LevelDB support to Ceph cluster backend store" design summit blueprint in the Emperor release; I proposed and implemented the prototype system, and achieved integration with ObjectStore in the Firefly version. It has now been merged into Ceph's master branch. KeyValueStore is a

OSD Error after ceph reboot

Here you will encounter an error, because the Jewel release of Ceph requires the journal to have ceph:ceph ownership. The error is as follows: journalctl -xeu ceph-osd@9.service ... 09:54:05 k8s-master ceph-osd[2848]: starting osd.9 at :/0 osd_data /var/lib/ceph/osd/ceph-9 /var/lib/ce
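The usual fix for this permissions error is to hand the OSD's data directory and journal to the `ceph` user (the journal partition `/dev/sdb1` below is an assumption; substitute the device your journal actually lives on):

```shell
# Give the OSD data directory (and the journal inside it) ceph:ceph ownership
chown -R ceph:ceph /var/lib/ceph/osd/ceph-9

# If the journal lives on a separate partition, fix that device node too
chown ceph:ceph /dev/sdb1

# Restart the affected OSD
systemctl restart ceph-osd@9.service
```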

Ceph RPM for RHEL6

ceph-0.86-0.el6.x86_64.rpm         09-Oct-2014 10:00  13M
ceph-0.87-0.el6.x86_64.rpm         29-Oct-2014 13:38  13M
ceph-common-0.86-0.el6.x86_64.rpm  09-Oct-2014 10:00  5.4M
ceph-common-0.87-0.el6.x86_64.rpm  29-Oct-2014 13:38  5.4M

The combination of Nova and Ceph

First, combining Nova and Ceph. 1. Create a storage pool in Ceph: [[email protected]_10_1_2_230 ~]# ceph osd pool create vms 128  # create a pool named vms with 128 PGs -- pool 'vms' created. [[email protected]_10_1_2_230 ~]# ceph osd lspools  # check the created pools: 0 rbd, 1 images, 2 vms. [[email protected]_10_1_2_230 ~]#
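To actually point Nova at that pool, the compute node's nova.conf needs its libvirt section told to back ephemeral disks with RBD. A hedged sketch (the `rbd_user` name and secret UUID are placeholders that vary by deployment):

```shell
# Ceph side: the pool for Nova ephemeral disks, as created above
ceph osd pool create vms 128

# nova.conf on the compute node, [libvirt] section -- a sketch:
#   images_type     = rbd
#   images_rbd_pool = vms
#   rbd_user        = cinder                  # placeholder user name
#   rbd_secret_uuid = <libvirt secret uuid>   # placeholder
```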

The difference between ceph weight and reweight

Using the ceph osd tree command to view the Ceph cluster, you will find two values: weight and reweight. The weight is related to disk capacity: generally a 1 TB disk has a value of 1.000 and a 500 GB disk about 0.5. It depends on the disk's total capacity and does not change as the disk's available space shrinks. It can be set with the following command: ceph
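The capacity rule above is just TiB arithmetic, and the two values are set by two different commands. A small sketch (osd.0 and the example values are placeholders):

```shell
# CRUSH weight defaults to the disk's capacity in TiB:
# a 1 TiB disk gets 1.000; a 500 GiB disk gets ~0.488 (often shown as 0.5)
disk_gib=500
weight=$(awk "BEGIN { printf \"%.3f\", $disk_gib/1024 }")
echo "$weight"    # 0.488

# change the capacity-based weight:
#   ceph osd crush reweight osd.0 1.000
# reweight is a separate 0..1 override used to shift load off an OSD:
#   ceph osd reweight osd.0 0.8
```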

Some ideas about Ceph tiering

The Ceph experiment environment has been used within the company for a period of time. Using the block devices provided by RBD to create virtual machines and allocate blocks to them has been stable. However, most of the current environment uses Ceph's default configuration, except that the journal has been separated and written to its own partition. Later, we plan to use

Centos7.1 manual installation of ceph

CentOS 7.1 manual installation of Ceph. 1. Prepare the environment: one CentOS 7.1 host. Update the yum source: [root@cgsl ]# yum -y update. 2. Install the key and add it to your system's trusted key list to eliminate security warnings: [root@cgsl ]# sudo rpm --import 'https://download.ceph.com/keys/release.asc'. 3. To obtain the RPM binary packages, you need to add a Ceph repository in the /etc/yum.repos.d/ directory: Cr
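A repository file of the kind step 3 calls for might look like this (the el7 baseurl is an assumption; pick the path matching your distro release and architecture):

```shell
# Create /etc/yum.repos.d/ceph.repo pointing at the upstream packages
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
```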

Redhat's Ceph and Inktank code libraries were hacked

Redhat's Ceph and Inktank code libraries were hacked. RedHat says that the Ceph community project and the Inktank download website were hacked last week and some code may have been compromised. Last week, RedHat suffered a very unpleasant incident: both the Ceph community website and the Inktank download website were hacked. The former is the open-source
