Installing Ceph on CentOS 7


1. Installation Environment


Admin ----+---- node1 (MON, OSD)  sda is the system disk; sdb and sdc are OSD disks
          +---- node2 (MON, OSD)  sda is the system disk; sdb and sdc are OSD disks
          +---- node3 (MON, OSD)  sda is the system disk; sdb and sdc are OSD disks
          +---- Client


Ceph monitors communicate on port 6789 by default, and OSD daemons communicate on ports in the range 6800-7300 by default.
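If you prefer to keep firewalld running instead of disabling it in section 2.2 below, a minimal sketch of opening these default ports (assuming the public zone):

firewall-cmd --permanent --zone=public --add-port=6789/tcp
firewall-cmd --permanent --zone=public --add-port=6800-7300/tcp
firewall-cmd --reload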


2. Preparation work (all nodes)

2.1. Modify the IP address

vim /etc/sysconfig/network-scripts/ifcfg-em1

IPADDR=192.168.130.205
NETMASK=255.255.255.0
GATEWAY=192.168.130.2
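
After saving, restart networking so the new address takes effect (assuming em1 is managed by the classic network service on CentOS 7):

systemctl restart network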


2.2. Turn off the firewall

systemctl stop firewalld.service       # stop the firewall
systemctl disable firewalld.service    # prevent the firewall from starting at boot
firewall-cmd --state                   # check the firewall state


2.3. Modify the Yum source

cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bk
wget http://mirrors.163.com/.help/CentOS7-Base-163.repo
yum makecache


2.4. Set the time zone and enable NTP

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpstat


2.5. Modify the hosts file

vim /etc/hosts

192.168.130.205 admin
192.168.130.204 client
192.168.130.203 node3
192.168.130.202 node2
192.168.130.201 node1
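
A quick, optional check (not in the original) that each name resolves and responds from the admin node:

for h in node1 node2 node3 client; do ping -c 1 $h; done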


2.6. Install the EPEL repository, add the Ceph yum repository, and update the package cache

Install the EPEL repository:

rpm -ivh http://mirrors.sohu.com/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm

Add the Ceph yum repository:

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

2.7. Install ceph-deploy and ceph (install ceph on all nodes; install ceph-deploy on the admin node only)

yum -y update && yum -y install --release hammer ceph ceph-deploy
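
To confirm each node picked up the Hammer packages, a simple check:

ceph --version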

3. Allow password-free SSH login (admin node)

3.1. Generate an SSH key pair. When prompted with "Enter passphrase", just press Enter to leave the passphrase empty:

ssh-keygen


3.2. Copy the public key to all nodes

ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
ssh-copy-id client


3.3. Verify that you can log in via SSH without a password

ssh node1
ssh node2
ssh node3
ssh client


4. Create monitors (admin node)

4.1. Create monitors on node1, node2, and node3

mkdir myceph
cd myceph
ceph-deploy new node1 node2 node3


4.2. Set the OSD replica count by appending "osd pool default size = 2" to the end of the configuration file

vim ceph.conf    # the ceph.conf generated by ceph-deploy in the myceph directory

osd pool default size = 2
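
For reference, after this edit the generated ceph.conf should look roughly like the sketch below; the fsid is a placeholder for the UUID ceph-deploy generates, and the monitor addresses follow the hosts table in section 2.5:

[global]
fsid = {generated-uuid}
mon_initial_members = node1, node2, node3
mon_host = 192.168.130.201,192.168.130.202,192.168.130.203
osd pool default size = 2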


4.3. Configure the initial monitor(s), and collect all keys

ceph-deploy mon create-initial
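
On success, the client.admin keyring and the bootstrap keyrings are collected into the myceph working directory; a quick way to confirm:

ls -l *.keyring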


5. Create OSDs (admin node)

5.1. Enumerate the disks

ceph-deploy disk list node1
ceph-deploy disk list node2
ceph-deploy disk list node3


5.2. Clean (zap) the disks

ceph-deploy disk zap node1:sdb
ceph-deploy disk zap node1:sdc
ceph-deploy disk zap node2:sdb
ceph-deploy disk zap node2:sdc
ceph-deploy disk zap node3:sdb
ceph-deploy disk zap node3:sdc


5.3. Prepare and activate the OSDs

ceph-deploy osd prepare node1:sdb
ceph-deploy osd prepare node1:sdc
ceph-deploy osd prepare node2:sdb
ceph-deploy osd prepare node2:sdc
ceph-deploy osd prepare node3:sdb
ceph-deploy osd prepare node3:sdc

Activate the OSDs (prepare creates data partition 1 on each disk, hence sdb1 and sdc1):

ceph-deploy osd activate node1:sdb1
ceph-deploy osd activate node1:sdc1
ceph-deploy osd activate node2:sdb1
ceph-deploy osd activate node2:sdc1
ceph-deploy osd activate node3:sdb1
ceph-deploy osd activate node3:sdc1
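
As an aside, ceph-deploy can combine the prepare and activate steps into a single command; a sketch for the first disk:

ceph-deploy osd create node1:sdb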

5.4. Remove an OSD (example: osd.3)

ceph osd out osd.3
ssh node1 service ceph stop osd.3
ceph osd crush remove osd.3
ceph auth del osd.3    # remove the OSD from authentication
ceph osd rm 3          # delete the OSD
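
To verify the OSD is gone from the CRUSH map, list the OSD tree afterwards:

ceph osd tree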

5.5. Copy the configuration file and admin key to each node so that you do not need to specify the monitor address and ceph.client.admin.keyring each time you execute the Ceph command line

ceph-deploy admin admin node1 node2 node3

5.6. View the cluster health status

ceph health
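
A healthy cluster reports HEALTH_OK. For a fuller summary (monitor quorum, OSD count, placement group states), use:

ceph -s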


6. Configure the block device (client node)

6.1. Create an image

rbd create foo --size 4096 [-m {mon-ip}] [-k /path/to/ceph.client.admin.keyring]
rbd create foo --size 4096 -m node1 -k /etc/ceph/ceph.client.admin.keyring


6.2. Map the image to a block device

sudo rbd map foo --name client.admin [-m {mon-ip}] [-k /path/to/ceph.client.admin.keyring]
sudo rbd map foo --name client.admin -m node1 -k /etc/ceph/ceph.client.admin.keyring


6.3. Create a file system

sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo


6.4. Mount the file system

sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
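
A hypothetical smoke test (not in the original) to confirm the mounted device is writable:

echo "hello ceph" | sudo tee /mnt/ceph-block-device/test.txt
cat /mnt/ceph-block-device/test.txt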





Storage pool operations

1. List storage pools

ceph osd lspools

2. Create a storage pool

ceph osd pool create {pool-name} {pg-num} {pgp-num}
ceph osd pool create test 512 512
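
The choice of 512 placement groups is consistent with the common rule of thumb of roughly (number of OSDs × 100) / replica count, rounded up to a power of two: with the six OSDs and two replicas built above, (6 × 100) / 2 = 300, which rounds up to 512.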

3. Delete a storage pool

ceph osd pool delete test test --yes-i-really-really-mean-it

4. Rename a storage pool

ceph osd pool rename {current-pool-name} {new-pool-name}
ceph osd pool rename test test2

5. View storage pool statistics

rados df

6. Set a storage pool option value

ceph osd pool set test size 3    # set the number of object replicas

7. Get a storage pool option value

ceph osd pool get test size      # get the number of object replicas
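
Other options follow the same pattern; for example, reading back the placement group count of the test pool created above:

ceph osd pool get test pg_num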



Block device image operations

1. Create a block device image

rbd create --size {megabytes} {pool-name}/{image-name}
rbd create --size 1024 test/foo

2. List block device images

rbd ls

3. Retrieve image information

rbd info {image-name}
rbd info foo
rbd info {pool-name}/{image-name}
rbd info test/foo

4. Resize a block device image

rbd resize --size {megabytes} test/foo --allow-shrink    # shrink the image (requires --allow-shrink)
rbd resize --size 4096 test/foo                          # grow the image

5. Remove a block device image

rbd rm test/foo



Kernel module operations

1. Map a block device

sudo rbd map {pool-name}/{image-name} --id {user-name}
sudo rbd map test/foo2 --id admin

If cephx authentication is enabled, you also need to specify the keyring:

sudo rbd map test/foo2 --id admin --keyring /etc/ceph/ceph.client.admin.keyring

2. View mapped devices

rbd showmapped

3. Unmap a block device

sudo rbd unmap /dev/rbd/{poolname}/{imagename}
sudo rbd unmap /dev/rbd/test/foo2
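
To make a mapping persist across reboots, the rbdmap service shipped with Ceph can be used; a sketch, assuming your Ceph version provides the rbdmap service and reusing the admin keyring path from above. Add a line to /etc/ceph/rbdmap:

test/foo2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

then enable the service:

systemctl enable rbdmap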





This article is from the "Open Source Hall" blog; please keep the original source: http://kaiyuandiantang.blog.51cto.com/10699754/1784429
