Installing Ceph on CentOS 7


1. Installation environment

        | ----- Node1 (mon, osd)   sda is the system disk; sdb and sdc are the OSD disks.
        |
        | ----- Node2 (mon, osd)   sda is the system disk; sdb and sdc are the OSD disks.
Admin --|
        | ----- Node3 (mon, osd)   sda is the system disk; sdb and sdc are the OSD disks.
        |
        | ----- Client

By default, Ceph Monitors communicate on port 6789, and OSDs use ports in the range 6800-7300.
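
If you would rather keep firewalld running than disable it in step 2.2 below, a minimal sketch of opening only these ports (assuming the default public zone):

firewall-cmd --zone=public --add-port=6789/tcp --permanent        # monitor port
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # OSD port range
firewall-cmd --reload                                             # apply the permanent rules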

2. Preparations (all nodes)

2.1 Configure the IP address

vim /etc/sysconfig/network-scripts/ifcfg-em1

IPADDR=192.168.130.205
NETMASK=255.255.255.0
GATEWAY=192.168.130.2

2.2 Disable the firewall

systemctl stop firewalld.service      # stop the firewall
systemctl disable firewalld.service   # keep the firewall from starting at boot
firewall-cmd --state                  # check the firewall status

2.3 Replace the yum source

cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bk
wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
yum makecache

2.4 Set the time zone and enable NTP

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum -y install ntp
systemctl enable ntpd
systemctl start ntpd
ntpstat   # verify NTP synchronization

2.5 Edit the hosts file

vim /etc/hosts

192.168.130.205 admin
192.168.130.204 client
192.168.130.203 node3
192.168.130.202 node2
192.168.130.201 node1

2.6 Install the EPEL repository, add the Ceph yum repository, and refresh the package cache

Install the EPEL repository:

rpm -ivh http://mirrors.sohu.com/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm

Add the Ceph yum repository:

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

2.7 Install ceph-deploy and ceph (ceph is installed on all nodes; ceph-deploy is installed only on the admin node)

yum -y update && yum -y install ceph ceph-deploy   # the hammer release is pinned by the ceph.repo file above
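
To confirm the installation succeeded, you can check the reported versions (a sketch; the exact version strings depend on the package builds pulled from the mirror):

ceph --version          # hammer builds report 0.94.x
ceph-deploy --version   # admin node only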

3. Enable passwordless SSH login (admin node)

3.1 Generate an SSH key pair; press Enter at the "Enter passphrase" prompt to leave the passphrase empty:

ssh-keygen
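
If you are scripting this step, ssh-keygen can also be run non-interactively; a sketch assuming the default RSA key path:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # -N "" sets an empty passphrase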

3.2 Copy the public key to all nodes

ssh-copy-id root@node1
ssh-copy-id root@node2
ssh-copy-id root@node3
ssh-copy-id root@client

3.3 Verify that you can log in to each node over SSH without a password

ssh node1
ssh node2
ssh node3
ssh client
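
If you ever run ceph-deploy as a different user, the Ceph preflight documentation suggests adding a ~/.ssh/config entry per node so that ceph-deploy logs in without an explicit username; a minimal sketch for the root user assumed in this guide:

Host node1
    User root
Host node2
    User root
Host node3
    User root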

4. Create the monitors (admin node)

4.1 Create monitors on node1, node2, and node3

mkdir myceph
cd myceph
ceph-deploy new node1 node2 node3

4.2 Set the OSD replica count by appending "osd pool default size = 2" to the end of the configuration file

vim ceph.conf   # the ceph.conf generated by ceph-deploy in the myceph working directory

osd pool default size = 2
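
After the edit, the [global] section of the generated ceph.conf should look roughly like the following; the fsid is generated by ceph-deploy and will differ, so it is shown as a placeholder:

[global]
fsid = {your-fsid}
mon_initial_members = node1, node2, node3
mon_host = 192.168.130.201,192.168.130.202,192.168.130.203
osd pool default size = 2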

4.3 Deploy the initial monitor(s) and collect the keys

ceph-deploy mon create-initial
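
If this step succeeds, ceph-deploy drops the collected keyrings into the working directory; you can verify with:

ls -l *.keyring   # e.g. ceph.client.admin.keyring, ceph.bootstrap-osd.keyring, ...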

5. Create OSDs (admin node)

5.1 List disks

ceph-deploy disk list node1
ceph-deploy disk list node2

5.2 Zap (clean) the disks

ceph-deploy disk zap node1:sdb
ceph-deploy disk zap node1:sdc
ceph-deploy disk zap node2:sdb
ceph-deploy disk zap node2:sdc
ceph-deploy disk zap node3:sdb
ceph-deploy disk zap node3:sdc

5.3 Prepare and activate the OSDs

ceph-deploy osd prepare node1:sdb
ceph-deploy osd prepare node1:sdc
ceph-deploy osd prepare node2:sdb
ceph-deploy osd prepare node2:sdc
ceph-deploy osd prepare node3:sdb
ceph-deploy osd prepare node3:sdc

ceph-deploy osd activate node1:sdb1
ceph-deploy osd activate node1:sdc1
ceph-deploy osd activate node2:sdb1
ceph-deploy osd activate node2:sdc1
ceph-deploy osd activate node3:sdb1
ceph-deploy osd activate node3:sdc1
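
Once activation completes, the OSDs should register with the cluster; a quick sanity check:

ceph osd tree   # expect osd.0 through osd.5 with status "up"
ceph -s         # overall cluster status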

5.4 Remove an OSD (using osd.3 as an example)

ceph osd out osd.3
ssh node1 service ceph stop osd.3
ceph osd crush remove osd.3
ceph auth del osd.3   # remove the OSD from authentication
ceph osd rm 3         # remove the OSD from the cluster
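
You can confirm the removal afterwards:

ceph osd stat   # the OSD count should have dropped by one
ceph osd tree   # osd.3 should no longer be listed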

5.5 Copy the configuration file and admin key to each node so that you do not need to specify the monitor address and ceph.client.admin.keyring every time you run a ceph command

ceph-deploy admin node1 node2 node3
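
On some systems the pushed keyring is not readable by unprivileged users, so plain ceph commands fail with a permission error; the upstream quick-start suggests relaxing the mode on each node:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring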

5.6 Check the cluster health

ceph health

6. Configure a block device (client node)

6.1 Create an image

rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
rbd create foo --size 4096 -m node1 -k /etc/ceph/ceph.client.admin.keyring

6.2 Map the image to a block device

sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
sudo rbd map foo --name client.admin -m node1 -k /etc/ceph/ceph.client.admin.keyring

6.3 Create a file system

sudo mkfs.ext4 -m0 /dev/rbd/foo

6.4 Mount the file system

sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
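
To confirm the RBD-backed file system is mounted with the expected capacity:

df -h /mnt/ceph-block-device   # should show an ext4 file system of roughly 4 GB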

Storage pool operations

1. List storage pools

ceph osd lspools

2. Create a storage pool

ceph osd pool create {pool-name} {pg-num} {pgp-num}
ceph osd pool create test 512 512
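
The pg-num of 512 above follows a common rule of thumb: (number of OSDs x 100) / replica count, rounded up to the next power of two.

# (6 OSDs x 100) / 2 replicas = 300  ->  next power of two = 512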

3. Delete a storage pool

ceph osd pool delete test test --yes-i-really-really-mean-it   # the pool name must be given twice to confirm

4. Rename a storage pool

ceph osd pool rename {current-pool-name} {new-pool-name}
ceph osd pool rename test test2

5. View storage pool statistics

rados df

6. Set a storage pool option

ceph osd pool set test size 3   # set the number of object replicas

7. Get a storage pool option

ceph osd pool get test size   # get the number of object replicas

Block device image operations

1. Create a block device image

rbd create --size {megabytes} {pool-name}/{image-name}
rbd create --size 1024 test/foo

2. List block device images

rbd ls

3. Retrieve image information

rbd info {image-name}
rbd info foo

rbd info {pool-name}/{image-name}
rbd info test/foo

4. Resize a block device image

rbd resize --size 512 test/foo --allow-shrink   # shrink the image
rbd resize --size 4096 test/foo                 # grow the image

5. Delete a block device image

rbd rm test/foo

Kernel module operations

1. Map a block device

sudo rbd map {pool-name}/{image-name} --id {user-name}
sudo rbd map test/foo2 --id admin

If cephx authentication is enabled, you must also specify the keyring:

sudo rbd map test/foo2 --id admin --keyring /etc/ceph/ceph.client.admin.keyring

2. View mapped devices

rbd showmapped

3. Unmap a block device

sudo rbd unmap /dev/rbd/{pool-name}/{image-name}
sudo rbd unmap /dev/rbd/test/foo2
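
Mappings created with rbd map do not survive a reboot. If you need the device back after restarting, the rbdmap script shipped with Ceph can recreate mappings at boot; a minimal sketch, assuming the admin keyring path used above:

# /etc/ceph/rbdmap -- one "{pool}/{image} options" entry per line
test/foo2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

sudo systemctl enable rbdmap   # on CentOS 7, systemd runs the packaged rbdmap service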
