Install Ceph on CentOS 6.5

I. Introduction

Ceph is a petabyte-scale distributed file system for Linux.

II. Experiment Environment

Node     IP           Hostname      System version
mon      10.57.1.110  ceph-mon0     CentOS 6.5 x64
mds      10.57.1.110  ceph-mds0     CentOS 6.5 x64
osd0     10.57.1.111  ceph-osd0     CentOS 6.5 x64
osd1     10.57.1.111  ceph-osd1     CentOS 6.5 x64
client0  10.57.1.112  ceph-client0  CentOS 7.0 x64

III. Installation Steps
1. Establish SSH mutual trust between the lab machines
Generate a key:
ssh-keygen -t rsa -P ''
ssh-keygen -t rsa -f .ssh/id_rsa -P ''
Copy it to the host being authorized:
ssh-copy-id -i .ssh/id_rsa.pub root@<target host>
Configure time synchronization on all hosts (not detailed in the original; a minimal sketch follows).
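Time synchronization matters here because the monitor enforces a clock drift limit (mon clock drift allowed = .15 in the configuration below). A minimal sketch for CentOS 6, assuming outbound access to pool.ntp.org (substitute your own NTP server):

yum install ntp -y
ntpdate pool.ntp.org    # initial one-shot sync
chkconfig ntpd on       # keep ntpd running across reboots
service ntpd start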
2. Add hosts resolution for each machine
Echo '10. 57.1.110 ceph-mon0'>/etc/hosts
Echo '10. 57.1.110 ceph-mds0'>/etc/hosts
Echo '10. 57.1.111 ceph-osd0 '>/etc/hosts
Echo '10. 57.1.111 ceph-osd1 '>/etc/hosts
Echo '10. 57.1.112 ceph-client0 '>/etc/hosts
3. Synchronize the file to /etc/ on every node (applies to mon, MDS, OSD, and client), as sketched below.
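The original gives no command for this step; a minimal sketch, assuming the root SSH trust from step 1 and the hostnames above (the loop is my addition):

for node in ceph-mds0 ceph-osd0 ceph-osd1 ceph-client0; do scp /etc/hosts root@$node:/etc/hosts; done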

4. Update yum and install the required dependencies (applies to mon, MDS, and OSD)
rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
rpm -Uvh http://mirrors.yun-idc.com/epel/6/i386/epel-release-6-8.noarch.rpm
yum install snappy leveldb gdisk python-argparse gperftools-libs -y
rpm -Uvh http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm
yum install ceph-deploy python-pushy -y
yum install ceph -y
yum install btrfs-progs -y    (applies to all OSD nodes)
5. Verify the version
The version installed here is ceph 0.67.9 (dumpling).
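To confirm what was installed, query the binary directly:

ceph -v    # prints the installed version, e.g. ceph version 0.67.9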
6. Configure /etc/ceph/ceph.conf (applies to mon, MDS, and OSD)
[global]
    public network = 10.57.1.0/24
    pid file = /var/run/ceph/$name.pid
    auth cluster required = none
    auth service required = none
    auth client required = none
    keyring = /etc/ceph/keyring.$name
    osd pool default size = 1
    osd pool default min size = 1
    osd pool default crush rule = 0
    osd crush chooseleaf type = 1
[mon]
    mon data = /var/lib/ceph/mon/$name
    mon clock drift allowed = .15
    keyring = /etc/ceph/keyring.$name
[mon.0]
    host = ceph-mon0
    mon addr = 10.57.1.110:6789
[mds]
    keyring = /etc/ceph/keyring.$name
[mds.0]
    host = ceph-mds0
[osd]
    osd data = /mnt/osd$id
    osd recovery max active = 5
    osd mkfs type = xfs
    osd journal = /mnt/osd$id/journal
    osd journal size = 1000
    keyring = /etc/ceph/keyring.$name
[osd.0]
    host = ceph-osd0
    devs = /dev/vdb1
[osd.1]
    host = ceph-osd1
    devs = /dev/vdb2
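The original does not show disk preparation; mkcephfs can build the filesystems on the devs listed above, but if you prepare a disk by hand, a minimal sketch (my assumption, run on the matching OSD host) looks like this:

mkfs.xfs -f /dev/vdb1      # matches 'osd mkfs type = xfs' above
mkdir -p /mnt/osd0         # matches 'osd data = /mnt/osd$id'
mount /dev/vdb1 /mnt/osd0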
7. Start Ceph (executed on the mon node)
Initialization: mkcephfs -a -c /etc/ceph/ceph.conf
/etc/init.d/ceph -a start
8. Perform a health check
ceph health    # you can also use ceph -s to view the status
If HEALTH_OK is returned, the cluster is healthy.
9. Mount Ceph (on the client)
ceph-fuse -m 10.57.1.110:6789 /mnt/
If output like the following appears, the mount succeeded:
ceph-fuse[2010]: starting ceph client
ceph-fuse[2010]: starting fuse
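As a further check (my addition), df should now list the FUSE filesystem on the mount point:

df -h /mnt/    # the Filesystem column should show ceph-fuse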
10. Advanced
10.1. Add OSDs
Two disks, /dev/vdb1 and /dev/vdb2, are added on 10.57.1.110.
Add the following to the configuration file:
[osd.2]
    host = ceph-osd2
    devs = /dev/vdb1
[osd.3]
    host = ceph-osd3
    devs = /dev/vdb2

Manually copy the configuration file to the other machines so all nodes have the same configuration:
cd /etc/ceph; scp keyring.client.admin ceph.conf 10.57.1.111:/etc/ceph/

The following operations are performed on the newly added OSD node:
Initialize the new OSDs; run this on the machine hosting them (here, 10.2.180.180):
ceph-osd -i 2 --mkfs --mkkey; ceph-osd -i 3 --mkfs --mkkey
Register the nodes:
ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.2
ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.3
ceph osd create    # allocate a cluster id for osd.2
ceph osd create    # allocate a cluster id for osd.3
/etc/init.d/ceph -a start osd.2    # start osd.2
/etc/init.d/ceph -a start osd.3    # start osd.3

ceph -s    # view cluster status
ceph auth list    # view all authenticated entities
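To confirm the new OSDs actually joined the cluster map, the OSD tree can also be inspected (a check I'm adding, not from the original):

ceph osd tree    # osd.2 and osd.3 should appear up and in under their hosts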

10.2. Add an MDS
Add an MDS on node 10.57.1.111.
Add the following hosts record and synchronize it to all nodes:
echo '10.57.1.111 ceph-mds1' >> /etc/hosts
Add the following to the configuration file and synchronize it to all nodes:
[mds.1]
    host = ceph-mds1
The following operations are performed on the newly added MDS node:
Generate a key:
ceph-authtool --create-keyring --gen-key -n mds.1 /etc/ceph/keyring.mds.1
Register it for authentication:
ceph auth add mds.1 osd 'allow *' mon 'allow rwx' mds 'allow' -i /etc/ceph/keyring.mds.1
Start the new MDS:
/etc/init.d/ceph -a start mds.1
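A quick way to verify the new daemon (my addition):

ceph mds stat    # the new mds should show up as active or standby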

10.3. Add a mon
Add a mon on node 10.57.1.111.
Add the following hosts record and synchronize it to all nodes:
echo '10.57.1.111 ceph-mon1' >> /etc/hosts
Add the following to the configuration file and synchronize it to all nodes:
[mon.1]
    host = ceph-mon1
    mon addr = 10.57.1.111:6789

Export the key and the mon map:
mkdir /tmp/ceph
ceph auth get mon. -o /tmp/ceph/keyring.mon
ceph mon getmap -o /tmp/ceph/monmap

Initialize the new mon:
ceph-mon -i 1 --mkfs --monmap /tmp/ceph/monmap --keyring /tmp/ceph/keyring.mon

Start the new mon:
ceph-mon -i 1 --public-addr 10.57.1.111:6789

Add it to the quorum:
ceph mon add 1 10.57.1.111:6789
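Whether the new monitor joined can be checked afterwards (my addition):

ceph mon stat         # should now list both mon.0 and mon.1
ceph quorum_status    # shows which monitors are currently in quorum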


This article is from the "Past with the Wind" blog. Please do not repost without permission.
