Easily build a Ceph cluster on SUSE Linux Enterprise Server 11 SP3


You can easily build a Ceph cluster on SUSE Linux Enterprise Server 11 SP3.

Environment:

One mon node, one mds node, and three osd nodes:

192.168.239.131 ceph-mon
192.168.239.132 ceph-mds

192.168.239.160 ceph-osd0
192.168.239.161 ceph-osd1
192.168.239.162 ceph-osd2

1. Register an account at suse.com and download the ISOs of SLES 11 SP3 and SUSE Cloud 4.

2. Install the system on each node, then add two installation sources: one for the SLES 11 SP3 media and one for the SUSE Cloud 4 media, as sketched below.
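A minimal sketch of how the two sources might be added, assuming the downloaded ISOs were copied to /root on each node (the ISO file names and mount points below are examples, not from the original article):

# Run on every node; adjust the ISO names to the files you actually downloaded.
mkdir -p /mnt/sles11sp3 /mnt/susecloud4
mount -o loop /root/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso /mnt/sles11sp3
mount -o loop /root/SUSE-CLOUD-4-x86_64-GM-DVD1.iso /mnt/susecloud4
zypper ar /mnt/sles11sp3 SLES11-SP3
zypper ar /mnt/susecloud4 SUSE-Cloud-4
zypper refresh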

3. Configure passwordless SSH login from ceph-mon to the root user of every other node.
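One way to set this up, assuming the /etc/hosts entries listed above are already present on ceph-mon (a sketch; run on ceph-mon as root):

# Generate a key pair if one does not exist, then push it to every node.
test -f ~/.ssh/id_rsa || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for node in ceph-mds ceph-osd0 ceph-osd1 ceph-osd2; do
    ssh-copy-id root@$node
done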

4. Copy /etc/hosts from the ceph-mon node to every other node.
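For example (a sketch; run on ceph-mon once the passwordless SSH from step 3 works):

for node in ceph-mds ceph-osd0 ceph-osd1 ceph-osd2; do
    scp /etc/hosts root@$node:/etc/hosts
done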

5. Install Ceph:

zypper -n install ceph ceph-radosgw
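The article does not say on which nodes to run this; a reasonable assumption is that all five nodes need the packages, which can be scripted from ceph-mon (a sketch):

# Install locally on ceph-mon, then on the remaining nodes over SSH.
zypper -n install ceph ceph-radosgw
for node in ceph-mds ceph-osd0 ceph-osd1 ceph-osd2; do
    ssh root@$node "zypper -n install ceph ceph-radosgw"
done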

6. On the ceph-mon node, run setup.sh, which calls init-mon.sh, init-osd.sh, and init-mds.sh to configure the mon, osd, and mds nodes automatically.

setup.sh and init-mon.sh change into the ./ceph folder under the current working directory, so be sure to run them from a directory other than /etc.

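For example, one possible way to lay out and launch the scripts (the directory name /root/ceph-setup is an assumption; any location outside /etc works):

# On ceph-mon: all four scripts live in the same working directory.
mkdir -p /root/ceph-setup
cd /root/ceph-setup            # contains setup.sh, init-mon.sh, init-osd.sh, init-mds.sh
chmod +x setup.sh init-mon.sh init-osd.sh init-mds.sh
./setup.sh                     # init-mon.sh creates ./ceph with ceph.conf and the keyrings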

The code for each script is as follows (for reference only):

(1) setup.sh

#!/bin/bash

### Stop all existing OSD daemons
printf "Killing all ceph-osd daemons..."
for i in 0 1 2; do
    ssh ceph-osd$i "killall -TERM ceph-osd"
    sleep 1
done
printf "Done\n"

### Initialize mon on this system
killall -TERM ceph-mon
printf "Initializing ceph-mon on current node..."
./init-mon.sh
cd ./ceph
printf "Done\n"

### Initialize osd services on the osd nodes
for i in 0 1 2; do
    ../init-osd.sh ceph-osd$i $i
    sleep 1
done

### Initialize mds on the remote node
printf "Initializing mds on ceph-mds..."
../init-mds.sh ceph-mds
printf "Done\n"

(2) init-mon.sh

#!/bin/bash

fsid=$(uuidgen)
mon_node=$(hostname)
mon_ip=192.168.239.131
cluster_net=192.168.239.0/24
public_net=192.168.1.0/24
mon_data=/data/$mon_node

killall -TERM ceph-mon

rm -f /etc/ceph/ceph.conf /etc/ceph/*.keyring
rm -f /var/lib/ceph/bootstrap-mds/* /var/lib/ceph/bootstrap-osd/*
rm -f /var/log/ceph/*.log

confdir=./ceph
rm -fr $confdir
mkdir -p $confdir
cd $confdir

rm -fr $mon_data
mkdir -p $mon_data

cat > ceph.conf << EOF
[global]
fsid = $fsid
mon initial members = $mon_node
mon host = $mon_ip
public network = $public_net
cluster network = $cluster_net
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
EOF

ceph-authtool --create-keyring bootstrap-osd.keyring --gen-key -n client.bootstrap-osd
ceph-authtool --create-keyring bootstrap-mds.keyring --gen-key -n client.bootstrap-mds

ceph-authtool --create-keyring ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'
ceph-authtool ceph.mon.keyring --import-keyring ceph.client.admin.keyring

monmaptool --create --add $mon_node $mon_ip --fsid $(grep fsid ceph.conf | awk '{print $NF}') monmap

cp -a ceph.conf /etc/ceph
cp -a ceph.client.admin.keyring /etc/ceph

### Make the filesystem for ceph-mon
ceph-mon --mkfs -i $mon_node --monmap monmap --keyring ceph.mon.keyring --mon-data $mon_data

### Start the ceph-mon service
ceph-mon -i $mon_node --mon-data $mon_data

### Create the bootstrap keyrings
ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i bootstrap-mds.keyring
ceph auth add client.bootstrap-osd mon 'allow profile bootstrap-osd' -i bootstrap-osd.keyring

(3) init-osd.sh

#!/bin/bash

if [ $# -lt 2 ]; then
    printf "Usage: %s {host} {osd num}\n" $0
    exit 1
fi

host=$1
osd_num=$2

ssh $host "killall -TERM ceph-osd"
ssh $host "rm -f /var/lib/ceph/bootstrap-osd/*keyring"
ssh $host "rm -fr /data/osd.$osd_num/*"

ssh $host "mkdir -p /var/lib/ceph/bootstrap-osd"
ssh $host "mkdir -p /data/osd.$osd_num"

scp ceph.conf ceph.client.admin.keyring $host:/etc/ceph
scp bootstrap-osd.keyring $host:/var/lib/ceph/bootstrap-osd/ceph.keyring

ssh $host "ceph osd create"
ssh $host "ceph-osd -i $osd_num --osd-data /data/osd.$osd_num --osd-journal /data/osd.$osd_num/journal --mkfs --mkkey"
ssh $host "ceph auth add osd.$osd_num osd 'allow *' mon 'allow profile osd' -i /data/osd.$osd_num/keyring"
ssh $host "ceph osd crush add-bucket $host host"
ssh $host "ceph osd crush move $host root=default"
ssh $host "ceph osd crush add osd.$osd_num 1.0 host=$host"
ssh $host "ceph-osd -i $osd_num --osd-data /data/osd.$osd_num --osd-journal /data/osd.$osd_num/journal"

(4) init-mds.sh

#!/bin/bash

if [ $# -lt 1 ]; then
    printf "Usage: %s {host}\n" $0
    exit 1
fi

mds_host=$1
mds_name=mds.$mds_host
mds_data=/data/$mds_name
keyfile=ceph.$mds_host.keyring
mon_host=ceph-mon:6789

### Stop any running mds daemon first
ssh $mds_host "killall -TERM ceph-mds"
ssh $mds_host "rm -f $mds_data/*"
ssh $mds_host "mkdir -p $mds_data"

### Remove the old keyring file first
rm -f $keyfile

### Create a new keyring file
ceph-authtool -C -g -n $mds_name $keyfile
ceph auth add $mds_name mon 'allow profile mds' osd 'allow rwx' mds 'allow' -i $keyfile

scp \
    /etc/ceph/ceph.conf \
    /etc/ceph/ceph.client.admin.keyring $mds_host:/etc/ceph
scp $keyfile $mds_host:$mds_data/keyring

ssh $mds_host "ceph-mds -i $mds_host -n $mds_name -m $mon_host --mds-data=/data/mds.$mds_host"

After the scripts finish, the daemons are started automatically. On the ceph-mon node, check the cluster status:

ceph-mon:~ # ceph -s
    cluster 266900a9-b1bb-4b1f-9bd0-c509578aa9c9
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-mon=192.168.239.131:6789/0}, election epoch 2, quorum 0 ceph-mon
     mdsmap e4: 1/1/1 up {0=ceph-mds=up:active}
     osdmap e17: 3 osds: 3 up, 3 in
      pgmap v23: 192 pgs, 3 pools, 1884 bytes data, 20 objects
            3180 MB used, 45899 MB / 49080 MB avail
                 192 active+clean


OSD status:

ceph-mon:~ # ceph osd tree
# id    weight  type name      up/down reweight
-1      3       root default
-2      1               host ceph-osd0
0       1                       osd.0   up      1
-3      1               host ceph-osd1
1       1                       osd.1   up      1
-4      1               host ceph-osd2
2       1                       osd.2   up      1
 

View the process on the ceph-mon node:

ceph-mon:~ # ps ax | grep ceph-mon
 8993 pts/0    Sl     0:00 ceph-mon -i ceph-mon --mon-data /data/ceph-mon

View the process on the ceph-osdX node:

ceph-osd0:~ # ps ax | grep ceph-osd
13140 ?        Ssl    0:02 ceph-osd -i 0 --osd-data /data/osd.0 --osd-journal /data/osd.0/journal

View the process on the ceph-mds node:

ceph-mds:~ # ps ax | grep ceph-mds
42260 ?        Ssl    0:00 ceph-mds -i ceph-mds -n mds.ceph-mds -m ceph-mon:6789 --mds-data=/data/mds.ceph-mds

7. The stock SLES 11 kernel does not ship the ceph kernel module, so install a newer kernel on the client to get mount.ceph support. The mount.ceph command is used as follows:

mount.ceph {mon ip/host}:/ {mount point} -o name=admin,secret={your key}

mount.ceph ceph-mon:/ /mnt/cephfs -v -o name=admin,secret=AQD5jp5UqPRtCRAAvpRyhlNI0+qEHjZYqEZw8A==
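The secret is the client.admin key generated by init-mon.sh. One way to read it on ceph-mon (a sketch; the /etc/ceph/admin.secret path is an example):

# Print the admin key to paste into the mount command.
ceph auth get-key client.admin
# Or keep it out of the shell history by using a secret file instead:
ceph auth get-key client.admin > /etc/ceph/admin.secret
mount.ceph ceph-mon:/ /mnt/cephfs -v -o name=admin,secretfile=/etc/ceph/admin.secret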

View the mount status:

ceph-mon:/etc/ceph # df -Ph
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-root   12G  5.3G  5.7G  49% /
udev                      12G  5.3G  5.7G  49% /dev
tmpfs                     12G  5.3G  5.7G  49% /dev/shm
/dev/sda1                185M   36M  141M  21% /boot
/dev/sdb1                 16G   35M   16G   1% /data
192.168.239.131:/         48G  3.2G   45G   7% /mnt/cephfs
