Ceph single/multi-node installation summary on CentOS 6.x


Overview

Docs: http://docs.ceph.com/docs

Ceph is a distributed file system that adds replication and fault tolerance while maintaining POSIX compatibility. Its most distinctive feature is the distributed metadata handling: file placement is computed with the CRUSH (Controlled Replication Under Scalable Hashing) pseudo-random algorithm rather than looked up in a central table. The core of Ceph is RADOS (Reliable Autonomic Distributed Object Store), an object cluster store that itself provides high availability, error detection, and repair of objects.

The Ceph ecosystem architecture can be divided into four parts:

Client: the data users. The client exports a POSIX file system interface for applications to call and talks to the MON/MDS/OSD daemons for metadata and data exchange. The earliest client was FUSE-based; it has since been merged into the kernel, so a ceph.ko kernel module must be built and loaded to use it.
MON: the cluster monitor, whose daemon is cmon (Ceph monitor). The monitors watch over and manage the entire cluster and export a network file system to clients, which can mount the Ceph file system with the command mount -t ceph monitor_ip:/ mount_point. According to the official recommendation, three monitors are enough to guarantee the reliability of the cluster.
MDS: the metadata server, whose daemon is cmds (Ceph metadata server). Ceph can run multiple MDS daemons as a distributed metadata server cluster, relying on Ceph's dynamic directory partitioning for load balancing.
OSD: the object storage cluster, whose daemon is cosd (Ceph object storage device). An OSD wraps a local file system, provides an object storage interface, and stores data and metadata as objects. The local file system can be ext2/3, but Ceph considers these a poor fit for the OSD's access pattern; it previously implemented its own EBOFS and has since moved toward btrfs (xfs is used in this guide).

Ceph scales to hundreds of nodes or more, and the four components above are best distributed across different nodes. For basic testing, however, you can place the MON and MDS on one node, or even deploy all four parts on the same node.


Environment
Hostname    IP            Role           FileSystem   Release
master01    192.168.9.10  mon,mds,osd    xfs          CentOS release 6.7 [2.6.32-573.8.1.el6.x86_64]
agent01     192.168.9.20  osd,[mon,mds]  xfs          CentOS release 6.7 [2.6.32-573.8.1.el6.x86_64]
ocean-lab   192.168.9.70  client         xfs          CentOS release 6.7 [4.3.0-1.el6.elrepo.x86_64]

Version
[root@master01 ~]# ceph -v
ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)


Repo
EPEL
yum install ceph ceph-common python-ceph
yum install ceph-fuse    # for the client
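The packages above are pulled from the EPEL repository, so EPEL has to be enabled on every node first. A minimal sketch, assuming the epel-release package is reachable from the CentOS extras repository (install it by URL from an EPEL mirror if it is not):

yum install -y epel-release    # enables the EPEL repo on CentOS 6
yum makecache                  # refresh metadata so the ceph packages resolve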

Host resolution (/etc/hosts)
192.168.9.10 master01.ocean.org master01
192.168.9.20 agent01.ocean.org agent01
192.168.9.70 ocean-lab.ocean.org ocean-lab
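These entries must be present on every node (master01, agent01, and the client). A minimal sketch, assuming /etc/hosts is edited directly on each machine:

cat >> /etc/hosts <<'EOF'
192.168.9.10 master01.ocean.org master01
192.168.9.20 agent01.ocean.org agent01
192.168.9.70 ocean-lab.ocean.org ocean-lab
EOF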


Ceph Configuration
[root@master01 ~]# cat /etc/ceph/ceph.conf
[global]
public network = 192.168.9.0/24
pid file = /var/run/ceph/$name.pid
auth cluster required = none
auth service required = none
auth client required = none
keyring = /etc/ceph/keyring.$name
osd pool default size = 1
osd pool default min size = 1
osd pool default crush rule = 0
osd crush chooseleaf type = 1

[mon]
mon data = /var/lib/ceph/mon/$name
mon clock drift allowed = .15
keyring = /etc/ceph/keyring.$name

[mon.0]
host = master01
mon addr = 192.168.9.10:6789

[mds]
keyring = /etc/ceph/keyring.$name

[mds.0]
host = master01

[osd]
osd data = /ceph/osd$id
osd recovery max active = 5
osd mkfs type = xfs
osd journal = /ceph/osd$id/journal
osd journal size = 1000
keyring = /etc/ceph/keyring.$name

[osd.0]
host = master01
devs = /dev/sdc1

[osd.1]
host = master01
devs = /dev/sdc2
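Before initializing the cluster, the OSD data paths referenced by osd data = /ceph/osd$id must exist. A minimal sketch for master01, assuming /dev/sdc1 and /dev/sdc2 are dedicated to osd.0 and osd.1 (mkcephfs can also create the XFS file systems itself because osd mkfs type and devs are set; the manual variant is shown for clarity):

# on master01: create the OSD data mount points
mkdir -p /ceph/osd0 /ceph/osd1
# optionally prepare and mount the file systems by hand
mkfs.xfs -f /dev/sdc1 && mount /dev/sdc1 /ceph/osd0
mkfs.xfs -f /dev/sdc2 && mount /dev/sdc2 /ceph/osd1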


Start Ceph (executed on the MON node)
Initialization
mkcephfs -a -c /etc/ceph/ceph.conf
/etc/init.d/ceph -a start

Perform a health check
ceph health    # you can also check the status with the ceph -s command
If it returns HEALTH_OK, the cluster came up successfully.


Mount Ceph
Mount
Upgrading the system kernel
Kernels older than 2.6.34 do not ship the ceph/rbd modules, so upgrade the system kernel to the latest available:
rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install kernel-ml -y

After installing the kernel, edit the /etc/grub.conf configuration file:
change default=1 to default=0 so the new kernel is booted by default.
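For reference, the relevant part of /etc/grub.conf might look roughly like this after the change (illustrative only; the kernel version string and root device depend on what kernel-ml actually installed on your machine):

# boot the first entry, i.e. the newly installed kernel-ml
default=0
timeout=5
title CentOS (4.3.0-1.el6.elrepo.x86_64)
        root (hd0,0)
        kernel /vmlinuz-4.3.0-1.el6.elrepo.x86_64 ro root=/dev/mapper/vg_oceani-lv_root
        initrd /initramfs-4.3.0-1.el6.elrepo.x86_64.img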

Verifying kernel support
# modprobe -l | grep ceph
kernel/fs/ceph/ceph.ko
kernel/net/ceph/libceph.ko
# modprobe ceph

Reboot the machine (init 6) so that the new kernel takes effect.

mount -t ceph 192.168.9.10:6789:/ /mnt/ceph
$ df -Th
Filesystem            Type   Size  Used Avail Use% Mounted on
/dev/mapper/vg_oceani-lv_root
                      ext4    30G  7.7G   21G  28% /
tmpfs                 tmpfs  111M     0  111M   0% /dev/shm
/dev/sda1             ext4   500M   94M  375M  21% /boot
192.168.9.10:/data2   nfs     30G   25G  4.0G  87% /mnt/log
192.168.9.10:6789:/   ceph   172G  5.4G  167G   4% /mnt/ceph
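If the mount should survive reboots, an /etc/fstab entry can be added. A minimal sketch, assuming the kernel client and the mount point used above (cephx options are omitted because authentication is disabled in this configuration):

# create the mount point once
mkdir -p /mnt/ceph
# /etc/fstab entry for the kernel client
192.168.9.10:6789:/   /mnt/ceph   ceph   noatime,_netdev   0 0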

ceph-fuse [not tested]
At least three MONs are recommended; if one of them goes down, the service can still be used normally.
ceph-fuse -m 192.168.9.10:6789,192.168.9.20:6789 /mnt/ceph



Add OSD
This adds the new drives on agent01.
$ cat /etc/ceph/ceph.conf
[global]
public network = 192.168.9.0/24
pid file = /var/run/ceph/$name.pid
auth cluster required = none
auth service required = none
auth client required = none
keyring = /etc/ceph/keyring.$name
osd pool default size = 1
osd pool default min size = 1
osd pool default crush rule = 0
osd crush chooseleaf type = 1

[mon]
mon data = /var/lib/ceph/mon/$name
mon clock drift allowed = .15
keyring = /etc/ceph/keyring.$name

[mon.0]
host = master01
mon addr = 192.168.9.10:6789

[mds]
keyring = /etc/ceph/keyring.$name

[mds.0]
host = master01

[osd]
osd data = /ceph/osd$id
osd recovery max active = 5
osd mkfs type = xfs
osd journal = /ceph/osd$id/journal
osd journal size = 1000
keyring = /etc/ceph/keyring.$name

[osd.2]
host = agent01
devs = /dev/sdc1

[osd.3]
host = agent01
devs = /dev/sdc2


master01 ~ $ cd /etc/ceph; scp keyring.client.admin agent01:/etc/ceph/
The following actions are done on the new OSD node.
Initialize the new OSDs; this must be run on the new node itself (agent01, 192.168.9.20):
ceph-osd -i 2 --mkfs --mkkey
ceph-osd -i 3 --mkfs --mkkey
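Before the ceph-osd --mkfs commands above can succeed, the osd data directories (/ceph/osd2 and /ceph/osd3) must exist on agent01 and carry a file system. A minimal sketch, assuming /dev/sdc1 and /dev/sdc2 are the disks listed in the configuration:

# on agent01
mkdir -p /ceph/osd2 /ceph/osd3
mkfs.xfs -f /dev/sdc1 && mount /dev/sdc1 /ceph/osd2
mkfs.xfs -f /dev/sdc2 && mount /dev/sdc2 /ceph/osd3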

Add the nodes to the cluster
ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.2
ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.osd.3
ceph osd create    # added key for osd.2
ceph osd create    # added key for osd.3
ceph osd rm osd_num    # remove an OSD

/etc/init.d/ceph -a start osd.2    # start osd.2
/etc/init.d/ceph -a start osd.3    # start osd.3
/etc/init.d/ceph -a start osd      # start all OSDs
ceph -s            # view the cluster status
ceph auth list     # list all authenticated entities



Add MDS
Add an MDS on node agent01.
Add the following configuration to the configuration file and synchronize it to the node:
[mds.1]
host = agent01
The following actions are done on the new MDS node.
Generate the key
ceph-authtool --create-keyring --gen-key -n mds.1 /etc/ceph/keyring.mds.1
Register the key with the cluster
ceph auth add mds.1 osd 'allow *' mon 'allow rwx' mds 'allow' -i /etc/ceph/keyring.mds.1
Start the new MDS
/etc/init.d/ceph -a start mds.1
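To confirm the new daemon has joined, the MDS map can be checked; a quick sketch (not output from the original run):

ceph mds stat    # shows the active MDS and any standby daemons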



Add MON
Add a MON on node agent01.

Add the following configuration to the configuration file and synchronize it to the node:
[mon.1]
host = agent01
mon addr = 192.168.9.20:6789

Export the key and MON map
mkdir /tmp/ceph
ceph auth get mon. -o /tmp/ceph/keyring.mon
ceph mon getmap -o /tmp/ceph/monmap

Initialize the new MON
ceph-mon -i 1 --mkfs --monmap /tmp/ceph/monmap --keyring /tmp/ceph/keyring.mon

Start the new MON
ceph-mon -i 1 --public-addr 192.168.9.20:6789

Add it to the quorum
ceph mon add 1 192.168.9.20:6789
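To verify that the new monitor has joined the quorum, a quick sketch (not output from the original run):

ceph mon stat        # one-line summary of the monitors and the current quorum
ceph quorum_status   # detailed quorum information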

This article is from the "Jeffrey blog"; please keep this source: http://oceanszf.blog.51cto.com/6268931/1716896

