Ceph Distributed Storage Setup Experience


Official Document: http://docs.ceph.com/docs/master/start/quick-start-preflight/

Chinese version: http://docs.openfans.org/ceph/

Principle: The ceph-deploy tool runs on the management node (admin-node) and controls each distributed node over SSH, so that the nodes together provide shared storage.

(Figures: the Ceph architecture diagram from the official documentation, http://docs.ceph.com/docs/master/_images/ditaa-5d5cab6fc315585e5057a743b5af7946fba43b24.png, and the "Intro to Ceph" diagram from the Chinese documentation at docs.openfans.org.)

Prerequisites: ceph-deploy must be installed on admin-node in advance; each distributed node must have its clock synchronized with admin-node, have passwordless SSH trust with admin-node and sudo permission; firewalls must be disabled.
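
As a rough sketch of that node preparation, assuming a dedicated deployment user named "cephuser" (the user name and the ufw firewall command are assumptions for illustration, not taken from the original article):

## on every node: create a deployment user with passwordless sudo
sudo useradd -d /home/cephuser -m cephuser
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sudo chmod 0440 /etc/sudoers.d/cephuser
## on each node: synchronize the clock against admin-node
sudo ntpdate {admin-node}
sudo ufw disable                 ## or the equivalent command for your firewall
## on admin-node: distribute the SSH key (repeat for every node)
ssh-keygen
ssh-copy-id cephuser@ceph-osd1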

Main steps:

Create a working directory first (mkdir myceph && cd myceph); all of the following operations are performed on admin-node. The nodes in this setup are named ceph-mds, ceph-osd1, ceph-osd2 and ceph-client. If the directory has previously been used for a Ceph deployment, leftover files may affect the new Ceph environment.

These files are generally present under this directory:

ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.conf  ceph.log  ceph.mon.keyring  release.asc

1. Start over:

ceph-deploy purgedata {ceph-node} [{ceph-node}]   ## wipe the data
ceph-deploy forgetkeys                            ## delete previously generated keys
ceph-deploy purge {ceph-node} [{ceph-node}]       ## uninstall the Ceph packages
If you execute purge, you must re-install Ceph.
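
Applied to the node names used in this article, a full reset might look like this (a sketch, not shown verbatim in the original):

ceph-deploy purgedata ceph-mds ceph-osd1 ceph-osd2 ceph-client
ceph-deploy forgetkeys
ceph-deploy purge ceph-mds ceph-osd1 ceph-osd2 ceph-client   ## only if you also want the packages removed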

2. Start up:

1. Create the cluster:
   ceph-deploy new {initial-monitor-node(s)}
2. Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs. Add the following line under the [global] section:
   osd pool default size = 2
3. If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file (see the Network Configuration Reference for details):
   public network = {ip-address}/{netmask}
4. Install Ceph:
   ceph-deploy install {ceph-node} [{ceph-node} ...]
5. Add the initial monitor(s) and gather the keys (related mon subcommands: stat / remove):
   ceph-deploy mon create-initial
   Once you complete the process, your local directory should have the following keyrings:
   {cluster-name}.client.admin.keyring
   {cluster-name}.bootstrap-osd.keyring
   {cluster-name}.bootstrap-mds.keyring
   {cluster-name}.bootstrap-rgw.keyring
A concrete example for this setup follows.
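
Applied to the hosts in this article (assuming ceph-osd1 and ceph-osd2 are the initial monitors, which matches the ceph.conf shown later), the sequence might look roughly like this:

ceph-deploy new ceph-osd1 ceph-osd2
echo "osd pool default size = 2" >> ceph.conf        ## two replicas instead of three
echo "public network = 192.168.2.0/24" >> ceph.conf  ## matches the node addresses used later
ceph-deploy install ceph-mds ceph-osd1 ceph-osd2 ceph-client
ceph-deploy mon create-initial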

3. Add/remove OSDs:

1. List disks. To list the disks on a node, execute the following command:
   ceph-deploy disk list {node-name [node-name] ...}
2. Zap disks. To zap a disk (delete its partition table) in preparation for use with Ceph, execute the following:
   ceph-deploy disk zap {osd-server-name}:{disk-name}
   ceph-deploy disk zap osdserver1:sda
3. Prepare the OSDs:
   ceph-deploy osd prepare ceph-osd1:/dev/sda ceph-osd1:/dev/sdb
4. Activate the OSDs:
   ceph-deploy osd activate ceph-osd1:/dev/sda1 ceph-osd1:/dev/sdb1
5. Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph nodes, so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command:
   ceph-deploy admin {admin-node} {ceph-node}
6. Ensure that you have the correct permissions for ceph.client.admin.keyring:
   sudo chmod +r /etc/ceph/ceph.client.admin.keyring
7. Check your cluster's health:
   ceph health / ceph status
The same steps for the second OSD host are sketched below.
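
The commands above only show ceph-osd1; for the second OSD host the equivalent steps would presumably be (a sketch assuming ceph-osd2 uses the same /dev/sda and /dev/sdb layout):

ceph-deploy disk zap ceph-osd2:sda ceph-osd2:sdb
ceph-deploy osd prepare ceph-osd2:/dev/sda ceph-osd2:/dev/sdb
ceph-deploy osd activate ceph-osd2:/dev/sda1 ceph-osd2:/dev/sdb1
ceph-deploy admin ceph-mds ceph-osd1 ceph-osd2 ceph-client
sudo chmod +r /etc/ceph/ceph.client.admin.keyring    ## on each node that will run ceph commands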

You should be able to see output like the following:

[email protected]:/home/megaium/myceph# ceph status
    cluster 3734cac3-4553-4c39-89ce-e64accd5a043
     health HEALTH_WARN
            clock skew detected on mon.ceph-osd2
            8 pgs degraded
            8 pgs stuck degraded
            - pgs stuck unclean
            8 pgs stuck undersized
            8 pgs undersized
            recovery 1004/1506 objects degraded (66.667%)
            recovery 1/1506 objects misplaced (0.066%)
            too few PGs per OSD (6 < min 30)
            Monitor clock skew detected
     monmap e1: 2 mons at {ceph-osd1=192.168.2.242:6789/0,ceph-osd2=192.168.2.243:6789/0}
            election epoch 8, quorum 0,1 ceph-osd1,ceph-osd2
     osdmap e135: - osds: - up, - in; - remapped pgs
            flags sortbitwise
      pgmap v771: - pgs, 2 pools, 1742 MB data, 502 objects
            4405 MB used, 89256 GB / 89260 GB avail
            1004/1506 objects degraded (66.667%)
            1/1506 objects misplaced (0.066%)
                 - active+remapped
                 8 active+undersized+degraded

[email protected]:/home/megaium/myceph# ceph health
HEALTH_WARN clock skew detected on mon.ceph-osd2; 8 pgs degraded; 8 pgs stuck degraded; - pgs stuck unclean; 8 pgs stuck undersized; 8 pgs undersized; recovery 1004/1506 objects degraded (66.667%); recovery 1/1506 objects misplaced (0.066%); too few PGs per OSD (6 < min 30); Monitor clock skew detected
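
The two warnings worth acting on here are the clock skew and the low PG count. The article does not show how they were fixed; the usual remedies would be something like the following (the NTP server and the pool name "rbd" are assumptions for illustration):

sudo ntpdate pool.ntp.org        ## run on ceph-osd1 and ceph-osd2 to remove the clock skew
ceph osd pool set rbd pg_num 64  ## raise the placement-group count of a pool
ceph osd pool set rbd pgp_num 64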

4. Verification commands:

ceph osd tree                      view the OSD tree/status
ceph osd dump                      view OSD configuration information
ceph osd rm <id> [<id> ...]        remove OSD(s) from the cluster
ceph osd crush rm osd.0            remove an OSD disk from the cluster's CRUSH map
ceph osd crush rm node1            remove an OSD host node from the cluster's CRUSH map
ceph -w / ceph -s                  watch / show cluster status
ceph mds stat                      view MDS status
ceph mds dump                      view the MDS map
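
The removal commands above are normally run as a sequence. A sketch of retiring a single OSD (osd.0 on host node1, both names purely illustrative) might be:

ceph osd out 0              ## let data drain off the OSD first
## then stop the OSD daemon on its host (how depends on your init system)
ceph osd crush rm osd.0     ## remove it from the CRUSH map
ceph auth del osd.0         ## remove its authentication key
ceph osd rm 0               ## finally remove the OSD id from the cluster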

5. Client side: mount the storage

ceph-deploy install ceph-client     ## install the Ceph client

ceph-deploy admin ceph-client       ## copy the keyring and configuration file to the client

RBD mode:

To use a Ceph block device on the client:

Create a new Ceph pool:
[[email protected] ceph]# rados mkpool test

Create a new image in the pool:
[[email protected] ceph]# rbd create test-1 --size 4096 -p test -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring
("-m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring" may be omitted)

Map the image to a block device:
[[email protected] ceph]# rbd map test-1 -p test --name client.admin -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring
("-m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring" may be omitted)

View the RBD mappings:
[[email protected] ~]# rbd showmapped
id pool    image       snap device
0  rbd     foo         -    /dev/rbd0
1  test    test-1      -    /dev/rbd1
2  jiayuan jiayuan-img -    /dev/rbd2
3  jiayuan zhanguo     -    /dev/rbd3
4  jiayuan zhanguo-5G  -    /dev/rbd4

Format the newly created block device:
[[email protected] dev]# mkfs.ext4 -m0 /dev/rbd1

Create a mount directory:
[[email protected] dev]# mkdir /mnt/ceph-rbd-test-1

Mount the block device on the new directory:
[[email protected] dev]# mount /dev/rbd1 /mnt/ceph-rbd-test-1/

Check the mount:
[[email protected] dev]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2              19G  2.5G   15G  15% /
tmpfs                 116M   72K  116M   1% /dev/shm
/dev/sda1             283M   52M  213M  20% /boot
/dev/rbd1             3.9G  8.0M  3.8G   1% /mnt/ceph-rbd-test-1

After completing the steps above, you can save data to the newly created Ceph block device.
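
Not covered in the original: to undo the mapping when you are done, the usual sequence would be something like this:

umount /mnt/ceph-rbd-test-1
rbd unmap /dev/rbd1
rbd rm test-1 -p test       ## only if you also want to delete the image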

If you see an error like:

[email protected]:/home/megaium# rbd create mypool/myimage --size 102400
2016-01-28 09:56:40.605656 7f56cb67f7c0  0 librados: client.admin authentication error (1) Operation not permitted
rbd: couldn't connect to the cluster!

and the logs show:

2016-01-27 22:44:17.755764 7ffb8a1fa8c0  0 mon.ceph-client does not exist in monmap, will attempt to join an existing cluster
2016-01-27 22:44:17.756031 7ffb8a1fa8c0 -1 no public_addr or public_network specified, and mon.ceph-client not present in monmap or ceph.conf

The ceph.conf configuration needs to be changed on the MDS side:

[global]
fsid = 3734cac3-4553-4c39-89ce-e64accd5a043
mon_initial_members = ceph-osd1, ceph-osd2
mon_host = 192.168.2.242,192.168.2.243
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2
public network = 192.168.2.0/24

Then push the configuration and key out again: ceph-deploy --overwrite-conf admin ceph-osd1 ceph-osd2 ceph-client.

CephFS file system mode:

Mount the CephFS file system on the client:
[[email protected] ~]# mkdir /mnt/mycephfs
[[email protected] ~]# mount -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -v -o name=admin,secret=aqdt9pntsfd6nraaozkagx21ugq+dm/k0rzxow==
10.240.240.211:6789:/ on /mnt/mycephfs type ceph (rw,name=admin,secret=aqdt9pntsfd6nraaozkagx21ugq+dm/k0rzxow==)

The name and secret parameter values in the command above come from the monitor's /etc/ceph/ keyring file:
[[email protected] ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = aqdt9pntsfd6nraaozkagx21ugq+dm/k0rzxow==
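
A variant worth noting (not in the original article): mount.ceph also accepts a secretfile option, which keeps the key off the command line and out of the shell history. Assuming the key is saved to /etc/ceph/admin.secret (a path chosen here only for illustration):

sudo sh -c "awk '/key = / {print \$3}' /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret"
sudo chmod 600 /etc/ceph/admin.secret
sudo mount -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret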

If you have any questions, please feel free to contact me.

This article is from the "creator think" blog; please keep this source link: http://strongit.blog.51cto.com/10020534/1739488
