Basic installation of Ceph


First, an introduction to the basic environment:

This article uses the ceph-deploy tool to install Ceph. ceph-deploy can be run from a dedicated admin node or from any one of the cluster nodes.


The system environment is as follows:

1. The OS is RedHat 6.5 x86_64 installed with the basic_server package set; there are 3 nodes in total, kept in sync with NTP.

2. SELinux is disabled; the EPEL and official Ceph repositories are used, Ceph version 0.86.

3. Passwordless SSH trust has been set up among the 3 nodes and host entries are configured; each node has 3 disks to be used as OSDs (a sketch of this preparation follows this list).

4. The kernel is upgraded to 3.18 and the system is booted from the new kernel.
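
As a rough sketch of the node preparation described above, the commands below cover the host entries, SELinux, SSH trust and NTP steps. The hostnames node1/node2/node3 come from this article, but the IP addresses are illustrative assumptions; adjust them to your network. Run the hosts/SELinux/NTP parts on every node and the SSH part on the node you will deploy from:

# Host entries for all nodes (example addresses; adjust as needed)
cat >> /etc/hosts << EOF
192.168.1.11 node1
192.168.1.12 node2
192.168.1.13 node3
EOF

# Disable SELinux now and on subsequent boots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Passwordless SSH trust from the deploy node to every node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3

# NTP time synchronization
yum -y install ntp
service ntpd start
chkconfig ntpd on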


Second, installation steps:

1. Set the iptables rules on each node, or turn off iptables (eth0 refers to the name of the NIC on which the Ceph network resides):

iptables -A INPUT -i eth0 -p tcp -s 0.0.0.0/0 --dport 6789 -j ACCEPT
iptables -A INPUT -i eth0 -m multiport -p tcp -s 0.0.0.0/0 --dports 6800:7300 -j ACCEPT
service iptables save

2. Format and mount the OSD disks

yum -y install xfsprogs
mkfs.xfs /dev/sdb
mkdir /osd{0..2}
# Use blkid to view sdb's UUID
echo 'UUID=89048e27-ff01-4365-a103-22e95fb2cc93 /osd0 xfs noatime,nobarrier,nodiratime 0 0' >> /etc/fstab

One disk corresponds to one OSD. Create the osd0, osd1 and osd2 directories on each node and mount the corresponding disk on each directory, as in the sketch below.
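
A minimal per-node sketch covering all three disks, assuming the data disks are /dev/sdb, /dev/sdc and /dev/sdd (substitute your own device names, and the UUIDs reported by blkid when adding the /etc/fstab entries shown above):

mkfs.xfs -f /dev/sdb
mkfs.xfs -f /dev/sdc
mkfs.xfs -f /dev/sdd
mkdir -p /osd0 /osd1 /osd2
mount -o noatime,nobarrier,nodiratime /dev/sdb /osd0
mount -o noatime,nobarrier,nodiratime /dev/sdc /osd1
mount -o noatime,nobarrier,nodiratime /dev/sdd /osd2
blkid /dev/sdb /dev/sdc /dev/sdd    # read the UUIDs and add one fstab line per disk so the mounts survive reboots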

3. Install the Ceph deployment tool

mkdir ceph    # It is best to create a working directory first, because some files are generated in the current directory while deploying Ceph
cd ceph
yum -y install ceph-deploy

4. Create the MONs

ceph-deploy new node1 node2 node3    # this command really just generates the ceph.conf and ceph.mon.keyring files

vim ceph.conf    # append the following (change as required)

debug_ms = 0
mon_clock_drift_allowed = 1
osd_pool_default_size = 2        # number of replicas
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 128    # number of PGs
osd_pool_default_pgp_num = 128
osd_crush_chooseleaf_type = 0
debug_auth = 0/0
debug_optracker = 0/0
debug_monc = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_tp = 0/0
debug_journaler = 0/0
debug_journal = 0/0
debug_lockdep = 0/0
debug_objclass = 0/0
debug_perfcounter = 0/0
debug_timer = 0/0
debug_filestore = 0/0
debug_context = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_asok = 0/0
debug_throttle = 0/0
debug_osd = 0/0
debug_rgw = 0/0
debug_mon = 0/0
osd_max_backfills = 4
filestore_split_multiple = 8
filestore_fd_cache_size = 1024
filestore_queue_committing_max_bytes = 1048576000
filestore_queue_max_ops = 500000
filestore_queue_max_bytes = 1048576000
filestore_queue_committing_max_ops = 500000
osd_max_pg_log_entries = 100000
osd_mon_heartbeat_interval = 30
# filestore performance tuning
osd_mount_options_xfs = rw,noatime,logbsize=256k,delaylog
#osd_journal_size = 20480    # journal size; if not specified, the default is 5 GB
osd_op_log_threshold = 50
osd_min_pg_log_entries = 30000
osd_recovery_op_priority = 1
osd_mkfs_options_xfs = -f -i size=2048
osd_mkfs_type = xfs
osd_journal = /var/lib/ceph/osd/$cluster-$id/journal
journal_queue_max_ops = 500000
journal_max_write_bytes = 1048576000
journal_max_write_entries = 100000
journal_queue_max_bytes = 1048576000
objecter_inflight_op_bytes = 1048576000
objecter_inflight_ops = 819200
ms_dispatch_throttle_bytes = 1048576000
osd_data = /var/lib/ceph/osd/$cluster-$id
merge_threshold = 40
backfills = 1
mon_osd_min_down_reporters = 13
mon_osd_down_out_interval = 600
rbd_cache_max_dirty_object = 0
rbd_cache_target_dirty = 235544320
rbd_cache_writethrough_until_flush = false
rbd_cache_size = 335544320
rbd_cache_max_dirty = 335544320
rbd_cache_max_dirty_age = 60
rbd_cache = false
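
For context, the ceph.conf written by ceph-deploy new normally contains only a minimal [global] section along the lines of the sketch below; the fsid and addresses shown are illustrative placeholders rather than values from this setup, and the tuning options listed above are appended under that same [global] section:

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993    # placeholder; ceph-deploy generates a unique cluster fsid
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.11,192.168.1.12,192.168.1.13    # placeholder addresses
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx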

5. Install Ceph

# Install on all nodes

yum -y install ceph

Then, on the admin node, execute:

ceph-deploy mon create node1 node2 node3
ceph-deploy gatherkeys node1    # obtain the keys from the monitor node, used for managing the other nodes
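
If gatherkeys succeeds, the keyrings it collects should land in the working directory alongside ceph.conf. The listing below is only an illustration of what a ceph-deploy run of this vintage typically leaves behind, not output captured from this cluster:

ls
# ceph.conf  ceph.log  ceph.mon.keyring
# ceph.client.admin.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring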


6. Create and activate the OSDs

ceph-deploy osd prepare node1:/osd0 node1:/osd1 node1:/osd2 node2:/osd0 node2:/osd1 node2:/osd2 node3:/osd0 node3:/osd1 node3:/osd2
ceph-deploy osd activate node1:/osd0 node1:/osd1 node1:/osd2 node2:/osd0 node2:/osd1 node2:/osd2 node3:/osd0 node3:/osd1 node3:/osd2
ceph-deploy admin node1 node2 node3    # copy the configuration file and admin key from the admin node to each node
chmod +r /etc/ceph/ceph.client.admin.keyring    # give the file read permission on all nodes
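
Before creating pools it is worth confirming that the monitors have quorum and the OSDs are up. These standard status commands are an added verification step, not part of the original walkthrough:

ceph -s                                     # overall cluster status; should eventually report HEALTH_OK
ceph osd tree                               # shows each OSD and whether it is up and in
ceph quorum_status --format json-pretty     # monitor quorum details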

You can also create the related pools:

ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create vms 128
ceph osd pool create backups 128
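
To confirm the pools were created (a small verification step added here; the original article showed this as screenshots), list them and check usage:

ceph osd lspools    # list all pools with their IDs
ceph df             # cluster-wide and per-pool utilization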



This article from "Life is endless, struggle not only!" "Blog, be sure to keep this provenance http://linuxnote.blog.51cto.com/9876511/1788333
