Ceph Cluster Expansion


IP Hostname Description
192.168.40.106 dataprovider Deployment Management Node
192.168.40.107 mdsnode MON Node
192.168.40.108 osdnode1 OSD Node
192.168.40.148 osdnode2 OSD Node
The previous article described how to create a cluster with the structure above. This article describes how to expand that cluster: add an OSD process and the Ceph Metadata (MDS) service to mdsnode, add the Ceph Monitor service to osdnode1 and osdnode2, and add a new OSD node, osdnode3.
The final result is as follows:
IP Hostname Description
192.168.40.106 dataprovider Deployment Management Node
192.168.40.107 mdsnode MDS, MON, OSD Node
192.168.40.108 osdnode1 OSD, MON Node
192.168.40.148 osdnode2 OSD, MON Node
192.168.40.125 osdnode3 OSD Node
Extend the OSD function of the MON node

Switch to the leadorceph user on dataprovider and enter the /home/leadorceph/my-cluster directory.
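A minimal sketch of this step (assuming you can switch to the leadorceph user from your current login on dataprovider):

    su - leadorceph
    cd /home/leadorceph/my-cluster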
Add the OSD service to the mdsnode node
    ssh mdsnode
    sudo mkdir /var/local/osd2
    exit
Use the ceph-deploy command on dataprovider to prepare the OSD:
    ceph-deploy --overwrite-conf osd prepare mdsnode:/var/local/osd2
Activate the created OSD:
    ceph-deploy osd activate mdsnode:/var/local/osd2

After executing the preceding commands, Ceph rebalances the cluster and migrates placement groups (PGs) to the new OSD. You can use ceph -w to see that the cluster status changes.
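To follow the rebalancing, the standard status commands can be used (these are generic Ceph commands, not specific to this article):

    ceph -w          # watch cluster events while PGs migrate
    ceph -s          # one-shot health and PG summary
    ceph osd tree    # confirm the new OSD is up and in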

Add a new OSD Node

After the OSD function has been added to the existing MON node, add a new OSD node, osdnode3.

Modify the /etc/hosts file on the dataprovider node

Add 192.168.40.125 osdnode3 to its /etc/hosts file.
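A minimal sketch of this step, using the IP and hostname from the table above:

    echo "192.168.40.125 osdnode3" | sudo tee -a /etc/hosts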

Run the following commands on osdnode3. For details, refer to "Building a Ceph storage cluster under Centos6.5"
    yum install -y ntp ntpdate ntp-doc
    yum install -y openssh-server
    sudo useradd -d /home/leadorceph -m leadorceph
    sudo passwd leadorceph
    echo "leadorceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/leadorceph
    sudo chmod 0440 /etc/sudoers.d/leadorceph
As the leadorceph user, run sudo visudo and change Defaults requiretty to Defaults:ceph !requiretty.
On the dataprovider node, copy the password-free SSH key to osdnode3:
ssh-copy-id leadorceph@osdnode3
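To confirm that passwordless login works before continuing, a quick check from dataprovider might look like this:

    ssh leadorceph@osdnode3 hostname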
Install Ceph on osdnode3 from dataprovider:
    ceph-deploy install osdnode3
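Before preparing the OSD, the backing directory must exist on osdnode3; a minimal sketch, assuming the same directory convention as /var/local/osd2 on mdsnode:

    ssh osdnode3
    sudo mkdir /var/local/osd3    # same layout as osd2 (assumption)
    exit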
Create and activate the OSD process on osdnode3 from dataprovider:
    ceph-deploy osd prepare osdnode3:/var/local/osd3
    ceph-deploy osd activate osdnode3:/var/local/osd3
Check the cluster status

You can see that the number of PGs is too small at this point (each OSD holds only about 16 PGs on average), so more PGs need to be added:
    ceph osd pool set rbd pg_num 100
    ceph osd pool set rbd pgp_num 100
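As a rough rule of thumb from the general Ceph guidance (not stated in this article), the total PG count for a pool is about (number of OSDs × 100) / replica count, rounded to a power of two; with 4 OSDs and osd_pool_default_size = 2 that suggests roughly 200, so 100 is a conservative value here. The new settings can be verified with:

    ceph osd pool get rbd pg_num
    ceph osd pool get rbd pgp_num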
Add a Metadata Server

To use CephFS, you need at least one metadata server. Create the metadata service as follows:

    ceph-deploy mds create mdsnode
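To confirm that the metadata server is running, a standard check (not part of the original steps) is:

    ceph mds stat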
Add Ceph Mon

Extending the MON nodes is more delicate, and mistakes can affect the entire cluster, so it is recommended not to change the MON setup lightly.

The number of Ceph monitors should be 2n + 1 (n >= 0), that is, an odd number, with at least three recommended. As long as at least n + 1 monitors are healthy, the Paxos algorithm used by Ceph keeps the cluster operating normally; with three monitors, therefore, only one node can be down at a time.

Modify the ceph.conf file on dataprovider and append the information of the MON nodes to be added. The modified file is as follows:
    [global]
    auth_service_required = cephx
    osd_pool_default_size = 2
    filestore_xattr_use_omap = true
    auth_client_required = cephx
    auth_cluster_required = cephx
    public_network = 192.168.40.0/24
    mon_initial_members = mdsnode,osdnode1,osdnode2
    fsid = 1fffdcba-538f-4f0d-b077-b916203ff698

    [mon]
    mon_addr = 192.168.40.107:6789,192.168.40.108:6789,192.168.40.148:6789
    mon_host = mdsnode,osdnode1,osdnode2
    debug_mon = 20

    [mon.a]
    host = mdsnode
    addr = 192.168.40.107:6789

    [mon.b]
    host = osdnode1
    addr = 192.168.40.108:6789

    [mon.c]
    host = osdnode2
    addr = 192.168.40.148:6789
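The updated configuration normally has to be pushed to the nodes before the new monitors are created; a sketch using ceph-deploy (this step is assumed here, the original text does not spell it out):

    ceph-deploy --overwrite-conf config push mdsnode osdnode1 osdnode2    # assumed step: distribute the edited ceph.conf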


Then run:

    ceph-deploy mon create osdnode1 osdnode2

After the new monitors are added, you can check the quorum status as follows:

    ceph quorum_status --format json-pretty
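A shorter view of the monitor map is also available through a standard command (not from the original article):

    ceph mon stat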
Store and Retrieve Objects

The Ceph client uses the latest cluster map and the CRUSH algorithm to determine how to map an object to a PG, and then how to map that PG to an OSD.

Add an object:
    echo {Test-data} > testfile.txt
    rados put test-object-1 testfile.txt --pool=data
Check whether the object was added successfully:
    rados -p data ls
Locate the object:
    ceph osd map data test-object-1

The output may look like the following; it shows that test-object-1 maps to PG 0.4, which is stored on OSDs 1 and 0:

osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]
Delete the object:
    rados rm test-object-1 --pool=data

The location information may change dynamically due to changes in cluster nodes.
