Install Ceph with ceph-deploy and deploy a cluster


This article records the problems I ran into during the whole Ceph installation, along with solutions that worked for me personally; they do not necessarily represent everyone's view. I was working directly as root on the servers, so I did not deal with any user-permission issues. The machines run CentOS 7.3, the Ceph version installed is Jewel, and only 3 nodes are used for now.
Node IP     Name    Role
10.0.1.92   e1092   mon
10.0.1.93   e1093   mon, osd
10.0.1.94   e1094   mon, osd
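Make sure each machine's hostname matches the name in this table; a hostname that disagrees with /etc/hosts causes monitor problems later (see problem (1) at the end of this article). A minimal sketch on CentOS 7, run on each node with its own name:

# hostnamectl set-hostname e1092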
Step 1: Preparation (the following is performed on all nodes)
First, configure the yum sources.
Note that the Ceph installation pulls in third-party dependencies, some of which (such as leveldb) are not available in the official CentOS base repository, so you have a fair chance of hitting dependency errors during installation, with prompts to install some third-party component first (for example, libleveldb). Although the ceph-deploy tool introduced later assists with deployment, ceph-deploy simply installs the components through yum, so once a dependency is missing from the official CentOS base repository, the installation cannot continue automatically. To solve this, this example brings in CentOS's third-party extension repository, EPEL. (This one kept me stuck for quite a while before I figured it out.)
Add the third-party extension repository first:

# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

Then configure the Ceph repository:

# vim /etc/yum.repos.d/ceph.repo   # add the Ceph repo file with the following content
[Ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority=1

[Ceph-noarch]
name=ceph noarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority=1

[Ceph-source]
name=ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/srpms
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

Update the yum cache and packages:

# yum makecache
# yum update

Install Ceph (I installed it on each node by hand rather than with ceph-deploy's one-click install; personally I find manual installation less error-prone):

# yum install -y ceph

View Ceph version:

# ceph -v
ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0)

Disable SELinux:

# sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# setenforce 0
setenforce: SELinux is disabled

Stop and disable the firewalld firewall:

# systemctl stop firewalld
# systemctl disable firewalld
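If you prefer to keep firewalld running instead of disabling it, an alternative sketch is to open the ports Ceph uses (6789/tcp for the monitors and 6800-7300/tcp for the OSDs; verify the range against your version's documentation):

# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload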

Install NTP
To keep time consistent across the servers, install an NTP server:

# yum install -y ntp ntpdate ntp-doc

Visit http://www.pool.ntp.org/zone/cn to get commonly used time-synchronization servers for China, such as:
server 0.cn.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org

Add these three servers to /etc/ntp.conf and comment out the original default servers with #:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
Then run the following commands to synchronize the time manually and start the NTP service:

# ntpdate 0.cn.pool.ntp.org
# hwclock -w
# systemctl enable ntpd.service
# systemctl start ntpd.service
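To verify that NTP is actually synchronizing, query the peer list; the servers you added should appear, with an asterisk marking the currently selected one:

# ntpq -p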

Install SSH service:

# yum install openssh-server

Step 2: The preparation is done; now deploy the Ceph cluster.

Note: the following operations are performed on the admin node. In this article the admin node is shared with e1093, so they can be run on e1093.
Modify /etc/hosts:

# vim /etc/hosts

10.0.1.92 e1092
10.0.1.93 e1093
10.0.1.94 e1094
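It is worth confirming that these names resolve before copying keys over, for example:

# ping -c 1 e1092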
Generate an SSH key pair and copy it to each node:

# ssh-keygen
# ssh-copy-id e1092
# ssh-copy-id e1093
# ssh-copy-id e1094

Install the deployment tool ceph-deploy:

# yum install ceph-deploy
# ceph-deploy --version

Create a cluster.
First create a directory, because ceph-deploy generates some configuration files as it runs; from here on, all files generated by ceph-deploy commands live in this directory.

# mkdir /home/my-cluster
# cd /home/my-cluster

Deploy the new monitor nodes (I use e1092, e1093, and e1094 as mon nodes):

# ceph-deploy new e1092 e1093 e1094

View the files generated in the my-cluster directory:

# ls 
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

Modify the configuration file:

# vim ceph.conf
mon_initial_members = e1092, e1093, e1094
mon_host = 10.0.1.92,10.0.1.93,10.0.1.94
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
osd pool default size = 2
public network = 10.0.1.0/24

A few notes on the parameters:
The first five entries are generated automatically, but I changed auth_cluster_required, auth_service_required, and auth_client_required to none. The default is cephx, which means authentication is required; I do not need authentication here, so I set them to none.
osd pool default size is the number of replicas; I only want two copies, so I set it to 2.
public network is the public network, i.e. the network the OSDs communicate over. Setting it is recommended; if it is not set, some later commands may print warnings. In practice this parameter is your mon nodes' IP with the last octet replaced by 0, followed by /24. For example, my node IPs are in the 10.0.1.* range, so my public network is 10.0.1.0/24.
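Note that if you edit ceph.conf again after the cluster is already deployed, the changed file has to be pushed out to the nodes; a sketch using ceph-deploy, assuming the three hostnames above:

# ceph-deploy --overwrite-conf config push e1092 e1093 e1094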
Deploy the monitors and gather the keys; this generates several keyrings in the my-cluster directory.

# ceph-deploy --overwrite-conf mon create-initial

I originally included part of the output here as a reference; the final part of the output indicates success.
View the files generated in the my-cluster directory:

# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring
ceph.client.admin.keyring   ceph.conf                   ceph-deploy-ceph.log
ceph.mon.keyring

View the status of the cluster:

# ceph -s
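At this point only the monitors exist, so the health output may still warn about OSDs until the OSD step below is done. Health and monitor quorum can also be checked separately:

# ceph health
# ceph quorum_status --format json-pretty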


Next, deploy the OSDs.
Because I do not have enough disks, I use directories instead; if you have disks, there are plenty of tutorials online. Execute on e1092, e1093, and e1094:

# mkdir /var/local/osd1
# chmod -R 777 /var/local/osd1

The following is performed on the node that has ceph-deploy.
Prepare the OSDs:

# ceph-deploy osd prepare e1092:/var/local/osd1 e1093:/var/local/osd1 e1094:/var/local/osd1

Activate the OSDs:

# ceph-deploy osd activate e1092:/var/local/osd1 e1093:/var/local/osd1 e1094:/var/local/osd1
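To verify that the OSDs came up, the OSD tree should show each OSD as up and in, one under each host:

# ceph osd tree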

Check the cluster status again; there should be no problems now.

Problems and solutions during installation:

1. About the yum source problem
It is recommended to use domestic (Chinese) mirrors, such as:
NetEase mirror: http://mirrors.163.com/ceph
Aliyun mirror: http://mirrors.aliyun.com/ceph
USTC mirror: http://mirrors.ustc.edu.cn/ceph
PLCloud mirror: http://mirrors.plcloud.com/ceph
Take Jewel as an example:
http://mirrors.163.com/ceph/rpm-jewel/el7
http://mirrors.163.com/ceph/keys/release.asc
2. Problems when running ceph-deploy --overwrite-conf mon create-initial (in my experience, this is the command most likely to cause problems)
(1) admin-socket problems
These occur when the hostname in /etc/hostname and the name given to the host in /etc/hosts are not the same. For example, the host I installed ceph-deploy on was 10.0.1.90; I named it e1090, i.e. /etc/hostname was set to e1090, but in /etc/hosts I named the host mon. The following problem can then occur:

To diagnose this problem, pay attention to the INFO line just above the error:
Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph-mon.mon.asok mon_status
The error follows this message, meaning the command it ran was probably unsuccessful. First look in the directory to see what is actually there:

# ls /var/run/
ceph-mon.e1090.asok

The file in this directory turned out to be ceph-mon.e1090.asok, not the ceph-mon.mon.asok from the INFO line, so I immediately modified /etc/hosts to make the name consistent with the hostname, and the problem was solved. This problem can also occur if public network is not set in the ceph.conf configuration file, so it is best to set it.
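You can also reproduce the failing check by hand against the socket file that actually exists; with the corrected name, this should print the monitor's status (a sketch based on the INFO line above):

# ceph --cluster=ceph --admin-daemon /var/run/ceph-mon.e1090.asok mon_status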
(2) [WARNIN] monitor e1090 does not exist in monmap

If this problem occurs after problem (1) above has been resolved, it means your mon machine is down. I could not believe it at first (what? I only just deployed it!). But looking at the output, the addr field of the mons entry was 0.0.0.0:0/1, which indicates the monitor is down; note also that the name field still showed the previous name, mon. So although the first problem was solved, it still did not succeed, and in the end I simply changed nodes and stopped using this one.
You can test this yourself by trying another node as the mon node: if another node succeeds where this one does not, the node really is down. Checking the cluster state with ceph -s will also show it as down.
(3) failed to connect to host: e1092, e1093, e1094

Looking at the WARNIN output, I found: no mon key found in host: e1092 (the following WARNIN lines were the same). At this point, check whether the keys were generated under the my-cluster directory; this error means they were not. The solution is to copy the ceph.mon.keyring file from the my-cluster directory to the /var/lib/ceph/mon/ceph-$hostname directory on every node. That is:

# cp /root/cluster/ceph.mon.keyring /var/lib/ceph/mon/ceph-e1093/keyring
# scp /home/chenjuan/my-cluster/ceph.mon.keyring e1092:/var/lib/ceph/mon/ceph-e1092/keyring
# scp /home/chenjuan/my-cluster/ceph.mon.keyring e1094:/var/lib/ceph/mon/ceph-e1094/keyring
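One caveat: since Jewel the monitor daemon runs as the ceph user, so if the copied keyring ends up owned by root the monitor may still be unable to read it. Fixing the ownership is worth trying (my assumption, not part of the original fix):

# chown ceph:ceph /var/lib/ceph/mon/ceph-e1092/keyring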

Execute ceph-deploy --overwrite-conf mon create-initial again; it should now succeed.
Resources:
Distributed File System Ceph: http://www.bijishequ.com/detail/370666?p=
Ceph distributed storage cluster manual installation and deployment guide on CentOS 7.1: http://bbs.ceph.org.cn/question/138
