Howto: install the Ceph Distributed File System on FC12

Document directory
  • 1. Design a Ceph Cluster
  • 2. Install Ceph on each node
  • 3. Configure the Ceph Cluster
  • 4. Enable Ceph to work
  • 5. Problems Encountered during setup
  • 6. References
  • Appendix 1 modify hostname
  • Appendix 2 password-less SSH access

Ceph is a relatively new distributed file system developed by the UCSC storage systems team. It is a network file system based on OSDs (Object Storage Devices). Related papers were published at OSDI '06, MSST '03 and '04, among others. Recently, the Ceph client was merged into Linux kernel 2.6.34.
I recently spent some time building a Ceph cluster in VMware virtual machines. This post records the setup process and the problems encountered and solved along the way.

1. Design a Ceph Cluster

Ceph is mainly divided into four parts: client / monitor / MDS / OSD.
Client: exports a POSIX file system interface for applications to call, and talks to the monitor/MDS/OSDs for metadata and data. The earliest client was implemented with FUSE; it has since been rewritten as a kernel client, so a ceph.ko kernel module must be compiled before it can be used.
Monitor: manages the whole cluster and exports a network file system to clients; a client can mount the Ceph file system with mount -t ceph monitor_ip:/ mount_point. According to the official documentation, three monitors are enough to keep the cluster reliable. Corresponding daemon: cmon.
MDS: the metadata server. Ceph can run multiple MDSes to form a metadata server cluster; this is where Ceph's dynamic subtree (directory) partitioning comes in for load balancing. Corresponding daemon: cmds.
OSD: simulates an object storage device. It wraps a local file system and exposes an object storage interface to the outside. The local file system can be ext2 or ext3, but the Ceph authors consider such file systems a poor fit for the OSD access pattern; they previously implemented EBOFS for this purpose, and Ceph has since switched to btrfs (search for btrfs for details). Corresponding daemon: cosd.

Ceph scales to hundreds of nodes or more. At that scale the four components are best spread across different machines, but they can also all be deployed on a single machine.
In my test environment I used four virtual machines: one client, one node running the monitor and the MDS, and one OSD on each of the remaining two nodes.
The specific configuration is as follows:

Hostname      IP address        Role
ceph_client   192.168.233.180   Ceph client
ceph_mds      192.168.233.182   Monitor & MDS
ceph_osd      192.168.233.181   OSD
ceph_osd1     192.168.233.183   OSD

Ceph also requires some preparation on all four nodes:
1. set each node's hostname and make the nodes reachable from one another by hostname (see Appendix 1);
2. set up password-less SSH access between the nodes (see Appendix 2).

2. Install Ceph on each node
2.1 Client

The client mainly needs the ceph.ko module, which can be built in two ways: in-tree and out-of-tree. The former downloads the latest Linux kernel source, enables the Ceph option, and builds the kernel; the latter downloads the standalone Ceph client source and builds the module outside the kernel tree.
Method 1:

$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
$ cd ceph-client
$ make menuconfig
# Search for "ceph"; you will find two Ceph-related options. Enable them. The general kernel build procedure is not described here; simply run:
$ make && make modules && make modules_install && make install && reboot
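
Before that final make, it can be worth a quick check that the option really ended up in .config (a small optional sanity check; CONFIG_CEPH_FS is the kernel client option I expect here):

$ grep CEPH .config
# expect to see CONFIG_CEPH_FS=m (module) or =y (built in)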

Method 2:

# Download the source code
$ git clone git://ceph.newdream.net/git/ceph-client-standalone.git
$ git branch master-backport origin/master-backport
$ git checkout master-backport
# Build
$ make                            # build against the currently running kernel
$ make KERNELDIR=/usr/src/...     # or build against a kernel tree at another path
# ceph.ko is produced when the build succeeds
$ make install
$ modprobe ceph                   # or: insmod ceph.ko
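
Either way, it is easy to verify that the module actually loaded (a minimal check; the "ceph: loaded ..." line is the same one that appears in the dmesg output in section 5):

$ lsmod | grep ceph
$ dmesg | tail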

2.2 Ceph installation on the other nodes

The code running on the other nodes is user-space. Download a recent release from the Ceph official website (the latest at the time of writing is 0.20.1) and build it in the conventional way; a sketch follows.
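
A minimal sketch of that conventional build, assuming the 0.20.1 tarball (the file name and configure options are assumptions, not taken from the original article):

$ tar xzf ceph-0.20.1.tar.gz
$ cd ceph-0.20.1
$ ./configure          # default prefix is /usr/local, which is why ceph.conf ends up under /usr/local/etc/ceph (see 3.1)
$ make
$ make install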

3. Configure the Ceph Cluster

Except for the client, every node needs a configuration file, and it must be exactly the same on all of them.

3.1 ceph.conf

This file lives under /etc/ceph; if the prefix was not changed when running ./configure, it will be under /usr/local/etc/ceph instead.

[root@ceph_mds ceph]# cat ceph.conf
;
; Sample ceph.conf file.
;
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.
;
; If a 'host' is defined for a daemon, the start/stop script will
; verify that it matches the hostname (or else ignore it). If it is
; not defined, it is assumed that the daemon is intended to start on
; the current host (e.g., in a setup with a startup.conf on each
; node).

; global
[global]
    ; enable secure authentication
    ; auth supported = cephx

; monitors
;  You need at least one. You need at least three if you want to
;  tolerate any node failures. Always create an odd number.
[mon]
    mon data = /data/mon$id

    ; some minimal logging (just message traffic) to aid debugging
    debug ms = 1

[mon0]
    host = ceph_mds
    mon addr = 192.168.233.182:6789

; mds
;  You need at least one. Define two to get a standby.
[mds]
    ; where the mds keeps its secret encryption keys
    keyring = /data/keyring.$name

[mds.alpha]
    host = ceph_mds

; osd
;  You need at least one. Two if you want data to be replicated.
;  Define as many as you like.
[osd]
    sudo = true

    ; This is where the btrfs volume will be mounted.
    osd data = /data/osd$id

    ; Ideally, make this a separate disk or partition. A few GB
    ; is usually enough; more if you have fast disks. You can use
    ; a file under the osd data dir if need be
    ; (e.g. /data/osd$id/journal), but it will be slower than a
    ; separate disk or partition.
    ; osd journal = /data/osd$id/journal

[osd0]
    host = ceph_osd
    btrfs devs = /dev/sdb1

[osd1]
    host = ceph_osd1
    btrfs devs = /dev/sdb1

; access control
[group everyone]
    ; you probably want to limit this to a small list of
    ; hosts. clients are fully trusted.
    addr = 0.0.0.0/0

[mount /]
    allow = %everyone


3.2 fetch_config script

This file also lives in the same directory as ceph.conf. It is used to copy the ceph.conf file to every node in the cluster.

[root@ceph_mds ceph]# cat fetch_config
#!/bin/sh

conf="$1"

## fetch ceph.conf from some remote location and save it to $conf.
##
## make sure this script is executable (chmod +x fetch_config)
##
## examples:
##
## from a locally accessible file
# cp /path/to/ceph.conf $conf

## from a URL:
# wget -q -O $conf http://somewhere.com/some/ceph.conf

## via scp
# scp -i /path/to/id_dsa user@host:/path/to/ceph.conf $conf

scp root@ceph_mds:/usr/local/etc/ceph/ceph.conf $conf

Here the scp method is used. Alternatively, NFS can be used to share ceph.conf. In short, the goal is for the whole cluster to use one and the same ceph.conf file.
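
A quick way to confirm that fetch_config works from another node (a sketch; the path assumes the default /usr/local prefix from section 3.1):

$ /usr/local/etc/ceph/fetch_config /tmp/ceph.conf.fetched
$ head /tmp/ceph.conf.fetched     # should show the sample header from section 3.1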


3.3 /etc/init.d/ceph script

When Ceph is compiled, this script is generated as init-ceph in src/, from the init-ceph.in template.

If you want the Ceph cluster to start automatically, copy the script to /etc/init.d/ and register the service with chkconfig; a sketch follows.

This service only needs to be installed on the monitor node.
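
A minimal sketch of that registration, run on the monitor node (it assumes the build tree from section 2.2 is still available; if chkconfig complains that the script does not support chkconfig, add the usual "# chkconfig: 2345 60 80" header comment to it first):

$ cp src/init-ceph /etc/init.d/ceph
$ chmod +x /etc/init.d/ceph
$ chkconfig --add ceph
$ chkconfig ceph on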

4. Enable Ceph to work

4.1 Create the Ceph file system

Run:

$ mkcephfs -c /etc/ceph/ceph.conf --allhosts --mkbtrfs -k /etc/ceph/keyring.bin

It will automatically perform the corresponding configuration on each node according to the settings in ceph.conf.

4.2 Start the Ceph file system

Run:

$ /etc/init.d/ceph -a start

4.3 Mount the Ceph file system on the client

First load ceph.ko, then mount:

$ modprobe ceph

$ mount -t ceph ceph_mds:/ /mnt/ceph

At this point the Ceph cluster is up and running; a quick sanity check is sketched below.
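
A minimal sanity check from the client (assuming the mount point /mnt/ceph used above):

$ df -h /mnt/ceph                              # the Ceph file system should appear with the capacity of the two OSDs
$ cp /etc/hosts /mnt/ceph/ && ls -l /mnt/ceph  # a small write/read round trip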

5. Problems Encountered during setup

Of course, quite a few problems came up along the way. For example, mounting the Ceph file system on the client failed, and dmesg showed the following:

[root@ceph_client ~]# dmesg -c
ceph: loaded (mon/mds/osd proto 15/32/24, osdmap 5/5 5/5)
ceph: mon0 192.168.233.182:6789 connection failed
ceph: mon0 192.168.233.182:6789 connection failed
ceph: mon0 192.168.233.182:6789 connection failed
ceph: mon0 192.168.233.182:6789 connection failed
ceph: mon0 192.168.233.182:6789 connection failed
ceph: mon0 192.168.233.182:6789 connection failed

I then ran netstat on the monitor to check whether port 6789 was open (netstat -anp shows numeric addresses and the owning processes) and found that the port was indeed listening; without the -n option, port 6789 is shown under its service name, smc-https. The commands are sketched below.
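
Roughly (a sketch; the exact output depends on the system):

# on the monitor
$ netstat -anp | grep 6789        # numeric form: cmon should be listening on 192.168.233.182:6789
$ netstat -ap | grep smc-https    # without -n, the same socket shows up under its /etc/services name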

Then I scanned from the client to check whether port 6789 on the monitor was reachable. The command is:

[root@ceph_client ~]# nmap -p 6789 192.168.233.182

Starting Nmap 5.00 ( http://nmap.org ) at 2010-05-25 CST
Interesting ports on ceph_mds (192.168.233.182):
PORT     STATE    SERVICE
6789/tcp filtered ibm-db2-admin
MAC Address: 00:0C:29:41:D2:12 (VMware)

Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds

The port turned out to be in the filtered state, even though I thought there was no firewall... I checked the iptables service... stopped it... and then everything was OK. It took quite a while to track down this error, but netstat and nmap proved to be two very useful commands. The fix is sketched below.
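
What finally fixed it, as a sketch (stopping the firewall outright is acceptable on a throwaway test cluster; on anything shared, just open the monitor port instead):

$ service iptables stop
$ chkconfig iptables off
# or, more selectively:
$ iptables -I INPUT -p tcp --dport 6789 -j ACCEPT
$ service iptables save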

Another example: when creating and starting the Ceph file system, mkcephfs goes to the OSD nodes to create the btrfs file systems and prints output like this (the first line is truncated in my notes):

/sdb1/data/osd1"
Scanning for btrfs filesystems
failed to read /dev/fd0
Starting Ceph osd1 on ceph_osd1...

This "failed" message does not matter: one of the scripts scans all block devices on the OSD node to find out which ones carry a btrfs file system. In my VM the disk is a SATA disk and there is no /dev/fd0 (the floppy device), so I am not sure why that device is probed at all.

In short, it is not an error.

6. References

1. http://www.ece.umd.edu/~posulliv/ceph/cluster_build.html

2. http://ceph.newdream.net/wiki/Main_Page

3. http://faketjs.blogspot.com/2010/04/ceph-install-on-fedora.html  # blocked by the GFW; search for a way to view it

Appendix 1 modify hostname

Search Google or Baidu for the details.

This mainly involves the /etc/sysconfig/network file, the hostname command, and the /etc/hosts file; a sketch for this cluster follows.
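
As a sketch for the node ceph_mds (the other three nodes are analogous; the IP addresses are the ones from the table in section 1):

$ hostname ceph_mds                                                    # takes effect immediately, lost on reboot
$ sed -i 's/^HOSTNAME=.*/HOSTNAME=ceph_mds/' /etc/sysconfig/network    # makes it persistent
$ cat >> /etc/hosts <<EOF
192.168.233.180 ceph_client
192.168.233.182 ceph_mds
192.168.233.181 ceph_osd
192.168.233.183 ceph_osd1
EOF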

Appendix 2 password-less SSH access

The principle is the public/private key mechanism: if node A is to log in to node B without a password, A's public key has to be added to B's list of authorized keys.

$ ssh-keygen -d

# This command generates several files under ~/.ssh. The one used here is id_dsa.pub, the public key of node A. Append its contents to the
# authorized_keys file in the ~/.ssh/ directory on the peer node B (create the file if it does not exist). After that, A can ssh to B without a password. A sketch follows.
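
A sketch for letting ceph_mds reach ceph_osd without a password (repeat for every pair of nodes that needs it; ssh-copy-id ships with openssh-clients and simply appends the key to authorized_keys on the far side):

$ ssh-keygen -t dsa                               # accept the defaults; creates ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub
$ ssh-copy-id -i ~/.ssh/id_dsa.pub root@ceph_osd  # appends the public key to root's authorized_keys on ceph_osd
$ ssh root@ceph_osd hostname                      # should print ceph_osd without prompting for a password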
