VSM (Virtual Storage Manager for Ceph) installation tutorial

Source: Internet

Author: User

Reprinted with attribution from chenxianpao: http://www.cnblogs.com/chenxianpao/p/5770271.html

First, installation environment

OS: CentOS 7.2

VSM: v2.1 (released version)

Second, installation instructions

The VSM system has two roles: vsm-controller and vsm-agent. The vsm-agent is deployed on each Ceph node, while the vsm-controller is deployed on a separate node. The vsm-controller might also be deployable on a Ceph node, but I have not tried this.

VSM is distributed in two forms: a released (binary) version and a source version, which must be compiled. I am using the released version.

After unpacking, the folder structure looks like this:

├── changelog
├── installrc
├── install.md
├── install.sh
├── uninstall.sh
├── license
├── manifest
│   ├── cluster.manifest.sample
│   └── server.manifest.sample
├── notice
├── readme
└── vsmrepo
    ├── python-vsmclient_2.0.0-123_amd64.deb
    ├── packages.gz
    ├── vsm_2.0.0-123_amd64.deb
    ├── vsm-dashboard-2.0.0-123_amd64.deb
    └── vsm-deploy_2.0.0-123_amd64.deb

The manifest folder stores the configuration information for each node. The installrc file configures which nodes the installation targets. The vsmrepo folder holds the VSM dependency packages; install.sh will fetch them from GitHub automatically, and if that fails, download the v2.1 dependency packages from https://github.com/01org/vsm-dependencies/ and place them in the vsmrepo folder. The get_pass.sh script, run after installation completes, outputs the admin user's password.

Third, installation steps

Take four servers as an example: one controller node and three agent nodes.

1. Modify the installrc file, filling in the controller node IP and the agent node IPs:

agent_address_list="192.168.123.21 192.168.123.22 192.168.123.23"
controller_address="192.168.123.10"

2. Then create four new folders in the manifest folder, named after the four IP addresses:

├── 192.168.123.10
├── 192.168.123.21
├── 192.168.123.22
├── 192.168.123.23
├── cluster.manifest.sample
└── server.manifest.sample
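Step 2 can be done in one command; a minimal sketch, assuming the example IP addresses above and that you are running from the unpacked release directory:

```shell
# Create one folder per node IP under manifest/. (mkdir -p also creates
# manifest/ itself if it is missing, so this sketch is self-contained.)
mkdir -p manifest/192.168.123.10 manifest/192.168.123.21 \
         manifest/192.168.123.22 manifest/192.168.123.23
ls manifest
```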

3. Copy cluster.manifest.sample into the controller node's IP folder and rename it to cluster.manifest. Modify the storage_class, storage_group, addr and related settings: set the storage classes to match the kinds of drives you actually use, and change the IP addresses to the corresponding network segments. Copy server.manifest.sample into the remaining three node IP folders, rename each to server.manifest, and modify the vsm_controller_ip, role, and disk paths. The specific configuration options are described in the official configuration documents:

https://github.com/01org/virtual-storage-manager/blob/master/INSTALL.md#Configure_Cluster_Manifest

https://github.com/01org/virtual-storage-manager/blob/master/INSTALL.md#Configure_Server_Manifest

├── 192.168.123.10
│   └── cluster.manifest
├── 192.168.123.21
│   └── server.manifest
├── 192.168.123.22
│   └── server.manifest
├── 192.168.123.23
│   └── server.manifest
├── cluster.manifest.sample
└── server.manifest.sample
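For orientation, a server.manifest for an agent node is an INI-style file roughly of the following shape. This is an illustrative sketch only: the section names are taken from the INSTALL.md documents linked above, and the values are this tutorial's example controller IP and the partitions created in step 7 below; consult those documents for the authoritative format.

```
[vsm_controller_ip]
192.168.123.10

[role]
storage
monitor

[7200_rpm_sata]
# format: [osd data device]  [journal device]
/dev/sdb1  /dev/sdc1
```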

4. Add the following entries to the /etc/hosts file on all four servers:

192.168.123.10 vsm-controller

192.168.123.21 vsm-node1

192.168.123.22 vsm-node2

192.168.123.23 vsm-node3

5. Execute the following command on each node to set the corresponding hostname (shown here for vsm-node1):

$ sudo hostnamectl set-hostname vsm-node1

6. On CentOS you must add the EPEL repository; the installation process needs it to download dependency packages. If the download fails, remove the file name from the end of the URL and browse for the latest epel-release rpm.

$ sudo yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

7. Execute the following commands on the three agent nodes to partition /dev/sdb and /dev/sdc for Ceph's use.

$ sudo parted /dev/sdb -- mklabel gpt
Information: You may need to update /etc/fstab.

$ sudo parted -a optimal /dev/sdb -- mkpart primary 1MB 100%
Information: You may need to update /etc/fstab.

$ sudo parted /dev/sdc -- mklabel gpt
Information: You may need to update /etc/fstab.

$ sudo parted -a optimal /dev/sdc -- mkpart primary 1MB 100%
Information: You may need to update /etc/fstab.

8. Finally, execute the following commands on the controller node to set up passwordless SSH to the agent nodes.

cephuser@vsm-controller:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

cephuser@vsm-controller:~$ ssh-copy-id vsm-node1

cephuser@vsm-controller:~$ ssh-copy-id vsm-node2

cephuser@vsm-controller:~$ ssh-copy-id vsm-node3

9. When everything is in place, execute ./install.sh -u root -v 2.2. The installation process downloads the dependency packages on the controller node, then copies them to the agent nodes for installation. When it finishes, the dashboard can be reached by entering https://192.168.123.10/dashboard/vsm in a browser. Execute the get_pass.sh script to get the admin user's password. Once the cluster is created, it is ready to use.

Note: v2.2 is still a beta version. After installation its web interface is in Chinese, but in my testing, button submissions in the web interface failed; since I do not know front-end development, I have not figured out why. I recommend using v2.1 for now.

Installing on an intranet server without Internet access: on a machine with Internet access, modify /etc/yum.conf to change the value of keepcache from 0 (packages are deleted after installation) to 1 (packages are kept after installation), then run the installation. After it completes, copy all of the cached packages under /var/cache/yum to the same directory on the intranet machine and run the installation command there. If yum has problems, point its repository configuration at an intranet source.
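The keepcache change described above can be scripted. A minimal sketch, demonstrated here on a local sample file (on the real machine, run the same sed against /etc/yum.conf as root; stock CentOS 7 ships with a keepcache=0 line in it):

```shell
# Create a sample file mimicking the relevant part of /etc/yum.conf,
# then flip keepcache=0 -> keepcache=1 so downloaded RPMs are retained
# under /var/cache/yum.
printf '[main]\ncachedir=/var/cache/yum/$basearch/$releasever\nkeepcache=0\n' > yum.conf.sample
sed -i 's/^keepcache=0$/keepcache=1/' yum.conf.sample
grep '^keepcache' yum.conf.sample   # prints: keepcache=1
```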
