Building OpenStack Icehouse, Part 10: GlusterFS-based Cloud Disks

Tags: nfsd, glusterfs, gluster

Install GlusterFS on both the compute node and the control node.

cd /etc/yum.repos.d/

wget http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.3/CentOS/glusterfs-epel.repo

yum install glusterfs-server
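
Both nodes need the same repo and package; if they are reachable over SSH, the steps above can be driven from one machine. A minimal sketch, assuming root SSH access to both hosts:

# Repeat the repo setup and install on both nodes (assumes root SSH access)
for node in linux-node1 linux-node2; do
  ssh root@$node 'cd /etc/yum.repos.d/ \
    && wget http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.3/CentOS/glusterfs-epel.repo \
    && yum -y install glusterfs-server'
done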

Verify that the installation succeeded on both nodes.

[root@linux-node1 ~]# glusterfs -V
glusterfs 3.4.5 built on Jul 24 2014 19:11:45
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


[root@linux-node2 yum.repos.d]# glusterfs -V
glusterfs 3.4.5 built on Jul 24 2014 19:11:45
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

Start the glusterd service on both nodes.



[root@linux-node1 ~]# /etc/init.d/gluster
glusterd     glusterfsd
(Tab completion offers two init scripts; glusterd is the management daemon we want.)

[root@linux-node1 ~]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]


[root@linux-node2 ~]# /etc/init.d/glusterd start
Starting glusterd:                                         [  OK  ]
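
On CentOS 6 the SysV service does not start at boot unless it is enabled; a small follow-up, run on both nodes:

# Enable glusterd at boot and confirm its runlevels
chkconfig glusterd on
chkconfig --list glusterd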



[root@linux-node1 ~]# cat /etc/hosts
127.0.0.1   localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost.localdomain localhost6 localhost6.localdomain6
192.168.33.11 linux-node1.openstack.com linux-node1
192.168.33.12 linux-node2.openstack.com linux-node2

[root@linux-node2 ~]# cat /etc/hosts
127.0.0.1   localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost.localdomain localhost6 localhost6.localdomain6
192.168.33.12 linux-node2.openstack.com linux-node2
192.168.33.11 linux-node1.openstack.com linux-node1
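
Before probing peers, it is worth confirming that both names resolve to the right addresses; a quick check to run on each node:

# Verify name resolution and reachability for both cluster members
for h in linux-node1.openstack.com linux-node2.openstack.com; do
  ping -c 1 $h
done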



[root@linux-node1 ~]# mkdir /data/gg1
[root@linux-node2 ~]# mkdir /data/gg2

Next, create a replicated volume that keeps two copies of the data.


[root@linux-node2 ~]# gluster peer detach linux-node1.openstack.com
peer detach: success
[root@linux-node2 ~]# gluster peer status
peer status: No peers present


Now add node1 back to the cluster by probing it from node2.

GlusterFS has no central node; all peers are equal, so the probe could be run from either side.

[root@linux-node2 ~]# gluster peer probe linux-node1.openstack.com
peer probe: success
[root@linux-node2 ~]# gluster peer status
Number of Peers: 1

Hostname: linux-node1.openstack.com
Port: 24007
Uuid: af7dd374-12c0-4201-8415-fe1a1e22f851
State: Peer in Cluster (Connected)

The force keyword is needed here because the bricks sit on the root partition, which gluster would otherwise refuse.

[root@linux-node2 ~]# gluster volume create demo replica 2 linux-node1.openstack.com:/data/gg1 linux-node2.openstack.com:/data/gg2 force
volume create: demo: success: please start the volume to access data


Start the volume:

[root@linux-node2 ~]# gluster vol start demo
volume start: demo: success


The volume info below confirms that the data is kept in two copies, one brick on each node.

[root@linux-node2 ~]# gluster vol info

Volume Name: demo
Type: Replicate
Volume ID: ced081c2-a4de-4f00-8260-5fc0fb36ea9f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: linux-node1.openstack.com:/data/gg1
Brick2: linux-node2.openstack.com:/data/gg2
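
To see the replication in action, you can mount the volume with the GlusterFS FUSE client and write a test file; a sketch (the /mnt/gv-test mount point is made up for this test):

# Mount the volume via FUSE, write a file, and check both bricks
mkdir -p /mnt/gv-test
mount -t glusterfs 192.168.33.11:/demo /mnt/gv-test
touch /mnt/gv-test/hello
ls /data/gg1/   # on linux-node1: the file appears here
ls /data/gg2/   # on linux-node2: and here as well
umount /mnt/gv-test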


How do we actually use this volume? Edit /etc/cinder/cinder.conf; three options need to change (the numbers are the approximate line numbers in the Icehouse packaging of the file):

Line 1830:
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver

Line 1107:
glusterfs_shares_config = /etc/cinder/glusterfs_shares

Line 1120:
glusterfs_mount_point_base = $state_path/mnt
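
The glusterfs_shares file named above does not exist yet and must be created by hand; and if you would rather not hunt for line numbers, crudini (if installed) can set the same options. A sketch:

# Set the three options in /etc/cinder/cinder.conf (equivalent to the edits above)
crudini --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
crudini --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/glusterfs_shares
crudini --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base '$state_path/mnt'
# Tell Cinder which GlusterFS share to use
echo '192.168.33.11:/demo' > /etc/cinder/glusterfs_shares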



The backend was previously NFS, so unmount the old NFS share first:

umount 192.168.33.11:/data/nfs

Then run mount to confirm it is gone.


Restart the Cinder services:

[root@linux-node1 ~]# for i in {api,scheduler,volume}; do /etc/init.d/openstack-cinder-$i restart; done
Stopping openstack-cinder-api:                             [  OK  ]
Starting openstack-cinder-api:                             [  OK  ]
Stopping openstack-cinder-scheduler:                       [  OK  ]
Starting openstack-cinder-scheduler:                       [  OK  ]
Stopping openstack-cinder-volume:                          [  OK  ]
Starting openstack-cinder-volume:                          [  OK  ]
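
A quick way to confirm the services came back up, assuming the cinder client is installed and admin credentials have been sourced (the keystonerc_admin path is just an example):

# Check that cinder-scheduler and cinder-volume report state "up"
source ~/keystonerc_admin
cinder service-list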

[root@linux-node1 ~]# mount
/dev/sda3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.33.11:/demo on /var/lib/cinder/mnt/4ac2acc000ff60654259b4a905bdecb3 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)



Create a cloud disk

(Screenshots: creating the volume from the Horizon dashboard.)
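
The same volume can also be created from the command line; a sketch using the Icehouse-era cinder client (the volume name is made up, credentials assumed sourced):

# Create a 1 GB volume and list it
cinder create --display-name demo-volume 1
cinder list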

Let's see where the new cloud disk is actually stored.

[root@linux-node1 ~]# cat /etc/cinder/glusterfs_shares
192.168.33.11:/demo

[root@linux-node1 ~]# mount
/dev/sda3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.33.11:/demo on /var/lib/cinder/mnt/4ac2acc000ff60654259b4a905bdecb3 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

[root@linux-node1 ~]# ls -l /var/lib/cinder/mnt/4ac2acc000ff60654259b4a905bdecb3
total 4
-rw-rw-rw- 1 root root 1073741824 Aug 27 15:58 volume-ca55ee76-956c-4fb6-96ce-d0a5d1f45aa9
[root@linux-node1 ~]# ls -l /data/gg1/
total 4
-rw-rw-rw- 2 root root 1073741824 Aug 27 15:58 volume-ca55ee76-956c-4fb6-96ce-d0a5d1f45aa9

[root@linux-node2 ~]# ls -l /data/gg2/
total 4
-rw-rw-rw- 2 root root 1073741824 Aug 27 15:58 volume-ca55ee76-956c-4fb6-96ce-d0a5d1f45aa9

The file shows up in three places, but the Cinder mount point is only a client view of the volume; the actual data is the two replicated copies on the bricks.

Attach to a VM instance


(Screenshots: attaching the volume to an instance from the dashboard; inside the guest the new disk shows up as /dev/vdb.)
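
The attach step also has a CLI equivalent; a sketch, where INSTANCE is whatever `nova list` shows for your VM and VOLUME_ID comes from `cinder list`:

# Attach the volume; inside the guest it appears as /dev/vdb
nova volume-attach INSTANCE VOLUME_ID /dev/vdb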

Next, partition the disk inside the VM:

sudo fdisk /dev/vdb

(Screenshots: creating a partition with fdisk.)

sudo mkfs.ext3 /dev/vdb1

(Screenshots: formatting and verifying the new file system.)
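
Once formatted, the disk mounts like any other block device inside the VM; a short sketch:

# Mount the new file system and check its size
sudo mkdir -p /mnt/vol1
sudo mount /dev/vdb1 /mnt/vol1
df -h /mnt/vol1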

For raw performance a local file system is still the best choice; if you back cloud disks with distributed storage over the network like this, put the storage traffic on a 10G NIC.

