Ceph and OpenStack Integration (providing cloud disk features for cloud hosts only)


1. Ceph integration with OpenStack (providing cloud disk features for cloud hosts only)

  1. Deploy a cinder-volume node. A possible error during deployment (refer to the official documentation for the deployment process itself):
    Error content:
                     2016-05-25 08:49:54.917 24148 TRACE cinder RuntimeError: could not bind to 0.0.0.0:8776 after trying for seconds
    Problem analysis:
    The RuntimeError means the service could not bind because 0.0.0.0:8776 is already in use. If cinder-api, cinder-scheduler, and cinder-volume are deployed on the same machine, this kind of port conflict can occur. In my case both HAProxy and cinder-api were using port 8776 (you can confirm which process holds the port with the check shown after the workaround below).
    Workaround:
    vim /etc/cinder/cinder.conf
    Add the following two lines:
    osapi_volume_listen=172.16.209.17
    osapi_volume_listen_port=8776
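    To see which process is already bound to 8776 before applying the workaround, a quick check on the node (assuming net-tools is installed) is:
      netstat -tlnp | grep 8776
    The last column shows the PID/program name (for example haproxy or cinder-api) that owns the port.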

  2. On the Ceph monitor node, create a pool, create an account with rwx permission on that pool, and copy the account's key file to the cinder-volume node and to every compute node that needs to use the pool. The Ceph configuration file only has to be sent to the cinder-volume node (the compute nodes obtain the Ceph cluster information through the cinder-volume node, so strictly speaking they do not need it).

    1. Create the storage pool volume-pool. Remember the pool name; both the cinder-volume node and the compute nodes must reference it in their configuration files.

      1. ceph osd pool create volume-pool 128 128

    2. Create an account that is allowed to access the volume-pool pool. Note: the account name must begin with client. (client.cinder, where cinder is the account name and client indicates that its role is a client), and the exported key file must follow the naming rule ceph.client.cinder.keyring (this file must be placed in the /etc/ceph directory of the Ceph client).

      1. ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volume-pool' -o /etc/ceph/ceph.client.cinder.keyring

    3. Send the Ceph cluster configuration file and the key file created in the previous step to the clients (the cinder-volume node and the compute nodes). Note: be sure to copy both files into the /etc/ceph directory of each Ceph client (cinder.conf has many default options, and the default path it uses to look for the Ceph configuration file is /etc/ceph).

      1. scp -r /etc/ceph/ceph.conf <cinder-volume node IP>:/etc/ceph

      2. scp -r /etc/ceph/ceph.client.cinder.keyring <cinder-volume node IP>:/etc/ceph

      3. scp -r /etc/ceph/ceph.conf <compute node IP>:/etc/ceph

      4. scp -r /etc/ceph/ceph.client.cinder.keyring <compute node IP>:/etc/ceph
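      Once the files are in place, a quick sanity check from the cinder-volume node (assuming the keyring was copied as ceph.client.cinder.keyring) is to query the cluster with the cinder identity:
        ceph --id cinder -s
      If this prints the cluster status instead of an authentication error, the account and keyring are set up correctly.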

  3. Configure the cinder-volume node

    1. (Screenshot of the cinder-volume node configuration omitted.)

    2. yum install ceph-common -y

    3. Modify /etc/cinder/cinder.conf:
      [DEFAULT]
      volume_driver = cinder.volume.drivers.rbd.RBDDriver
      storage_availability_zone = blockstrage03-ceph
      rbd_pool = volume-pool
      rbd_ceph_conf = /etc/ceph/ceph.conf
      rbd_flatten_volume_from_snapshot = false
      rbd_max_clone_depth = 5
      rbd_store_chunk_size = 4
      rados_connect_timeout = -1
      glance_api_version = 2
      rbd_user = cinder

    4. /etc/init.d/openstack-cinder-volume restart

    5. tail -f /var/log/cinder/volume.log
      Error: Unable to update stats, RBDDriver-1.1.0 driver is uninitialized.
      Problem analysis: cinder-volume cannot connect to the Ceph cluster, so the driver does not initialize properly; see the key file naming rule and paths in steps 2.2 and 2.3.
      Workaround: rename the key file according to the naming rule in step 2.2, make sure it is stored in the path from step 2.3 on the client, and restart the openstack-cinder-volume service. A quick way to verify pool access is shown below.
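      As a sanity check from the cinder-volume node (assuming the keyring is at /etc/ceph/ceph.client.cinder.keyring), verify that the cinder identity can list the pool:
        rbd ls volume-pool --id cinder
      A listing (even an empty one) with no error means authentication and pool access work; a permission or connection error points back to the keyring name or the ceph.conf path.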

  4. Configure the compute nodes: upgrade the qemu-* packages, restart libvirtd, and import the key file into libvirt

    1. (Screenshot of the compute node configuration omitted.)

    2. yum install ceph-common -y

    3. Note: the current production environment is an OpenStack Icehouse (I) build on CentOS 6.5: openstack-nova-compute-2014.1.5-1.el6.noarch -> libvirt-python-0.10.2-54 -> libvirt 0.10.2 -> qemu 0.12.1, and this version of QEMU does not support the RBD protocol. openstack-nova-compute-2014.1.5-1 only supports libvirt-python-0.10.2-54, so if you upgrade libvirt you also have to upgrade libvirt-python, and then openstack-nova-compute as well. In practice you only need to replace the original QEMU packages with the same QEMU version built with Ceph (RBD) support.

    4. Verification commands (this is what the output should look like after the upgrade)

      1. # rpm -qa | grep qemu
        qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
        qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
        gpxe-roms-qemu-0.9.7-6.14.el6.noarch
        qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64
        qemu-guest-agent-0.12.1.2-2.415.el6.3ceph.x86_64

      2. # virsh version
        Compiled against library: libvirt 0.10.2
        Using library: libvirt 0.10.2
        Using API: QEMU 0.10.2
        Running hypervisor: QEMU 0.12.1

      3. # /usr/libexec/qemu-kvm -drive format=?
        Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster gluster gluster gluster rbd

      4. # /usr/libexec/qemu-kvm -M ?
        Supported machines are:
        pc         RHEL 6.5.0 PC (alias of rhel6.5.0)
        rhel6.5.0  RHEL 6.5.0 PC (default)
        rhel6.4.0  RHEL 6.4.0 PC
        rhel6.3.0  RHEL 6.3.0 PC
        rhel6.2.0  RHEL 6.2.0 PC
        rhel6.1.0  RHEL 6.1.0 PC
        rhel6.0.0  RHEL 6.0.0 PC
        rhel5.5.0  RHEL 5.5.0 PC
        rhel5.4.4  RHEL 5.4.4 PC
        rhel5.4.0  RHEL 5.4.0 PC
        If a virtual machine runs into a machine-type prompt like the above after the upgrade, edit the virtual machine's XML configuration, set machine='rhel6.5.0' (or simply delete the machine attribute), and then redefine it with virsh define <vm name>.xml, as sketched below.
        <os>
        <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
        <boot dev='hd'/>
        <smbios mode='sysinfo'/>
        </os>
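        A minimal sketch of the redefine step, assuming the instance is named instance-00000031 (substitute your own domain name):
          virsh dumpxml instance-00000031 > instance-00000031.xml
          (edit the <os><type> element: set machine='rhel6.5.0' or drop the machine attribute)
          virsh define instance-00000031.xml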

    5. Upgrade method (needed when the supported-formats output does not yet include rbd; before the upgrade it looks like: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster gluster gluster gluster):

    6. Link: http://apt-mirror.sepia.ceph.com/centos6-qemu-kvm/

    7. Download the packages whose names begin with qemu- and put them in a local directory

    8. Change into that directory and run rpm -Uvh qemu-* --force

    9. service libvirtd restart

    10. /usr/libexec/qemu-kvm -drive format=?

    11. Create a secret.xml file. Note: the name inside it must be the account created in step 2.2 (client.cinder)

      1. cat > secret.xml <<EOF
        <secret ephemeral='no' private='no'>
        <usage type='ceph'>
        <name>client.cinder secret</name>
        </usage>
        </secret>
        EOF

    12. virsh secret-define --file secret.xml

    13. Find the UUID of the secret you just defined for client.cinder

      1. virsh secret-list

    14. Find the key in the key file supplied by Ceph

      1. cat /etc/ceph/ceph.client.cinder.keyring

    15. virsh secret-set-value $(virsh secret-list | grep client.cinder | awk '{print $1}') --base64 $(cat /etc/ceph/ceph.client.cinder.keyring | awk 'NR==2{print $3}')
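      To confirm the value was stored correctly (a quick check, assuming the secret was defined as above), compare it against the key in the keyring file:
        virsh secret-get-value $(virsh secret-list | grep client.cinder | awk '{print $1}')
      The output should match the key= line in /etc/ceph/ceph.client.cinder.keyring.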

    16. vim /etc/nova/nova.conf
      rbd_user=cinder
      rbd_secret_uuid=dda10a4e-c03c-b029-ef07-ce86e7a07bdd    ------------------> value from: virsh secret-list | grep client.cinder | awk '{print $1}'

    17. /etc/init.d/openstack-nova-compute restart
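    A simple end-to-end check, assuming the cinder client is configured on a controller node: create a small test volume and confirm that it appears as an RBD image in the pool.
      cinder create --display-name ceph-test 1
      rbd ls volume-pool --id cinder
    If the new volume's ID shows up in the rbd listing (prefixed with volume-), the cinder-volume backend is writing to Ceph, and the volume can then be attached to an instance with nova volume-attach.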

Extension: Cinder provides cloud disks for cloud hosts; the mapping relationship on the compute node

Analysis: after attaching a cloud disk to a cloud host, log in to the compute node where that cloud host runs.

Inside the cloud host, fdisk -l shows a new hard disk.

On the compute node where the cloud host runs, a new hard disk also appears.

The point is that the cinder service actually maps the hard disk to the compute node; read and write requests are handled by the compute node, which in turn maps the hard disk into its own virtual cloud host.

On the compute node:

1.

# ll /dev/disk/by-path/ip-10.5.0.20\:3260-iscsi-iqn.2010-10.org.openstack\:volume-26f04424-7ddb-4756-9648-e023b84bcd5e-lun-1

lrwxrwxrwx. 1 root root 9 May 18:43 /dev/disk/by-path/ip-10.5.0.20:3260-iscsi-iqn.2010-10.org.openstack:volume-26f04424-7ddb-4756-9648-e023b84bcd5e-lun-1 -> ../../sdb

2.

# cat /etc/libvirt/qemu/instance-00000031.xml | grep disk
<disk type='file' device='disk'>
<source file='/var/lib/nova/instances/db14cd53-b791-4f0b-91cd-0e160dd7b794/disk'/>
</disk>
<disk type='block' device='disk'>
<source dev='/dev/disk/by-path/ip-10.5.0.20:3260-iscsi-iqn.2010-10.org.openstack:volume-26f04424-7ddb-4756-9648-e023b84bcd5e-lun-1'/>
</disk>
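Another way to see the same mapping from the compute node, assuming the libvirt domain is instance-00000031:

# virsh domblklist instance-00000031

This lists each target device (vda, vdb, ...) together with its backing file or block device; for the cloud disk the source is the iSCSI by-path device shown above.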


This article is from the "A Good Person" blog; please keep this source when reproducing it: http://egon09.blog.51cto.com/9161406/1783314

