[Hengtian Cloud Technology Sharing Series 10] OpenStack Block Storage Technology


Original article: http://www.hengtianyun.com/download-show-id-101.html

Block Storage provides an interface for block-device storage. A block storage volume must be attached to a virtual machine (or bare-metal host) before it can be used. These volumes are persistent: they can be detached from a running instance and re-attached elsewhere, and the data remains intact. An instance's local storage in OpenStack is not persistent, so data must be kept on a mounted volume to survive. Cinder provides the management functions for the storage block units that actually back these volumes.

1. Standalone Block Storage

1.1 LVM

LVM is short for Logical Volume Manager. It was first implemented by Heinz Mauelshagen on the Linux 2.4 kernel. LVM combines partitions from one or more hard disks into what is logically a single large disk. When disk space runs low, partitions from other disks can be added to the pool, so disk space can be managed dynamically, which is far more flexible than ordinary disk partitioning.

Compared with traditional disks and partitions, LVM provides a higher-level view of disk storage. It lets the system administrator allocate storage space to applications and users far more conveniently. Storage volumes under LVM management can be resized or removed as needed (the file-system tools may need to be updated as well). LVM also allows storage volumes to be managed in user-defined groups, so administrators can identify them by intuitive names such as "sales" and "development" instead of physical device names such as sda and sdb.
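As a sketch of the workflow described above (these are privileged admin commands; the device names /dev/sdb1, /dev/sdc1, /dev/sdd1 and the "sales" volume group name are hypothetical examples, not from the original article):

```shell
# Mark two partitions as LVM physical volumes (requires root).
pvcreate /dev/sdb1 /dev/sdc1

# Combine them into one volume group named "sales" -- an
# administrator-friendly name that replaces raw device names.
vgcreate sales /dev/sdb1 /dev/sdc1

# Carve a 10 GiB logical volume out of the pool and format it.
lvcreate --size 10G --name reports sales
mkfs.ext4 /dev/sales/reports

# Later, when space runs low, add another disk to the same pool.
pvcreate /dev/sdd1
vgextend sales /dev/sdd1
```

The point is that "sales" is a pool spanning several physical disks, and capacity can be added to it without touching the volumes already carved out of it.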


Device-mapper is a generic device-mapping mechanism that underpins logical volume management. It provides a highly modular kernel architecture for the block-device drivers used in storage resource management. LVM is implemented as a user-space program on top of device-mapper.

In the kernel, device-mapper uses modular target-driver plug-ins to filter or redirect I/O requests; current target drivers include soft RAID, soft encryption, logical volume striping, multipath, mirroring, and snapshots. The device-mapper mechanism consists of the device-mapper driver in kernel space, the device-mapper library in user space, and the dmsetup tool it provides. The kernel supplies the mechanisms needed to carry out these policies, while the user-space side is responsible for configuring the specific policies and control logic, such as establishing mappings between logical devices and physical devices and deciding how those mappings are built; the actual filtering and redirection of I/O requests is done by the relevant code in the kernel.
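The user-space/kernel split above can be seen directly with dmsetup: user space supplies a mapping table as policy, and the kernel's target driver carries it out. A minimal sketch using the "linear" target (requires root; /dev/sdb and /dev/sdc are hypothetical devices):

```shell
# Create a logical device "demo" that concatenates two physical
# devices using the "linear" target. Each table line has the form:
#   <logical start sector> <num sectors> linear <device> <offset>
dmsetup create demo <<'EOF'
0 1048576 linear /dev/sdb 0
1048576 1048576 linear /dev/sdc 0
EOF

# Inspect the mapping the kernel driver is now enforcing;
# the new logical device appears as /dev/mapper/demo.
dmsetup table demo

# Tear the mapping down.
dmsetup remove demo
```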

LVM allows a file system to span multiple disks, so its size is not limited by any single physical disk. The file system can be expanded dynamically while the system is running by adding new disks to the LVM storage pool. Important data can be mirrored to multiple physical disks, and an entire volume group can easily be exported and imported on another machine.

However, LVM's disadvantages are also obvious. When a disk in a volume group is damaged, the entire volume group is affected. Only some file-system types can be shrunk (XFS, for example, does not support shrinking at all). And because extra layers of indirection are added, storage performance is somewhat affected.
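The online-growth capability mentioned above looks like this in practice (a sketch; the volume path /dev/sales/reports is a hypothetical example, and ext4 is assumed as the file system):

```shell
# Grow an existing logical volume by 5 GiB while it stays mounted.
lvextend --size +5G /dev/sales/reports

# Grow the ext4 file system to fill the enlarged volume. ext4
# supports online growth; shrinking, by contrast, requires
# unmounting first -- and some file systems cannot shrink at all.
resize2fs /dev/sales/reports
```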

1.2 SAN

A storage area network (SAN) is a high-speed network or sub-network that provides data transfer between computers and storage systems. A storage device here means one or more disk devices used to store computer data. A SAN consists of the communication infrastructure providing the network connections, a management layer that organizes those connections, the storage elements, and the computer systems, ensuring that data transfer is secure and robust.

Most SANs use the SCSI protocol for communication between servers and storage devices. A storage network is built by layering different mappings on top of SCSI; common examples include iSCSI, FCP (Fibre Channel Protocol), and FCoE (Fibre Channel over Ethernet).

A SAN usually has to be built on dedicated storage hardware, but iSCSI is a mapping of SCSI onto TCP/IP. With the iSCSI protocol and the Linux iSCSI project, we can build SAN storage on ordinary PCs.
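Building such an iSCSI SAN on a commodity Linux box can be sketched in two halves: a target side that exports storage, and an initiator side that consumes it. This uses the tgt administration tool and open-iscsi; the IQN, portal address, and backing-file path are hypothetical examples:

```shell
# --- Target side: export a backing file as an iSCSI LUN.
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2014-01.com.example:storage.lun1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /srv/iscsi/lun1.img
tgtadm --lld iscsi --op bind --mode target --tid 1 \
       --initiator-address ALL

# --- Initiator side: discover the target and log in; the LUN then
# appears as a local SCSI block device (e.g. /dev/sdX).
iscsiadm --mode discovery --type sendtargets --portal 192.168.1.10
iscsiadm --mode node \
         --targetname iqn.2014-01.com.example:storage.lun1 \
         --portal 192.168.1.10 --login
```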

A SAN has two major drawbacks: cost and complexity, especially with Fibre Channel. A reasonably configured 1 Gb or 2 Gb Fibre Channel setup costs roughly USD 50,000 to 60,000. By contrast, a newer iSCSI-based SAN solution costs only about USD 20,000 to 30,000, although its performance cannot match Fibre Channel's. The main reason for the price difference is that iSCSI runs on mass-produced gigabit Ethernet hardware, while Fibre Channel requires specialized, expensive equipment.

2. Distributed Block Storage

Faced with demands for elastic storage and for performance, standalone storage and dedicated SANs are increasingly unable to meet enterprise needs. Distributed block storage can provide persistent block devices to any physical or virtual machine and manage their creation, deletion, and attach/detach. It supports powerful snapshot functionality: snapshots can be used to restore data or to create new block devices. A distributed storage system can provide block devices with different I/O performance characteristics and can meet dynamic scaling requirements.

Currently, open-source distributed block storage options include Ceph, GlusterFS, and Sheepdog. Compared with Ceph, Sheepdog's biggest advantage is that its code base is small and well maintained, so the cost of hacking on it is very low. Sheepdog also has features that Ceph does not support, such as multi-disk and cluster-wide snapshots.

NTT researchers in Japan compared the read and write performance of Sheepdog, GlusterFS, and Ceph under various conditions; the results were presented at an OpenStack conference in Japan in March. In most cases, Sheepdog's read and write speeds were superior to those of GlusterFS and Ceph.

3. OpenStack Block Storage

3.1 Storage Architecture

3.2 Cinder Introduction

Cinder is the API framework that provides the block storage service in OpenStack. It does not itself implement block-device management or the actual service; instead, it provides a unified interface over different backend storage systems [2]. Block-device service vendors supply Cinder drivers to integrate with OpenStack. The backend storage can be DAS, NAS, SAN, object storage, or a distributed file system; the integrity and availability of Cinder's block storage data are guaranteed by the backend storage.
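From the user's perspective, the unified interface described above looks the same regardless of which backend driver is in use. A sketch of a typical volume lifecycle with the era's cinder and nova CLIs (the instance name, volume name, and placeholder IDs are hypothetical; OpenStack credentials are assumed to be set in the environment):

```shell
# Create a 10 GiB volume; Cinder delegates the actual allocation
# to whatever backend driver the cloud is configured with.
cinder create --display-name data-vol 10

# Attach the volume to a running instance via Nova; inside the
# guest it shows up as a block device such as /dev/vdb.
nova volume-attach my-instance VOLUME_ID /dev/vdb

# Snapshot the volume, then detach it; the data on the volume
# persists independently of any instance.
cinder snapshot-create --display-name data-snap VOLUME_ID
nova volume-detach my-instance VOLUME_ID
```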

