OpenStack storage: how to use NFS as the backend storage for Cinder


A brief description of the NFS service

NFS is short for Network File System, a protocol for distributed file systems developed by Sun Microsystems and first published on April 9, 1984. Its purpose is to let machines with different hardware and operating systems share data with each other over the network, so that client applications can access data on a server's disks across the network. It is the standard way to share disk files between Unix-like systems.

 

The basic principle of NFS is to let clients and servers share the same file system through a set of RPCs. Because the protocol is independent of any particular operating system, machines with different hardware and operating systems can share files.

 

NFS relies on the RPC protocol for file and information transfer. Remote Procedure Call (RPC) is a mechanism that lets a client execute procedures on another system. NFS does not define its own transport protocol; instead it registers its procedures with RPC and lets RPC handle the transport, so NFS is effectively an RPC server. Therefore the RPC service must be running wherever NFS is used, on both the NFS server and the NFS client, so that server and client can locate each other's program ports through RPC. One way to understand the relationship: NFS is the file system, while RPC is responsible for the information transfer.

 

NFS server

 

NFS Server Installation

yum -y install rpcbind nfs-utils

NFS server configuration

The configuration on the NFS server is relatively simple: it only involves editing the /etc/exports file. The configuration content is as follows:

/nfs/shared 192.168.40.0/255.255.255.0(rw,sync)

The above line means that hosts in the 192.168.40.0/255.255.255.0 network segment can mount the /nfs/shared directory on the NFS server with read and write permission. Because no user-squashing option is specified, the default root_squash applies: even a client logging on as root is downgraded to the nobody user.
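As an illustration, here is the same export line with the default squashing behaviour spelled out explicitly. This is a hypothetical example written to /tmp so it can be inspected without touching the real /etc/exports:

```shell
# Hypothetical example file (not the real /etc/exports): the same export,
# with the default root_squash option made explicit alongside rw and sync.
cat > /tmp/exports.example <<'EOF'
/nfs/shared 192.168.40.0/255.255.255.0(rw,sync,root_squash)
EOF
cat /tmp/exports.example
```

With no_root_squash instead, root on the client would keep root privileges on the export, which is rarely what you want for a general-purpose share.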

Start the NFS service

service rpcbind start
service nfs start
service nfslock start
chkconfig rpcbind on
chkconfig nfs on
chkconfig nfslock on


NFS service verification

The server uses the showmount command to query the NFS sharing status:

# showmount -e    // by default, shows this host's own exports; the host name must resolve (e.g. via DNS), otherwise an error is easily reported
# showmount -a    // displays the directories that clients have already mounted


The client uses the showmount command to query the NFS sharing status:

# showmount -e <NFS server IP address>

NFS system daemons
  • nfsd: the basic NFS daemon. Its main function is to manage whether clients can log on to the server.
  • mountd: the RPC mount daemon. Its main function is to manage NFS file systems. After a client successfully logs on to the NFS server through nfsd, it must pass file permission verification before using the files provided by the NFS service. mountd reads the NFS configuration file /etc/exports to check the client's permissions.
  • portmap: mainly used for port mapping. When a client tries to connect to a service provided by an RPC server (such as the NFS service), portmap returns the port registered for that service, so that the client can request the service from the server through that port.
Common NFS directories

/etc/exports          main configuration file of the NFS service

/usr/sbin/exportfs    NFS share management command

/usr/sbin/showmount   command for viewing share and client mount information

/var/lib/nfs/etab     records the complete permission settings of the directories shared by NFS

/var/lib/nfs/xtab     records information about clients that have mounted shares

 

Remount the NFS Directory

If we modify /etc/exports after NFS has started, do we have to restart NFS? No: we can use the exportfs command to make the change take effect immediately.

exportfs -arv

 

Cinder node NFS client configuration: install the software

yum install rpcbind nfs-utils

Start the related services

service rpcbind start
service nfslock start
chkconfig rpcbind on
chkconfig nfs on
chkconfig nfslock on

Check NFS server sharing information

The IP address of the NFS server is 192.168.40.107.

[root@controllernode images(keystone_admin)]# showmount -e 192.168.40.107
Export list for 192.168.40.107:
/nfs/shared 192.168.40.0/255.255.255.0

Mount to a local directory

cd /root
mkdir nfsshare
mount -t nfs 192.168.40.107:/nfs/shared /root/nfsshare/


View the mounting result

[root@controllernode ~(keystone_admin)]# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/sda1                              97G  4.8G   87G   6% /
tmpfs                                 3.9G  4.0K  3.9G   1% /dev/shm
/srv/loopback-device/swift_loopback   1.9G   67M  1.8G   4% /srv/node/swift_loopback
192.168.40.107:/nfs/shared            444G  1.4G  420G   1% /root/nfsshare

 

Note that if the NFS server fails at this point, or the client cannot reach the server, commands will be slow, because they do not return until the file system access times out. This applies to all commands that touch the file system, such as df, ls, cp, and so on.
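To make the mount survive reboots, and to bound how long processes block when the server is unreachable, an /etc/fstab entry along the following lines can be used. This is a sketch; the soft, timeo, and retrans values are illustrative and trade hang time against the risk of I/O errors on a slow server:

```
# /etc/fstab (illustrative entry)
# "soft" + timeo=30 (timeo is in tenths of a second, so 3 s) + retrans=2
# makes an unreachable server return an I/O error instead of hanging
# df/ls/cp indefinitely; keep the default "hard" for data you cannot
# afford to lose on a timeout.
192.168.40.107:/nfs/shared  /root/nfsshare  nfs  soft,timeo=30,retrans=2  0 0
```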

 

Cinder node NFS backend storage Configuration

 

Create the /etc/cinder/nfsshares file and edit it as follows:

192.168.40.107:/nfs/shared

Set the configuration file permissions

[root@controllernode ~]# chown root:cinder /etc/cinder/nfsshares
[root@controllernode ~]# chmod 0640 /etc/cinder/nfsshares

Configure the Cinder volume service to use NFS

Point /etc/cinder/cinder.conf at /etc/cinder/nfsshares by running the following command:

openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT nfs_shares_config /etc/cinder/nfsshares
To configure the driver used by cinder-volume, run the following command:

openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
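After the two openstack-config commands, the [DEFAULT] section of /etc/cinder/cinder.conf should contain these two options:

```
[DEFAULT]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfsshares
```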

Restart the service

service openstack-cinder-volume restart

View the disk status on the client

(Screenshot in the original post: disk status on the client.)

Create a virtual machine and a network disk, and attach the network disk to the virtual machine

(Screenshots in the original post: the console and the virtual machine.)

Attach a hard disk to a VM

Format the disk with the following command:

mkfs.ext4 /dev/vdb

Then mount the formatted device inside the VM.

 
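If you want to try the formatting step without a real /dev/vdb, the same mkfs.ext4 call can be exercised against a loopback image file. This is a sketch; it assumes e2fsprogs is installed, and /tmp/vdb.img is an arbitrary scratch path:

```shell
# Create a small sparse image and format it, just as mkfs.ext4 /dev/vdb
# would format the attached volume inside the VM.
truncate -s 64M /tmp/vdb.img
mkfs.ext4 -q -F /tmp/vdb.img   # -F: operate on a regular file, not a block device
# Inspect the result; the superblock should carry the ext magic number.
dumpe2fs -h /tmp/vdb.img 2>/dev/null | grep 'Filesystem magic'
```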

Problem

During volume attachment, the following exception appears in nova-compute.log:

 

2014-10-23 12:23:28.193 1747 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.40.248
2014-10-23 12:23:28.395 1747 WARNING nova.virt.libvirt.utils [req-5bf92b88-6d15-4c41-8ed7-3325fdea0dcf 5832a2295dc14de79522ee8b42e7daac 9207105ae2ac4ef3bdf5dfe40d99fd8d] systool is not installed
2014-10-23 12:23:28.449 1747 WARNING nova.virt.libvirt.utils [req-5bf92b88-6d15-4c41-8ed7-3325fdea0dcf 5832a2295dc14de79522ee8b42e7daac 9207105ae2ac4ef3bdf5dfe40d99fd8d] systool is not installed
2014-10-23 12:23:28.451 1747 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.40.248
2014-10-23 12:23:28.960 1747 ERROR nova.virt.block_device [req-5bf92b88-6d15-4c41-8ed7-3325fdea0dcf 5832a2295dc14de79522ee8b42e7daac 9207105ae2ac4ef3bdf5dfe40d99fd8d] [instance: eb1742c6-1e73-4656-b646-ca8442519e7a] Driver failed to attach volume a1862c54-0671-4cc5-9fce-5e5f8485c21f at /dev/vdb
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a] Traceback (most recent call last):
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 239, in attach
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     device_type=self['device_type'], encryption=encryption)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1267, in attach_volume
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     disk_dev)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     six.reraise(self.type_, self.value, self.tb)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1254, in attach_volume
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 183, in doit
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 141, in proxy_call
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     rv = execute(f, *args, **kwargs)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 122, in execute
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     six.reraise(c, e, tb)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 80, in tworker
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     rv = meth(*args, **kwargs)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 419, in attachDeviceFlags
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     if ret == -1: raise libvirtError('virDomainAttachDeviceFlags() failed', dom=self)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a] libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized


Solution

This error comes from libvirt. Check whether the SELinux boolean virt_use_nfs is off or on:

/usr/sbin/getsebool virt_use_nfs

If it is off, turn it on as follows:

/usr/sbin/setsebool -P virt_use_nfs on



OpenStack questions

OpenStack actually has three storage-related components. How familiar people are with each roughly tracks the order in which the components appeared. They are as follows:
Swift: provides object storage, similar in concept to Amazon S3. Swift is highly scalable, redundant, and durable, and is compatible with the S3 API.
Glance: provides storage and management of virtual machine images, including many features similar to the Amazon AMI catalog. (In practice, Glance's backend data is often stored in Swift.)
Cinder: provides block storage, similar to Amazon's EBS block storage service. Currently it is mainly used for attaching volumes to virtual machines.
(Amazon has always been the imagined competitor and benchmark in OpenStack's design, so basically every key AWS functional module has a corresponding project. Besides the three components above, OpenStack's answer to the important EC2 service in AWS is Nova, which maintains compatibility with the EC2 API in various ways.)
Among the three components, Glance mainly manages VM images and is therefore relatively simple. Swift, as an object storage service, is already very mature. Cinder is a relatively new block storage component, with a good design philosophy and opportunities to integrate with commercial storage, so vendors are quite active around it.
Swift
There are already many articles on the Internet, besides the official site, about the architecture and deployment of Swift, so they will not be repeated here (you can also refer to the slides from my earlier talk at the Shanghai stop of the OpenStack China Tour). From a development perspective there have been no major structural adjustments recently, so I would rather talk about the application fields it suits best.
Based on the actual cases I have learned about, Swift is used in four fields. (There should be more; I hope readers can point out other real cases.)
1. Network disks.
Swift's symmetric distributed architecture and multi-proxy, multi-node design make it a natural fit for multi-user, high-concurrency application models. The most typical application is a network disk service similar to Dropbox, which had already passed 100 million users by the end of last year. For that kind of access volume, a good architecture design is the fundamental reason the service holds up.
Swift's symmetric architecture puts the data nodes at the same logical level; each node holds both data and the related metadata. The core metadata structure is a hash ring: with consistent hashing, adding or removing a node only relocates a small portion of the data in the ring space, which gives high fault tolerance and scalability. In addition, the service is stateless and each object is stored in full on disk. These factors guarantee the storage's excellent scalability.
Moreover, on the application side, Swift speaks the HTTP protocol, which makes the interaction between applications and storage simple: there is no need to consider details of the underlying infrastructure, and the application software can scale to a very large extent without modification.
2. IaaS public cloud
Swift's linear scalability, high concurrency, and multi-tenant support make it very suitable for IaaS. In a large public cloud, large numbers of virtual machines are started concurrently, so for the backend storage of virtual machine images the real challenge lies in concurrent read performance on large files (larger than a gigabyte). Swift was initially used in OpenStack as the backend storage for the image library, and after years of practice on deployments of thousands of machines at Rackspace, it has proved to be a mature choice.
In addition, to provide upper-layer SaaS services on top of IaaS, multi-tenancy is an unavoidable issue. Swift's architecture supports multi-tenancy by design, which makes integration easier.
3. Backup and archiving
Rackspace's main business is data backup and archiving, so Swift has been tested in this field for a long time, and they have extended a new business from it: "hot archiving". Due to the long-tail effect, data may be recalled over longer and longer time windows. Hot archiving ensures that archived application data can be retrieved again within minutes, compared with several hours for a traditional tape-drive archiving solution.
 
What components does OpenStack currently include?

Component list:
Compute: Nova
Networking: Neutron
Object storage: Swift
Block storage: Cinder
Identity: Keystone
Image service: Glance
Dashboard: Horizon
There are also a number of smaller features and projects.
