OpenStack Storage Summary: using NFS as Cinder's back-end storage

Brief description of NFS service

NFS is short for Network File System, a protocol for distributed file systems developed by Sun and published on April 9, 1984. Its purpose is to let machines running different operating systems share data with one another over the network, so that an application on a client can access data on a server's disks as if it were local. It is the standard way of sharing disk files between Unix-like systems.

 

The basic principle of NFS is to "let different clients and servers share the same file system through a set of RPCs". It is independent of the operating system and allows hosts with different hardware and operating systems to share files.

 

NFS depends on the RPC protocol for file and information transfer. Remote Procedure Call (RPC) is a mechanism that enables a client to execute procedures on another system. NFS itself does not define its own transmission protocol; it shares data over the network by relying on other transport protocols, which are reached through RPC. In other words, NFS is a program that uses RPC, or, put another way, NFS is itself an RPC server. Therefore the RPC service must be running wherever NFS is used, on both the NFS server and the NFS client, so that server and client can locate each other's program ports through RPC. The relationship can be summarized as: NFS is the file system, while RPC is responsible for the transmission of information.
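To see this relationship in practice, rpcbind can be queried for the RPC programs that the NFS-related services register; a quick check on the server (the output varies by host):

rpcinfo -p localhost    # lists registered RPC programs such as portmapper, mountd, nfs, nlockmgr and their ports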

 

NFS server

 

NFS Server Installation

yum -y install rpcbind nfs-utils

NFS server configuration

The configuration on the NFS server is relatively simple; it only involves editing the /etc/exports file. The configuration content is as follows:

/nfs/shared 192.168.40.0/255.255.255.0(rw,sync)

The above configuration means that hosts in the 192.168.40.0/255.255.255.0 network segment can mount the /nfs/shared directory on the NFS server, and have read and write permissions after mounting. Because no squash option is specified, the default root_squash behavior applies: even a client logged in as root is mapped to the unprivileged nobody user.
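If different behavior is needed, the squash and other options can be spelled out explicitly in /etc/exports. A sketch (the second export line is purely illustrative and not part of this setup):

/nfs/shared  192.168.40.0/255.255.255.0(rw,sync,root_squash)    # same as above, with the default squash made explicit
/nfs/backup  192.168.40.50(rw,sync,no_root_squash)              # illustrative: trust root from a single client host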

Start NFS service
service rpcbind start
service nfs start
service nfslock start
chkconfig rpcbind on
chkconfig nfs on
chkconfig nfslock on
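After starting, it is worth confirming that the services are running and enabled at boot, for example:

service nfs status      # should report that nfsd is running
chkconfig --list nfs    # should show the service enabled for the usual runlevels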


NFS service verification

The server uses the showmount command to query the NFS sharing status.

# showmount -e    # shows the local exports; the hostname must be resolvable (via DNS or /etc/hosts), otherwise an error is easily reported
# showmount -a    # shows which exported directories are currently mounted by which clients


The client uses the showmount command to query the NFS sharing status.

# showmount -e <NFS server IP address>

NFS system daemons
  • nfsd: the basic NFS daemon. Its main job is to manage whether clients are allowed to access the server;
  • mountd: the RPC mount daemon. Its main job is to manage NFS file systems. After a client has successfully reached the NFS server through nfsd, it must pass the file permission check before it can use the files exported by the NFS service. mountd reads the NFS configuration file /etc/exports to verify the client's permissions.
  • portmap (rpcbind): mainly used for port mapping. When a client tries to connect to a service provided by an RPC server (such as the NFS service), portmap supplies the port on which that service is registered, so the client can then request the service from the server through that port.
Common NFS directories

/etc/exports: the main configuration file of the NFS service

/usr/sbin/exportfs: the NFS export management command

/usr/sbin/showmount: the command used on clients (or the server) to view exported directories

/var/lib/nfs/etab: records the complete, effective permission settings for the directories exported by NFS

/var/lib/nfs/xtab: records information about clients that have mounted the exports
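The etab file is handy for seeing every option that actually applies to an export, including defaults that were never spelled out in /etc/exports; for example (the exact option list shown is illustrative and varies by system):

cat /var/lib/nfs/etab
# e.g. /nfs/shared  192.168.40.0/255.255.255.0(rw,sync,wdelay,root_squash,...)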

 

Remount the NFS Directory

If we modify /etc/exports after NFS has been started, do we have to restart NFS? No: the exportfs command can make the change take effect immediately.

exportfs -arv
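The flags, for reference (a commented version of the same command):

exportfs -arv
# -a  export (or unexport) all directories listed in /etc/exports
# -r  re-export all directories, synchronizing /var/lib/nfs/etab with /etc/exports
# -v  verbose output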

 

Cinder node: NFS client installation and configuration

yum install rpcbind nfs-utils

Start Related Services
service rpcbind start
service nfslock start
chkconfig rpcbind on
chkconfig nfs on
chkconfig nfslock on

Check NFS server sharing information

The IP address of the NFS server is 192.168.40.107.

# showmount -e 192.168.40.107
Export list for 192.168.40.107:
/nfs/shared 192.168.40.0/255.255.255.0

Mount to local directory
cd /root
mkdir nfsshare
mount -t nfs 192.168.40.107:/nfs/shared /root/nfsshare/


View mounting result
# df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/sda1                              97G  4.8G   87G   6% /
tmpfs                                 3.9G  4.0K  3.9G   1% /dev/shm
/srv/loopback-device/swift_loopback   1.9G   67M  1.8G   4% /srv/node/swift_loopback
192.168.40.107:/nfs/shared            444G  1.4G  420G   1% /root/nfsshare

 

Note that if the NFS server goes down at this point, or the client loses its connection to the server, the command will appear to hang: it will not return until the file system operation times out. This applies to any command that touches the mounted file system, such as df, ls, and cp.
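If long hangs are a concern for a test mount like this, the client can mount with soft/timeout options so that operations fail after a bounded retry period. A sketch (the timeo/retrans values are illustrative, and soft mounts are not appropriate for data that must not be lost):

mount -t nfs -o soft,timeo=30,retrans=3 192.168.40.107:/nfs/shared /root/nfsshare/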

 

Cinder node NFS backend storage Configuration

 

Create the /etc/cinder/nfsshares file and edit it as follows:

192.168.40.107:/home/nfsshare

Set Configuration File Permissions
# chown root:cinder /etc/cinder/nfsshares
# chmod 0640 /etc/cinder/nfsshares
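A quick sanity check of the result:

ls -l /etc/cinder/nfsshares
# expected: -rw-r----- 1 root cinder ... /etc/cinder/nfsshares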

Configure the cinder volume service to use NFS

Point the nfs_shares_config option in /etc/cinder/cinder.conf at /etc/cinder/nfsshares by running the following command:

openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfsshares
To configure the driver used by cinder volume, run the following command:
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
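The settings can be read back to confirm they were written correctly:

openstack-config --get /etc/cinder/cinder.conf DEFAULT nfs_shares_config
openstack-config --get /etc/cinder/cinder.conf DEFAULT volume_driver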

Restart the service

service openstack-cinder-volume restart

View the disk status on the client
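With the NFS back end active, cinder-volume mounts each configured share itself; one way to check from the cinder node (the hashed directory name under cinder's state path is illustrative):

df -h | grep 192.168.40.107
# the share typically appears mounted under a path such as /var/lib/cinder/mnt/<hash>   (path illustrative)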


Finally, create and attach a disk

Create a virtual machine and a network disk (volume), and attach the network disk to the virtual machine.

Console:


Virtual Machine:

 

Attach a hard disk to a VM

Use the following command to format the new disk:

mkfs.ext4 /dev/vdb

Then execute the following command:
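A typical next step is to mount the freshly formatted device inside the guest; a minimal sketch, assuming the mount point /mnt/data:

mkdir -p /mnt/data          # /mnt/data is an assumed mount point
mount /dev/vdb /mnt/data
df -h /mnt/data             # confirm the new volume is mounted with the expected size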

 

Problem

The following exception appears in nova's compute.log when the volume is attached.

 

2014-10-23 12:23:28.193 1747 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.40.248
2014-10-23 12:23:28.395 1747 WARNING nova.virt.libvirt.utils [req-5bf92b88-6d15-4c41-8ed7-3325fdea0dcf 5832a2295dc14de79522ee8b42e7daac 9207105ae2ac4ef3bdf5dfe40d99fd8d] systool is not installed
2014-10-23 12:23:28.449 1747 WARNING nova.virt.libvirt.utils [req-5bf92b88-6d15-4c41-8ed7-3325fdea0dcf 5832a2295dc14de79522ee8b42e7daac 9207105ae2ac4ef3bdf5dfe40d99fd8d] systool is not installed
2014-10-23 12:23:28.451 1747 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.40.248
2014-10-23 12:23:28.960 1747 ERROR nova.virt.block_device [req-5bf92b88-6d15-4c41-8ed7-3325fdea0dcf 5832a2295dc14de79522ee8b42e7daac 9207105ae2ac4ef3bdf5dfe40d99fd8d] [instance: eb1742c6-1e73-4656-b646-ca8442519e7a] Driver failed to attach volume a1862c54-0671-4cc5-9fce-5e5f8485c21f at /dev/vdb
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a] Traceback (most recent call last):
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 239, in attach
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     device_type=self['device_type'], encryption=encryption)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1267, in attach_volume
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     disk_dev)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     six.reraise(self.type_, self.value, self.tb)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1254, in attach_volume
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 183, in doit
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 141, in proxy_call
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     rv = execute(f, *args, **kwargs)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 122, in execute
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     six.reraise(c, e, tb)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 80, in tworker
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     rv = meth(*args, **kwargs)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 419, in attachDeviceFlags
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2014-10-23 12:23:28.960 1747 TRACE nova.virt.block_device [instance: eb1742c6-1e73-4656-b646-ca8442519e7a] libvirtError: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized


Solution

This error comes from libvirt. Check whether the SELinux boolean virt_use_nfs is off or on:

/usr/sbin/getsebool virt_use_nfs

If it is off, turn it on with the following command:

/usr/sbin/setsebool -P virt_use_nfs on
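The boolean can be checked again to confirm the change, which persists across reboots because of the -P flag:

/usr/sbin/getsebool virt_use_nfs
# expected output: virt_use_nfs --> on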


