Storage subsystem analysis: the FCoE module from the storage subsystem perspective

Source: Internet
Author: User
Tags: fcoe



Note: The previous article on FCoE module design and implementation covered the design of the FCoE module. Although it explained the module's composition clearly, looking at FCoE alone gives a somewhat limited view of the whole storage subsystem, and the interaction between the storage subsystem and the network subsystem was left unclear. I hope to answer those questions here.

We know that the Linux kernel follows a layered design, and the storage subsystem is no exception. The figure below (found online) shows the layered modules of the storage subsystem in Linux.




[Figure: the layered Linux storage subsystem, with the application at the top]

Suppose an application in user space (written in C) needs to read a file stored on a remote SCSI disk. The application calls the C library, which invokes the read() system call provided by the kernel. The system call enters VFS, the virtual file system. So what is a virtual file system? It is an abstraction layer over concrete file systems (a classic case of code reuse): VFS dispatches the request to the correct file system, which may even be a network file system. The concrete file system, which organizes and manages files on disk, locates the blocks that hold the file, and the request then descends to the block layer. Between the file system and the block device driver sits the buffer cache, a region of memory that caches on-disk data and the related data structures.


Block device drivers sit below these layers. How do they interact with the SCSI subsystem, and more specifically with the FCoE path? First, let's look at what a block device is.



Block Device

When Unix was first written, it made a bold design decision: treat every physical device as a file. Devices do differ, of course, some being randomly accessed and some sequentially. So why use the same interface (the file system) to talk to a printer and to a disk? Because even though they differ, they can all be abstracted as files, and that keeps the whole system very simple.

Character devices, also known as raw devices, such as printers and terminals, are read and written through the APIs that the character device driver exposes via the file system. Block devices appear under /dev/dsk in the file system. A block device is randomly accessed, and a file system can be mounted on a block device, but not on a character device.


Block device driver

A block device driver is the bridge between a disk block device and the file system. The system can mount a file system on a block device; the block device driver provides that capability and then lets the file system read and write the device.

The block device driver may be the SCSI subsystem, or another disk driver that talks to the disk directly. The SCSI subsystem is introduced separately below.



SCSI subsystem

SCSI is a set of standards that defines the interfaces and protocols required to communicate with a large class of devices. Linux provides a SCSI subsystem for communicating with SCSI devices. SCSI is well suited to reliable, high-performance, remote storage.


SCSI follows a client/server model: the client (the initiator) issues SCSI commands, and the server (the target) receives and processes them. A SCSI target usually exposes one or more logical unit numbers (LUNs) to the initiator, and actual SCSI I/O operations address these entities. In storage deployments, a LUN typically represents a disk that a host can read from and write to.

 

The SCSI subsystem was introduced in detail in the article on FCoE module design and implementation. In short, the upper SCSI layer receives requests from the layers above (such as the generic block layer and the file system), converts them into SCSI requests, completes the SCSI commands, and reports status back up. The middle layer provides services common to the upper and lower layers. The bottom layer is a group of drivers, known as SCSI low-level drivers, which communicate with the physical devices.


Note: to learn block device programming and SCSI block device programming, see http://www.tldp.org/LDP/khg/HyperNews/get/devices/scsi/1.html



FCoE module

From the analysis above, the FCoE module is a low-level driver of the SCSI subsystem in the Linux kernel. It connects the SCSI subsystem to the Ethernet stack, and its main job is converting SCSI commands into FCoE frames.

The protocol processing of the FCoE module depends on the scsi_transport_fc module, libfc module, and libfcoe module.


Note: the components of the FCoE module were covered in detail in the design-and-implementation article and are not repeated here; below we focus mainly on the I/O component.


The I/O component performs the FCoE protocol processing: generating FC frames from SCSI commands (or data), mapping between FC frames and FCoE frames, and encapsulating FCoE frames into Ethernet frames handed to the Ethernet core. The FCoE module works together with the SCSI subsystem, the libfc module, and the Ethernet stack to complete I/O requests. On the initiator side, the main I/O request types are read requests and write requests.


The Ethernet module is the network processing module of the Linux kernel. It operates at layer 2 of the network and sends packets down to the physical layer (from which they reach the storage server through an FCoE switch). It can also forward link-layer frames upward: to the higher network layers (L3, for example), or, based on the EtherType in the header, to the FCoE module, which processes them and hands them on to the SCSI subsystem and the storage subsystem.


What is the layout of the storage subsystem in a virtualization environment? The following brief analysis may be worth your reference.

Compared with the storage subsystem on a bare-metal host, the storage subsystem of a virtual machine has more layers (spanning the VM, the hypervisor, and the host; does that bring greater flexibility?).

For example, FCoE-based storage virtualization can take the form of a virtual disk, a raw disk, or VM-based storage.


A virtual disk virtualizes at the SCSI layer: the virtual machine accesses its storage device directly through the SCSI layer and does not need to know whether the device is network storage or local storage. The virtual disk is the common model; it runs on mainstream virtualization platforms without modifying the platform or kernel code. The storage model used in the IOFlow paper is this virtual disk.

A raw disk is implemented at the level of the FCoE protocol stack and is closely tied to how the FCoE mechanism works (FCoE virtualization).

NAS (Network Attached Storage) and iSCSI can access storage devices through the virtual machine's network devices, without the VMM (Virtual Machine Monitor) having to map storage devices into the virtual machine. This mode of access is called VM-based storage, which is what I deployed in my experiments.

 



P.S. This is only a rough introduction to the storage subsystem. If you want to master it, you had better read the Linux kernel source code, and take a look at the network subsystem source as well.


