Reprinted: http://linux.chinaitlab.com/administer/794678.html
Among Linux's most notable features are its flexibility and extensibility, exemplified by the virtual file system switch (VFS). You can create a file system on a wide range of devices, including traditional disks, USB flash drives, memory, and other storage media. You can even embed one file system inside another. This article explores the factors that make VFS so powerful and examines its main interfaces and processes.
The flexibility and extensibility of the Linux file system come directly from a set of abstracted interfaces, at the core of which is the virtual file system switch (VFS).
VFS provides a standard set of interfaces through which upper-layer applications can perform file I/O across different file systems. These interfaces support multiple concurrent file systems on one or more underlying devices; moreover, those file systems need not be static and can change as storage devices come and go.
VFS or virtual file system?
You will also see VFS expanded as "virtual file system," but "virtual file system switch" better describes its function, because this virtual layer switches (multiplexes) requests across multiple file systems. The /proc file system adds to the confusion here, because it too is commonly called a virtual file system.
For example, a typical Linux desktop supports an ext3 file system on the available hard disk as well as the ISO 9660 file system on an available CD-ROM (otherwise known as the CD-ROM file system, or CDFS). As CD-ROMs are inserted and removed, the Linux kernel must adapt to these new file systems with their different contents and structure. A remote file system can be reached through the Network File System (NFS). At the same time, Linux can mount the NT File System (NTFS) partition of a Windows/Linux dual-boot system from the local hard disk and read from and write to it.
Finally, a removable USB flash drive (UFD) can be hot-plugged, providing yet another file system. All the while, the same set of file I/O interfaces can be used over these devices, allowing the underlying file system and physical device to be abstracted away from the user (see Figure 1).
Figure 1. An abstraction layer providing a unified interface between different file systems and storage devices
Layered Abstraction
Now let's add some concrete architecture to the abstract capabilities provided by the Linux VFS. Figure 2 shows a high-level view of the Linux stack from the perspective of the VFS. Above the VFS sits the standard kernel system call interface (SCI). This interface allows calls from user space to cross into the kernel (which lives in a different address space). In this domain, a user-space application that invokes the POSIX open call travels through the GNU C library (glibc) into the kernel and into the system call de-multiplexing. Eventually, the VFS is invoked through sys_open.
Figure 2. VFS layered architecture
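The user-space end of this path is easy to demonstrate. The following minimal sketch (assuming an existing readable file such as /etc/hostname, which is purely illustrative) uses the POSIX open, read, and close calls; each one passes through glibc and the SCI into the VFS, regardless of which file system actually holds the file.

```c
#include <fcntl.h>     /* open */
#include <stdio.h>
#include <unistd.h>    /* read, close */

int main(void)
{
    char buf[128];

    /* glibc issues the open(2) system call; the SCI routes it to the
     * VFS (sys_open), which resolves the path on whatever file system
     * backs it -- ext3, ISO 9660, NFS, and so on. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* read(2) likewise reaches the VFS, which dispatches to the file
     * system's own read implementation through the file object. */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s", buf);
    }

    close(fd);
    return 0;
}
```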
Early VFS implementations
Linux is not the first operating system to include a virtual layer supporting a common file model. Earlier VFS implementations included Sun's VFS (SunOS version 2.0, circa 1985) and IBM and Microsoft's Installable File System for OS/2. These virtual file system layers paved the way for the Linux VFS.
VFS provides an abstraction layer that separates the POSIX API from the details of how a particular file system implements that behavior. The key point is that the open, read, write, and close API system calls work identically whether the underlying file system is ext3 or Btrfs. VFS defines a common file model that the underlying file systems inherit (each must implement the behavior behind the various POSIX API functions). A further abstraction, outside the scope of VFS, hides the underlying physical device, which might be a disk, a disk partition, a networked storage entity, memory, or any other medium capable of storing information, even if only temporarily.
In addition to abstracting file operations away from the underlying file systems, VFS ties the underlying block devices to the available file systems; a hedged sketch of that binding with the mount(2) system call follows below. After that, let's look at the internal structure of VFS and how it works.
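This is only an illustration: the device path /dev/sdb1, the mount point /mnt/usb, and the vfat type are assumptions for the example, and the call requires root privileges and an existing mount-point directory.

```c
#include <stdio.h>
#include <sys/mount.h>   /* mount(2), umount(2) */

int main(void)
{
    /* Bind the block device /dev/sdb1 (hypothetical) to the directory
     * /mnt/usb using the vfat file system driver.  The VFS records the
     * association in a vfsmount and from then on routes path lookups
     * under /mnt/usb to that file system's superblock. */
    if (mount("/dev/sdb1", "/mnt/usb", "vfat", 0, NULL) != 0) {
        perror("mount");
        return 1;
    }

    /* ... use files under /mnt/usb through the normal POSIX calls ... */

    if (umount("/mnt/usb") != 0)
        perror("umount");

    return 0;
}
```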
Internal Structure of VFS
Before looking at the overall architecture of the VFS subsystem, let's examine the major objects it uses. This section explores the superblock, the index node (or inode), the directory entry (or dentry), and the file object. Other components, such as the caches, are important here as well, but I discuss them later with the overall architecture.
Superblock
The superblock is the container for high-level metadata about a file system. It is a structure that exists on the disk (actually, in multiple locations on the disk, for redundancy) and also in memory. It provides the basis for dealing with the on-disk file system, because it defines the file system's management parameters (for example, the total number of blocks, the number of free blocks, and the root index node).
On the disk, the superblock provides the kernel with information about the structure of the file system. In memory, the superblock provides the information and state needed to manage the active (mounted) file system. Because Linux supports mounting multiple concurrent file systems, each super_block structure is maintained in a list (super_blocks, defined in ./linux/fs/super.c; the structure itself is defined in ./linux/include/linux/fs.h).
Figure 3 provides a simplified view of the superblock and its elements. The super_block structure refers to a number of other structures that encapsulate further information. For example, the file_system_type structure maintains the name of the file system (such as ext3) along with various locks and the functions used to obtain and release a super_block. file_system_type objects are managed through the well-known register_filesystem and unregister_filesystem functions (see ./linux/fs/filesystems.c). The super_operations structure defines a large number of functions for reading and writing inodes as well as higher-level operations (such as remounting). The root directory entry (dentry) object is also cached here, as is the block device on which the file system resides. Finally, a number of lists are provided for managing inodes, including s_inodes (a list of all inodes), s_dirty (a list of all dirty inodes), s_io and s_more_io (used for writeback), and s_files (a list of all open files for the given file system).
Figure 3. Simplified view of the super_block structure and its elements
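To make the file_system_type and registration interfaces concrete, here is a hedged sketch of a skeletal kernel module that registers a hypothetical file system named examplefs and removes it on unload. The names and the .mount callback are illustrative assumptions; older 2.6 kernels used a .get_sb callback instead, and a real file system would build its super_block and operations here rather than returning an error.

```c
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/module.h>

/* Hypothetical mount callback: a real file system would read its
 * on-disk superblock here and build the in-memory super_block. */
static struct dentry *examplefs_mount(struct file_system_type *fs_type,
                                      int flags, const char *dev_name,
                                      void *data)
{
    return ERR_PTR(-ENOSYS);   /* placeholder: not actually mountable */
}

static struct file_system_type examplefs_type = {
    .owner   = THIS_MODULE,
    .name    = "examplefs",        /* name shown in /proc/filesystems */
    .mount   = examplefs_mount,    /* obtain a super_block            */
    .kill_sb = kill_anon_super,    /* release the super_block         */
};

static int __init examplefs_init(void)
{
    return register_filesystem(&examplefs_type);
}

static void __exit examplefs_exit(void)
{
    unregister_filesystem(&examplefs_type);
}

module_init(examplefs_init);
module_exit(examplefs_exit);
MODULE_LICENSE("GPL");
```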
Note: Inside the kernel, another management object called vfsmount provides information about a mounted file system. A list of these objects references the superblock and defines the mount point, the name of the /dev device on which the file system lives, and other higher-level attachment information.
Index node (inode)
Linux manages every object in a file system through an object called an inode (short for index node). An inode can refer to a file, a directory, or a symbolic link to another object. Note that because files are also used to represent other kinds of objects, such as devices or memory, inodes represent those as well.
The inode I refer to here is the VFS-layer inode (the in-memory inode). Each file system also has an inode that lives on disk and provides details about the object specific to that particular file system.
A VFS inode is allocated with the slab allocator (from the inode_cache; the Resources section provides a link on the slab allocator). The inode consists of data and operations that describe the inode, its contents, and the various operations that are possible on it. Figure 4 shows a VFS inode. The inode contains several lists, one of which refers to the dentries that reference this inode. It also holds object-level metadata, including the familiar timestamps (creation, access, and modification times) and the owner and permission data (group ID, user ID, and permissions). The inode references the file operations that are permitted on it, most of which map directly to the system call interface (for example, open, read, write, and flush). It also references inode-specific operations (create, lookup, link, mkdir, and so on). Finally, there is a management structure for the object's data in the form of an address_space object, which manages the object's pages in the page cache. The address_space object is used to manage the pages of a file and also to map portions of the file into individual process address spaces, and it carries its own set of operations (writepage, readpage, releasepage, and so on).
Figure 4. Simplified VFS inode Diagram
Note: You can find all of this information in ./linux/include/linux/fs.h.
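To see that metadata from user space, the following small sketch (the path /etc/hostname is again only an illustrative assumption) uses stat(2), which ultimately reports the owner, permission, and timestamp data held in the file's inode.

```c
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void)
{
    struct stat st;

    /* stat(2) ultimately reads metadata from the file's inode. */
    if (stat("/etc/hostname", &st) != 0) {
        perror("stat");
        return 1;
    }

    printf("inode number : %lu\n", (unsigned long)st.st_ino);
    printf("owner uid/gid: %u/%u\n", (unsigned)st.st_uid, (unsigned)st.st_gid);
    printf("permissions  : %o\n", (unsigned)(st.st_mode & 07777));
    printf("modified     : %s", ctime(&st.st_mtime));
    printf("accessed     : %s", ctime(&st.st_atime));

    return 0;
}
```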
Directory Entry (dentry)
The hierarchical structure of the file system is managed by another VFS object called the dentry. A file system has one root dentry (referenced in the superblock), which is the only dentry without a parent. All other dentries have parents, and some have children. For example, if you open a file at the path /home/user/name, four dentry objects are created: one for the root /, one for the home entry of the root directory, one for the user entry of the home directory, and one for the name entry of the user directory. In this way, dentries map cleanly onto the file systems in use today.
A dentry object is defined by the dentry structure (in ./linux/include/linux/dcache.h). It consists of a number of elements that track the relationship of an entry (such as a file name) to the file system and its physical data. Figure 5 shows a simplified diagram of the dentry object. The dentry references the super_block, which defines the particular instance of the file system that contains the object. Next comes the object's parent dentry, followed by a list of any child dentries (if the object happens to be a directory). The operations for the dentry are then defined (such as hash, compare, delete, and release). The object's name comes next; the name is kept here in the dentry rather than in the inode. Finally, a reference to the VFS inode is provided.
Figure 5. Simplified representation of a dentry object
Note that dentry objects exist only in memory in the VFS and are never stored on disk; only the file system's inodes are stored persistently. Dentry objects exist to improve performance. You can see the full definition of the dentry in ./linux/include/linux/dcache.h.
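As a hedged illustration of the parent/child and name relationships described above, the sketch below builds a miniature, user-space stand-in for the dentry chain created by opening /home/user/name; the structure and field names are simplified inventions for the example, not the kernel's own definitions.

```c
#include <stdio.h>
#include <string.h>

/* Simplified, illustrative stand-in for the kernel's dentry: only the
 * parent link and the name, which lives in the dentry, not the inode. */
struct simple_dentry {
    struct simple_dentry *parent;   /* NULL only for the root "/" */
    const char           *name;
};

/* Rebuild a path by walking parent links, the way the dcache lets the
 * kernel map names onto the hierarchy without touching the disk. */
static void print_path(const struct simple_dentry *d)
{
    if (d->parent) {
        print_path(d->parent);
        printf("%s%s", strcmp(d->parent->name, "/") ? "/" : "", d->name);
    } else {
        printf("%s", d->name);      /* the root dentry */
    }
}

int main(void)
{
    /* Opening /home/user/name conceptually creates four dentries. */
    struct simple_dentry root = { NULL,  "/"    };
    struct simple_dentry home = { &root, "home" };
    struct simple_dentry user = { &home, "user" };
    struct simple_dentry name = { &user, "name" };

    print_path(&name);
    printf("\n");                   /* prints: /home/user/name */
    return 0;
}
```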
File object
For each file opened in Linux, a file object exists. This object holds information about the open instance for a given user. Figure 6 provides a simplified view of the file object. As shown, the path structure provides references to the dentry and the vfsmount. A set of file operations is defined for each file; these are the familiar operations such as open, close, read, write, and flush. A set of flags and permissions is also defined (including the group and owner). Finally, state data is kept for the particular file instance, such as the file's current offset.
Figure 6. Simplified representation of a file object
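To show that this state belongs to the open instance rather than to the file itself, here is a small sketch (again assuming an existing /etc/hostname purely for illustration): two independent open(2) calls on the same path yield two file objects, each with its own current offset.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Two open instances of the same file: the kernel creates a
     * separate file object for each, so each keeps its own offset. */
    int fd1 = open("/etc/hostname", O_RDONLY);
    int fd2 = open("/etc/hostname", O_RDONLY);
    if (fd1 < 0 || fd2 < 0) {
        perror("open");
        return 1;
    }

    char c;
    read(fd1, &c, 1);   /* advances only fd1's offset */

    printf("fd1 offset: %ld\n", (long)lseek(fd1, 0, SEEK_CUR));  /* 1 */
    printf("fd2 offset: %ld\n", (long)lseek(fd2, 0, SEEK_CUR));  /* 0 */

    close(fd1);
    close(fd2);
    return 0;
}
```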
Object relationships
Having looked at the important objects in the VFS layer, we can now show the relationships among them in a single diagram. Because I have so far explored the objects in a bottom-up fashion, let's take the opposite perspective and examine them from the top down, from the user's point of view (see Figure 7).
At the top, the open file objects are referenced by a process's file descriptor list. A file object references a dentry object, which in turn references an inode. Both the inode and dentry objects reference the underlying super_block object. Multiple file objects may reference the same dentry (as when two users share the same file). Note also in Figure 7 that one dentry object references another dentry object; here the directory references the file, which in turn references the inode of that particular file.
Figure 7. Relationship between main objects in VFS
VFS Architecture
The internal architecture of VFS consists of a dispatching layer that provides the file system abstraction and a number of caches that improve the performance of file system operations. This section explores the internal architecture and how the major objects interact (see Figure 8).
Figure 8. High-level view of the VFS layer
The two major objects managed dynamically in the VFS are the dentry and inode objects. These are cached to improve the performance of accessing the underlying file systems. When a file is opened, the dentry cache is populated with entries representing the directory levels that make up the path. An inode representing the file is also created for the object. The dentry cache is built around a hash table, and entries are hashed by the object's name. Dentry cache entries are allocated from the dentry_cache slab allocator and are pruned with a least-recently-used (LRU) algorithm when the cache comes under memory pressure. You can find the functions associated with the dentry cache in ./linux/fs/dcache.c and ./linux/include/linux/dcache.h.
For faster lookup, the inode cache is implemented as two lists and a hash table. The first list holds the inodes currently in use; the second holds the unused inodes. Inodes in use are also stored in the hash table. Individual inode cache objects are allocated from the inode_cache slab allocator. You can find the functions associated with the inode cache in ./linux/fs/inode.c and ./linux/include/linux/fs.h. In the current implementation, the dentry cache is the controller of the inode cache: when a dentry object exists, an inode object also exists in the inode cache. Lookups are performed on the dentry cache, and a hit there means the object is present in the inode cache.
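A hedged way to peek at these caches from user space, assuming a system that exposes the usual procfs entries, is to read /proc/sys/fs/dentry-state and /proc/sys/fs/inode-state, whose first two fields report the total and unused entry counts for the dentry and inode caches.

```c
#include <stdio.h>

/* Print the first line of a procfs file that summarizes a VFS cache. */
static void show(const char *label, const char *path)
{
    char line[256];
    FILE *f = fopen(path, "r");

    if (f && fgets(line, sizeof(line), f))
        printf("%-13s %s", label, line);
    else
        printf("%-13s (not available on this system)\n", label);

    if (f)
        fclose(f);
}

int main(void)
{
    /* Fields (on typical kernels): total entries, unused entries, ... */
    show("dentry cache:", "/proc/sys/fs/dentry-state");
    show("inode cache:",  "/proc/sys/fs/inode-state");
    return 0;
}
```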
Conclusion
This article has explored the basic concepts of VFS and the objects that provide a unified interface for accessing different file systems. Linux's scalability and flexibility extend down through its subsystems, and VFS is a prime example. You can learn more about VFS from the links provided in the Resources section.