NFS Protocol
NFS (Network File System) is not a file system in the traditional sense, but a network protocol for accessing a remote file system. In the TCP/IP protocol stack of the NFS service, NFS is the application-layer protocol, XDR is the presentation layer, and RPC is the session layer; the transport layer supports both UDP and TCP, and the network layer is IP. The NFS/XDR/RPC protocol specifications are described in detail in Chapter 29 of "TCP/IP Illustrated, Volume 1: The Protocols" and are not repeated here.
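To make the presentation layer concrete, here is a minimal sketch of XDR encoding (per RFC 4506): integers are 4-byte big-endian quantities, and variable-length strings are a 4-byte length followed by the data, zero-padded to a 4-byte boundary. The helper names are illustrative, not from any NFS library.

```python
import struct

def xdr_uint(n: int) -> bytes:
    """Encode an unsigned int as a 4-byte big-endian quantity (RFC 4506)."""
    return struct.pack(">I", n)

def xdr_string(s: bytes) -> bytes:
    """Encode a string/opaque: 4-byte length, data, zero-padded to 4 bytes."""
    pad = (4 - len(s) % 4) % 4
    return xdr_uint(len(s)) + s + b"\x00" * pad

encoded = xdr_string(b"hello")
print(encoded.hex())  # 0000000568656c6c6f000000 (4-byte length 5, data, 3 pad bytes)
```

Every argument and result in the NFS procedures below is serialized this way before being carried in an RPC message.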
Over the years there have been multiple versions of the NFS protocol, each with a corresponding RFC specification (for example, RFC 1094 for NFSv2, RFC 1813 for NFSv3, and RFC 3530 for NFSv4). A comparison of the versions follows.
NFS provides its service through a set of NFS procedures. The following is the set of NFS procedures defined in RFC 1813.
- null() returns (): Performs no action; it serves two purposes: acting as a ping to the server, and measuring the round-trip time (RTT) between client and server.
- lookup(dirfh, name) returns (fh, attr): Returns the fhandle and attributes of the named file in the directory.
- create(dirfh, name, attr) returns (newfh, attr): Creates a new file and returns its fhandle and attributes.
- remove(dirfh, name) returns (status): Deletes the named file from the specified directory.
- getattr(fh) returns (attr): Returns the file's attributes; this call is similar to a stat call.
- setattr(fh, attr) returns (attr): Sets the mode, uid, gid, size, access time, and modify time attributes of a file. Setting the size to 0 is equivalent to calling truncate on the file.
- read(fh, offset, count) returns (attr, data): Returns up to count bytes of data starting at offset in the file. The read operation also returns the file's attributes.
- write(fh, offset, count, data) returns (attr): Writes count bytes of data to the file at offset, returning the file's attributes after the write completes.
- rename(dirfh, name, tofh, toname) returns (status): Renames the file name in directory dirfh to toname in directory tofh.
- link(dirfh, name, tofh, toname) returns (status): Creates a link named toname in directory tofh that points to the file name in directory dirfh.
- symlink(dirfh, name, string) returns (status): Creates a symbolic link named name in directory dirfh. The server does not interpret the contents of string; it simply stores it and associates it with the symbolic-link file.
- mkdir(dirfh, name, attr) returns (fh, newattr): Creates a directory named name in directory dirfh and returns its fhandle and attributes.
- rmdir(dirfh, name) returns (status): Removes the empty directory named name from dirfh.
- readdir(dirfh, cookie, count) returns (entries): Returns up to count bytes of directory entries from directory dirfh. Each entry contains a file name, a file id, and a cookie, an opaque pointer interpreted by the server that identifies the next entry. The cookie lets subsequent readdir calls resume from a given position; a readdir call with cookie 0 returns entries starting from the first one in the directory.
- statfs(fh) returns (fsstats): Returns information about the file system, such as block size and the number of free blocks.
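Each of these procedures is invoked via an ONC RPC call message. As a minimal sketch (constants are from RFC 5531 and RFC 1813; the function name is illustrative), here is how the wire format of an NFSv3 NULL call with AUTH_NONE credentials can be built:

```python
import struct

# Constants from RFC 5531 (ONC RPC) and RFC 1813 (NFSv3); AUTH_NONE has flavor 0.
CALL, RPC_VERSION = 0, 2
NFS_PROGRAM, NFS_V3, NFSPROC3_NULL = 100003, 3, 0

def rpc_null_call(xid: int) -> bytes:
    """Build the RPC call body for an NFSv3 NULL procedure (no arguments)."""
    return struct.pack(
        ">10I",
        xid, CALL, RPC_VERSION,
        NFS_PROGRAM, NFS_V3, NFSPROC3_NULL,
        0, 0,   # credential: flavor AUTH_NONE, length 0
        0, 0,   # verifier:   flavor AUTH_NONE, length 0
    )

msg = rpc_null_call(0x12345678)
print(len(msg))  # 40 bytes
```

Over UDP this body is the whole datagram; over TCP it would be prefixed with a 4-byte record-marking header.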
NFS Feature Comparison
NFSv3 feature comparison
- The maximum file size in v2 is 2 GB (32-bit); v3 supports larger files (64-bit).
- v2 limits each read or write to 8,192 bytes; v3 removes that limit, so the number of bytes per RPC is bounded only by the TCP/IP transport.
- v3 introduces a new procedure, commit, which enables asynchronous writes and improves write performance.
- v3 introduces the new procedure access, which supports server-side ACL permission checks.
- v3 introduces the new procedure readdirplus, which returns file handles and attributes along with directory entries, reducing the number of lookup calls.
- v3 optimizes the RPC procedures: every RPC that affects file attributes returns the new attributes, reducing the number of getattr calls.
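The asynchronous-write/commit protocol can be illustrated with a toy model (not a real server; class and method names are invented for this sketch): the server returns a boot-time write verifier with every unstable write and commit, and a changed verifier tells the client that uncommitted data may have been lost and must be resent.

```python
import os

class ToyNfsServer:
    """Toy model of NFSv3 UNSTABLE writes plus COMMIT (illustrative only)."""
    def __init__(self):
        self.stable = {}            # offset -> data, durably stored
        self.unstable = {}          # offset -> data, only in server memory
        self.verf = os.urandom(8)   # write verifier, fixed for one boot

    def write_unstable(self, offset, data):
        self.unstable[offset] = data
        return self.verf

    def commit(self):
        self.stable.update(self.unstable)   # flush cached writes to disk
        self.unstable.clear()
        return self.verf

    def reboot(self):
        self.unstable.clear()       # cached, uncommitted data is lost
        self.verf = os.urandom(8)   # new boot verifier

srv = ToyNfsServer()
v1 = srv.write_unstable(0, b"hello")
srv.reboot()
v2 = srv.commit()
print(v1 != v2)  # True: the verifier changed, so the client must replay the write
```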
NFSv4 feature comparison
- v3 is stateless; v4 introduces state, improving the file system's ability to recover from failures.
- v4 supports file delegation (a client can work on a local replica until another client requests the same file), improving file-system consistency.
- v4 introduces the new procedure compound, which bundles multiple operations into a single request, increasing expressiveness and reducing the number of RPC calls.
- v4 mandates support for RPCSEC_GSS, improving file-system security.
- v4 supports ACLs, improving file-access management.
- The v4 server presents clients with a unified pseudo-file-system view: all exported directories on the server must reside under a single pseudo-file-system root export.
- A v3 client's IP address is detected automatically; a v4 client additionally supports the clientaddr mount parameter, which specifies the client's IP address explicitly.
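As an illustration of the v4 pseudo file system, a hedged sketch of an /etc/exports configuration (paths and the client network are hypothetical): the export marked fsid=0 becomes the pseudo-file-system root, and the other exports appear as subdirectories of it on the client.

```
# /etc/exports (illustrative)
# fsid=0 marks the NFSv4 pseudo-file-system root:
/export        192.168.0.0/24(rw,sync,fsid=0,crossmnt)
/export/data   192.168.0.0/24(rw,sync)
```

A v4 client would then mount the whole tree with `mount -t nfs4 server:/ /mnt`.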
NFSv4.1 feature comparison
- Clients can access storage devices in parallel (pNFS).
- Support for multiple servers.
- Supports separating file-system metadata from data.
- The delegation feature is extended to directories.
- Supports a session mechanism, improving recovery from link failures, crashes, and other faults.
Linux NFS Implementation and Examples
The Linux NFS architecture is a typical client/server architecture. The server-side application consists of the following parts:
- portmap: the port mapper, whose main job is port mapping for RPC programs. When a client wants to use a service provided by an RPC server, such as NFS, portmap tells the client which port the service is registered on, so the client can request the service from the server through that port.
- rpc.mountd: the NFS mount daemon, which implements the NFS MOUNT protocol and is responsible for mounting/unmounting NFS file systems and for access control. It reads the NFS configuration file /etc/exports to check client access rights. After a successful mount, the client obtains a file handle (fh) for the server's file system.
- rpc.nfsd: the NFS server daemon, the user-space part of the NFS service, responsible for starting the nfsd kernel threads. Note that most of the NFS service's functionality is handled by the nfsd kernel threads.
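The portmap lookup described above is itself an RPC call. As a sketch (constants from RFC 1833; the function name is illustrative), here is the wire format of a PMAPPROC_GETPORT request asking where the NFSv3/UDP service listens; it would be sent as a UDP datagram to port 111:

```python
import struct

# Portmapper is RPC program 100000, version 2; PMAPPROC_GETPORT = 3 (RFC 1833).
PMAP_PROGRAM, PMAP_VERSION, PMAPPROC_GETPORT = 100000, 2, 3
NFS_PROGRAM, NFS_V3, IPPROTO_UDP = 100003, 3, 17

def pmap_getport_call(xid: int) -> bytes:
    """Build a GETPORT request for the NFSv3-over-UDP service."""
    header = struct.pack(
        ">10I",
        xid, 0, 2,                                    # xid, CALL, RPC version 2
        PMAP_PROGRAM, PMAP_VERSION, PMAPPROC_GETPORT,
        0, 0, 0, 0,                                   # AUTH_NONE cred and verifier
    )
    args = struct.pack(">4I", NFS_PROGRAM, NFS_V3, IPPROTO_UDP, 0)
    return header + args

msg = pmap_getport_call(1)
print(len(msg))  # 56 bytes: 40-byte RPC header + 16 bytes of arguments
```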
As the architecture shows, most of the NFS service's functionality is implemented in kernel modules. In addition to the modules shown, the kernel provides several kernel daemons:
- nfsd: processes NFS RPC requests.
- nfsiod: provides the NFS client with efficient buffering mechanisms, such as read-ahead and delayed writes, improving NFS file-system performance.
- rpciod: the RPC (Remote Procedure Call) I/O daemon, which drives I/O on the client side.
The following NFS protocol message-flow example covers some typical network-file-system operation scenarios, such as:
- Service registration
- NFS mount: mount 168.0.155.1:/datadisk0 /tmp
- Change working directory: cd /tmp
- List files in the directory: ls
- Read files: more / tail Bootcfg.ini
Linux NFS Debugging
NFS application debugging
To enable application-level debugging:
/usr/sbin/portmap -d
/usr/sbin/rpc.mountd -d all
/usr/sbin/rpc.nfsd -d -s
To view NFS configuration and logs:
cat /etc/exports
cat /var/lib/nfs/rmtab
cat /var/lib/nfs/etab
cat /var/lib/nfs/xtab
cat /var/lib/nfs/state
tail /var/log/messages
NFS kernel module debugging
To enable NFS module debugging:
sysctl -w sunrpc.nfs_debug=2147483647
sysctl -w sunrpc.nfsd_debug=2147483647
To view NFS-related statistics and logs:
cat /proc/slabinfo | grep nfs
ls /proc/fs/nfsfs/
nfsstat
dmesg
TCP/IP module debugging
To enable RPC module debugging:
sysctl -w sunrpc.rpc_debug=2147483647
To view RPC-related statistics and logs:
cat /var/run/portmap_mapping
cat /proc/net/rpc/nfs
cat /proc/net/rpc/nfsd
To view TCP/IP-related statistics and configuration:
ping                          # check network connectivity
netstat -tpwn | grep 2049     # view NFS TCP connections
cat /proc/sys/net/ipv4/...    # view network configuration, such as tcp_retries2
cat /proc/net/rpc/nfs         # NFS client statistics fields
cat /proc/net/rpc/nfsd        # NFS server statistics fields
cat /proc/net/snmp            # fields such as Ip: ReasmFails
Network capture:
tcpdump -s 9000 -w /tmp/dump.out port 2049
Other: Porting nfs-utils (cross-compilation)
./configure \
  CC=xx-gcc \
  --build=$(./config.guess) \
  --host=mips64-unknown-linux-gnu \
  LDFLAGS="-L/usr/local/lib" \
  CPPFLAGS="-I/usr/local/include" \
  --disable-tirpc --disable-gss --disable-uuid --without-tcp-wrappers --with-gnu-ld
make
make install
File System Export Conditions
The file systems NFS exports are configured in /etc/exports. A file system can be exported only if it meets the following two conditions:
- The file system must have a device number (FS_REQUIRES_DEV, i.e. it is backed by a storage device) or an fsid (which requires NFSEXP_FSID or a UUID).
- The file system must support the s_export_op interface. File systems that support s_export_op include storage-device file systems such as ext3/4 and ubifs; others such as rootfs, ramfs, and sysfs do not.
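As an illustrative /etc/exports fragment (paths and options are hypothetical): a block-device file system such as ext4 satisfies the first condition through its device number, while a file system without a device number that still supports export, such as tmpfs, must be assigned an fsid explicitly.

```
# /etc/exports (illustrative)
# ext4 is backed by a block device, so it has a device number:
/srv/share   192.168.0.0/24(rw,sync,no_subtree_check)
# tmpfs has no device number; exporting it requires an explicit fsid:
/dev/shm     192.168.0.0/24(rw,sync,no_subtree_check,fsid=42)
```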