Chapter 18: Network File System (NFS)


I. Principles of NFS under AIX

The NFS package for AIX includes not only the NFS commands and processes but also the Network Information Service (NIS) and other services. Although NFS and NIS are installed together as one package, they are independent services and can be configured and managed separately.

NFS is a distributed file system that lets users access files and directories on remote systems as if they were local: ordinary operating system commands can be used to create, delete, and read or write remote files and directories and to set their attributes, and all of this appears to happen locally. Because it is built on RPC (Remote Procedure Call), NFS is not tied to a particular machine type, operating system, or network architecture.

A host that has an actual disk and shares it through NFS is called an NFS server; a host that accesses the remote file system via NFS is called an NFS client. The operation by which the server shares a file system is called an export, and the client must mount the file system locally before it can access a file system exported by the server.

A remote file system exported by an NFS server is called a remote resource and is written in the form hostname:pathname, where hostname is the host name of the remote NFS server and pathname is the absolute path of the exported directory on the server. The directory on the client where the remote resource is mounted is called the mount point.

NFS is an application built on several protocols, each with a specific function: RPC (the Remote Procedure Call protocol) and XDR (the eXternal Data Representation protocol).

1. Remote Procedure Call (RPC)

RPC consists of a set of procedures that allow one process (the client process) to have another process (the server process) execute a procedure call as if the client process were running the call in its own address space. Because the client process and the server process are two separate processes, they do not have to run on the same host (although they may).

The client process sends a call message to the server process and then waits for a reply message. The call message contains the procedure's parameters, and the reply message contains the result of executing the procedure. RPC provides a standard method of encoding data between different systems, called eXternal Data Representation (XDR); both RPC call messages and RPC reply messages are encoded according to the XDR standard.

Applications developed with the RPC library typically consist of a client program and a server program. Each client/server pair is identified by an RPC program number: the client program and the server program use the same program number, and in general each RPC server process corresponds to a unique RPC program number that identifies the service it provides. Server names, RPC program numbers, and service aliases are listed in the /etc/rpc file.
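
For illustration, a few representative entries from /etc/rpc are shown below (service name, program number, aliases); the exact aliases may differ slightly between systems:

    portmapper      100000  portmap sunrpc
    nfs             100003  nfsprog
    mountd          100005  mount showmount
    nlockmgr        100021
    status          100024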

2. eXternal Data Representation (XDR)

XDR is the method used to encode the data carried in RPC packets. When the vnode layer determines that the file to be accessed is not local but resides on a remote system, the data is converted to XDR format before it is sent; conversely, when data is received, it is converted from XDR format back into the local machine's native data format.

3. Statelessness

NFS is a stateless protocol. Stateless means that the NFS server does not have to maintain state information for each of its clients; each client must remember its own state information.

The disadvantage is that the server must commit all modifications to stable storage before answering a request. This means that for each reply the server must write not only the file data but also metadata such as the inode and indirect blocks back to disk before returning; otherwise the server could lose data after a crash.

II. The NFS daemons

NFS is implemented with a number of user-level daemons, a small amount of kernel code, and Remote Procedure Call (RPC) network applications.

1. Daemons on NFS servers and clients

If a system is not configured as both an NFS server and an NFS client, it only needs to run the NFS daemons required for its role, because the daemons needed by the NFS server role and by the client role are different.

a) Daemons required on an NFS server:

----> portmap: maps a remote procedure call (RPC) program number to a transport-layer TCP/IP port number

----> rpc.mountd: responds to file system mount requests from clients

----> nfsd: services the clients' I/O requests

b) Daemons required on an NFS client:

----> portmap: maps a remote procedure call (RPC) program number to a transport-layer TCP/IP port number

----> biod: performs read-ahead and delayed-write block I/O against the client's cache

c) Daemons that run on both NFS servers and NFS clients:

----> rpc.statd: provides crash and recovery functions for the rpc.lockd daemon

----> rpc.lockd: handles local and remote file and record locking requests

The NFS daemons are started by /etc/rc.nfs, and the portmap daemon must be started before any of the NFS daemons. All NFS daemons are controlled by the SRC (System Resource Controller).
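
Because the daemons are under SRC control, their status can be checked and individual daemons restarted with the usual SRC commands; a minimal sketch (the exact set of subsystems in the nfs group depends on the AIX level):

    #lssrc -g nfs        (lists the nfs group subsystems, e.g. nfsd, biod, rpc.mountd, rpc.statd, rpc.lockd)
    #stopsrc -s nfsd     (stops the nfsd subsystem)
    #startsrc -s nfsd    (starts it again)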

[Figure: connection activity between the server daemons and the client daemons]

2. The portmap daemon

The portmap daemon is started by the /etc/rc.tcpip script.

The main function of the portmap daemon is to translate RPC program numbers into Internet port numbers.

Each RPC server corresponds to a unique RPC program number. When it starts, the RPC server tells the portmap daemon which port it is listening on for connection requests and which RPC program number it serves. In this way the portmap daemon knows the Internet port number used by every registered RPC server and which program numbers are in service.

The RPC program number is the identifier by which the two sides communicate: related services are identified, registered, and requested by this program number.

For example, the rpc.mountd server process registers with the portmap daemon, stating that its program number is 100005 and that it is listening on port numbers 35005 and 32829; the portmap daemon saves this information in the mapping table it maintains. When a client process (mount) needs to communicate with the rpc.mountd process, it first asks the server's portmap daemon for rpc.mountd's port number (the client knows that rpc.mountd's program number is 100005, which is fixed, but not the port number). The portmap daemon simply tells the client that the ports registered by rpc.mountd are 35005 and 32829, and the client can then communicate with the server's rpc.mountd process through those ports.

For each server process, the client process asks the portmap daemon only once; the answer is saved for subsequent use.

Because all RPC servers register with the portmap daemon when they start, the portmap daemon must already be running on the system where the RPC servers run; this is why the portmap daemon must be started before any RPC server.
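
The mapping table held by the portmap daemon can be inspected with rpcinfo -p. A sketch of the kind of output that corresponds to the rpc.mountd example above (the host name and port assignments are illustrative):

    #rpcinfo -p server3
       program vers proto   port  service
        100000    4   tcp    111  portmapper
        100005    1   udp  35005  mountd
        100005    1   tcp  32829  mountd
        100003    3   tcp   2049  nfs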

3. The rpc.mountd daemon

The rpc.mountd daemon responds to clients' mount requests. It finds out which file systems on the server can be exported by reading the /etc/xtab file (which is generated by the #exportfs -a command). In addition, the rpc.mountd daemon provides a list of the currently mounted file systems and of the clients that have mounted them: #showmount -a

The NFS mount procedure is as follows:

Step 1: The mount process on the client asks the portmap daemon on the server for the port number used by the rpc.mountd daemon.

Step 2: The portmap daemon on the server returns the port number registered by rpc.mountd to the mount process on the client.

Step 3: The mount process on the client sends a mount request to the rpc.mountd daemon on the server, passing it the directory of the file system to be mounted.

Step 4: The rpc.mountd daemon on the server checks the /etc/xtab file to verify the availability of, and the access rights to, the specified directory (file system).

Step 5: If the check passes, the rpc.mountd daemon records the mount in the /etc/rmtab file and returns a file handle (a pointer to the file system directory) to the mount process on the client, indicating that the NFS mount succeeded; otherwise an error is returned.
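
Before issuing the mount, a client can ask rpc.mountd what the server exports; for example (the server name is illustrative):

    #showmount -e server3
    export list for server3:
    /home/server3 (everyone)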

4. The biod and nfsd daemons

The biod daemon running on the client is the block input/output daemon. It is responsible for caching read/write requests on the NFS client and, when the remote file system is accessed, for exchanging input/output with the nfsd daemon on the NFS server. When a user on a client reads or writes a file on the server, the biod daemon on the client responds first: it performs read-ahead and delayed-write requests and then sends the request to the nfsd daemon on the server. The actual read and write operations on the remote file system are ultimately handled by the server's kernel; the biod and nfsd daemons only perform the network data communication.

The communication between the biod daemon and the nfsd daemon does not depend on the portmap daemon.

Step 1: A user application (process) on the client sends a request to the local kernel to read or write a file in the remote file system.

Step 2: The kernel on the client looks for the required data in its cache; if it is found, go directly to step 11, otherwise go to step 3.

Step 3: The client kernel passes the read/write request for the remote file system to the local biod daemon.

Step 4: The biod daemon on the client sends the NFS read/write request to the nfsd daemon on the remote server.

Step 5: After the nfsd daemon on the server receives the request, it first looks in the server's cache; if the data is found, go directly to step 9, otherwise go to step 6.

Step 6: The nfsd daemon on the server asks the kernel to perform the physical read or write for the NFS request.

Step 7: The server kernel reads the data from, or writes it to, the local disk.

Step 8: The server kernel puts the data it has read into the cache and notifies the nfsd daemon that the data is ready.

Step 9: The nfsd daemon on the server takes the required data from the cache and returns the server's result, together with the data, to the biod daemon on the client.

Step 10: The biod daemon on the client stores the data obtained from the server in the local cache and returns the result to the kernel, notifying it that processing is complete.

Step 11: The client kernel returns the data from the cache, and the result obtained from nfsd, to the user application.

(Steps 9 and 11 show that, on both the client and the server, data is always read and written through the cache; even on a cache miss, the data is first read from disk into the cache and then served from there, so reads and writes always go through the cache.)

You can change the default number of biod and nfsd daemons that are started with #smit chnfs.
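
The same change can also be made from the command line with chnfs; for example, a sketch that sets the number of nfsd daemons on a server and the number of biod daemons on a client (check the flags against your AIX level):

    #chnfs -n 16        (run 16 nfsd daemons on the server)
    #chnfs -b 8         (run 8 biod daemons on the client)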

5. The lockd and statd daemons

In a multi-tasking operating system, several users may access a file at the same time, and the same is true with NFS. To guarantee the correctness of file access and the integrity of the file system, file operations must be made mutually exclusive so that only one user at a time can write to a file. The RPC-based lockd and statd daemons exist for this purpose.

The network lock daemon (lockd) only handles lock requests submitted by the kernel or by the lockd daemon on a remote system; the actual locking and unlocking of files and records is done by the kernel. When a lock request is made to the lockd daemon on the server, the lockd daemon on the client contacts it through the RPC/XDR package. The lockd daemon relies on the statd daemon to monitor the lock and unlock service: the lockd daemon on the client and the statd daemon on the server stay in contact with each other, mainly so that files can be re-locked after a host recovers from a crash, and the statd daemon is started whenever the lockd daemon is started.

Step 1: An application on the client issues a locking request to the kernel with the fcntl() system call.

Step 2: The kernel determines whether the locking request is for a local file or a remote file; if it is a local file, the corresponding system call completes the lock, and if it is a remote file, the kernel passes the RPC locking request to the lockd daemon on the client.

Step 3: The lockd daemon on the client sends an RPC file-lock request to the lockd daemon on the server.

Step 4: The lockd daemon on the server submits the request to its kernel; if the kernel finds that the lock can be granted, it locks the file and returns a success code, otherwise it returns a failure code.

Step 5: The lockd daemon on the server sends the result code back to the lockd daemon on the client.

Step 6: If the lock succeeds, the lockd daemon on the server asks its own statd daemon to monitor the client, and the lockd daemon on the client asks its own statd daemon to monitor the server; the two statd daemons then track each other's NFS status by exchanging RPC packets.

Step 7: The lockd daemon on the client returns the lock result to its own kernel, which registers it in its system lock table for later use.

Step 8: The kernel on the client then returns the lock result to the application through the fcntl() system call.

When a host restarts after a crash, its statd daemon sends a message to the other hosts to let them know that it crashed, and the statd daemons on those remote hosts then clear all the locks held by the crashed host.

6. Starting and stopping the NFS server

(See Command Summary)
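
As a rough sketch of the commands involved (see the AIX documentation for the exact flags): mknfs configures and starts NFS, rmnfs stops it, and the daemons can also be started and stopped through SRC:

    #mknfs -B           (starts the NFS daemons now and adds /etc/rc.nfs to /etc/inittab so they start at every boot)
    #rmnfs -B           (stops the NFS daemons and removes the rc.nfs entry from /etc/inittab)
    #startsrc -g nfs    (starts all subsystems in the nfs group)
    #stopsrc -g nfs     (stops them)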

III. Configuring the NFS server

The fileset that provides the NFS server function is bos.net.nfs.server, so this fileset must be installed before you configure the NFS server.
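
You can verify that the fileset is installed with lslpp, for example:

    #lslpp -l bos.net.nfs.server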

1. Exporting an NFS directory with SMIT

(See Cloud Notes)
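
The SMIT fastpath for adding an export is normally #smit mknfsexp (the counterpart of the #smit rmnfsexp fastpath used later to remove one).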

2. Exporting an NFS directory manually

Manually configuring the NFS server takes two steps:

a) Edit the /etc/exports file

b) Run the #exportfs -a command to notify the rpc.mountd daemon that /etc/exports has been modified

The rpc.mountd daemon reads the /etc/exports file automatically only once, when it starts. The /etc/xtab file holds the currently exported directories or file systems; it cannot be edited directly by the user (although it can be viewed with the cat command) and can only be modified through the exportfs command.

In the /etc/exports file, each exported directory occupies one line, in the format directory -option[,option...]:

directory: the directory to export, given as a full path

options: optional; if present, any combination of the following settings may be used, separated by commas:

-ro: remote users can access the exported directory read-only; if ro is not specified, the default permission is rw

-rw=hostname[:hostname...]: specifies the clients that may read and write the exported directory, with multiple host names separated by ":"; if rw is given without a host list, every client has rw permission

-anon=uid: if an unknown user from a client accesses the exported file system or directory, this uid is used as the effective user ID. The root user (uid 0) is always treated by the NFS server as an unknown user unless it is included in the root option below. The default is anon=-2, which means NFS accepts unauthenticated requests anonymously; setting anon=-1 forbids anonymous access.

-root=hostname[:hostname...]: grants root access to the listed hosts, with multiple host names separated by ":"; if this option is not set, no host has root access

-access=client[:client...]: lists the clients that are allowed to mount the exported directory

-sec: specifies the security methods the client must use when it accesses this directory; this is generally written as "-sec=sys:krb5p:krb5i:krb5:dh". If this is defined on the server side, the options field of the NFS stanza in the client's /etc/filesystems file must also include "sec=sys:krb5p:krb5i:krb5:dh".

Note also: if an exported directory and one of its parent or child directories are on the same file system, the two cannot be exported at the same time. For example, if /usr and /usr/local are on the same file system, you cannot export both directories simultaneously.

If a line contains only a directory, the exported directory can be accessed by any client.
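
A hypothetical /etc/exports file that combines several of the options above (the host names and paths are made up for illustration):

    /home/server3      -rw=client1:client2,root=client1,access=client1:client2
    /usr/share/docs    -ro
    /tmp/ptf

After editing the file, run #exportfs -a so that the directories listed in it are exported and /etc/xtab is updated.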

3. The /etc/rmtab file

The /etc/rmtab file records which exported local file systems have been mounted and by which clients. Each mounted file system is recorded in the form hostname:directory, where hostname is the name of the client and directory is the local (server-side) directory that has been mounted. The file is maintained by the rpc.mountd daemon: a record is added each time an NFS mount is made, and the corresponding line is commented out each time an NFS file system is unmounted.

The records in the /etc/rmtab file persist: they remain in the file until the client unmounts the NFS file system. The output of the #showmount -a command is read and generated from this file.

4. Cancelling the export of a directory

With the smit tool: #smit rmnfsexp

You can also edit the /etc/exports file directly: find the line for the directory whose export you want to cancel, delete it, and save the /etc/exports file. Finally run #exportfs -u /tmp/ptf (assuming that the directory whose export is to be cancelled is /tmp/ptf).

IV. Configuring an NFS client

There are three ways to mount an NFS directory: predefined mounts, explicit (direct) mounts, and automatic mounts.

Predefined mount: the directories to be mounted, and the server they are mounted from, are listed in the /etc/filesystems file so that they can easily be mounted later; this is typically used when the client needs an NFS directory for a long time.

Explicit mount: the directory is mounted directly with the mount command.

Automatic mount: the NFS directory is not mounted when the client starts; it is mounted automatically only when a user accesses the NFS directory or files in it.

1. Predefined mount of an NFS directory

The attributes of the directory to be mounted are given in the /etc/filesystems file, which describes each file system in a stanza; the stanzas cover both local file systems and NFS file systems.

/datacenter: the local mount point

dev: the full path name of the remote file system being mounted

vfs: must be nfs, indicating that the virtual file system being mounted is NFS

mount = [true|false]: if true, this NFS file system is mounted automatically when the system boots; if false, it is not mounted automatically

nodename: the host name of the machine on which the remote file system resides

options: mount options, including the following:

a) A soft mount leaves the NFS client least affected if the NFS server crashes.

b) A hard mount is best for mounting a writable NFS file system when the integrity of the files matters.
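
A sketch of an NFS stanza in /etc/filesystems built from the fields above; the remote path and host name are assumptions used only for illustration:

    /datacenter:
            dev             = /export/datacenter
            vfs             = nfs
            nodename        = server3
            mount           = true
            options         = bg,hard,intr
            account         = false

With such a stanza in place, the directory can later be mounted simply with #mount /datacenter.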

2. Mounting an NFS directory directly

For a direct mount, all options must be specified on the mount command line, and the mount must be performed by the root user.

#mkdir /server3

#mount -n server3 -o bg,intr,hard /home/server3 /server3

(Syntax: mount [-f] [-n node] [-o options] [-p] [-r] [-v vfsname] [-t type | [device | node:directory] directory])

3. Mounting an NFS directory automatically

Before AIX 4.3, the automatic mount daemon was automount; from AIX 4.3 onward, the daemon is automountd, and automount has become a command rather than a daemon.

The automountd daemon monitors the mount points of the specified NFS directories; when an I/O operation is performed on a file under one of those mount points, the automountd daemon issues an RPC call to mount the NFS file system and automatically creates any directory that does not yet exist on the client.

First, check on the client which NFS directories the server has exported. Then create an autofs map file; map files are of two kinds, direct and indirect, and the file name and the directory where the file is placed are arbitrary.

a) Indirect maps

The format is:

[mount point used by the automountd daemon, as a directory relative to the parent]    [mount options]    [server and path name of the directory on the server]

An indirect map file is used when the parent directory of the autofs mount-point directory (in this example, s3testfs) must be given on the automount command line.
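
A sketch of an indirect map, assuming the parent directory /s3testfs and a hypothetical map file /etc/auto.s3testfs; each line gives the key (relative mount point), the mount options, and the server:directory to mount:

    server3    -rw,hard,intr    server3:/home/server3

With this map, accessing /s3testfs/server3 would trigger an automatic mount of server3:/home/server3; the parent directory and the map file are given on the automount command line, for example #automount /s3testfs /etc/auto.s3testfs.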

b) Direct maps

The format is the same as for an indirect map, except that the mount point in the first field is an absolute path.

When a direct map file is passed to the automount command, the parent-directory argument is replaced by "/-".

c) The auto.master map

When the automount command is run without arguments, it looks for a master map file to obtain the list of automatic mount points and their maps. This provides an easy way to use several map files, both direct and indirect, with a single automount invocation. The master map file can be /etc/auto.master or /etc/auto_master.
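
A sketch of a master map that pulls in both kinds of map (the file names and entries are hypothetical); the first line is an indirect map whose keys are relative to /s3testfs, and the second uses /- to introduce a direct map whose keys are absolute mount points:

    /s3testfs    /etc/auto.s3testfs
    /-           /etc/auto.direct

where /etc/auto.direct might contain a line such as:

    /datacenter    -rw,hard,intr    server3:/export/datacenter

Running the automount command with no arguments then reads this master map and sets up all the listed mount points.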
