The system is CentOS 5.6. Assume that the NFS server's IP address is 192.168.1.2 and the NFS client's IP address is 192.168.0.100.

1. Install NFS on the server:

    yum install nfs-utils portmap

nfs-utils provides the NFS server programs and the corresponding management tools. portmap is a program that manages RPC connections; NFS requires it because it is the daemon that dynamically assigns ports for the NFS services. If portmap is not started, NFS cannot start.

2. Configure the NFS server by editing the /etc/exports file:

    vim /etc/exports

Set the shared directories:

    /home/nfsdir  *(rw)
    /home/share   192.168.0.100(rw,sync,no_root_squash) *(ro)

Here, /home/nfsdir *(rw) shares the /home/nfsdir directory with read/write permission for all users. /home/share 192.168.0.100(rw,sync,no_root_squash) *(ro) shares the /home/share directory: 192.168.0.100 has read/write permission and its root user keeps full management and access permissions, while other machines have read-only access.

The configuration file format is [shared directory] [host name or IP(parameter,parameter)]. The parameters are optional; if none are specified, NFS uses the default options, which are sync, ro, root_squash, and no_delay. If the host name or IP address is omitted, the directory is shared with any client.

When the same directory is shared with multiple clients but each client gets different permissions, you can write:

    [shared directory] [host1 or IP1(param1,param2)] [host2 or IP2(param3,param4)]

Below are some common NFS sharing parameters:

    ro           read-only access
    rw           read/write access
    sync         write data to memory and to disk at the same time
    async        keep data in memory first instead of writing it straight to disk
    secure       serve NFS through secure TCP/IP ports below 1024 (default)
    insecure     serve NFS through ports above 1024
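As a hypothetical illustration of the multi-client format (the path and the /24 network below are examples, not part of the setup above), one directory can grant different permissions per client:

```shell
# /etc/exports -- hypothetical example: 192.168.0.100 gets read/write,
# the rest of the 192.168.0.0/24 network gets read-only.
/home/project  192.168.0.100(rw,sync)  192.168.0.0/24(ro,sync)
```

Note that there must be no space between the host and the opening parenthesis; `host (rw)` with a space means "host with defaults, plus everyone with rw", which is a common mistake.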
    wdelay            if multiple clients write to the shared directory, group the writes together (default)
    no_wdelay         if multiple clients write to the shared directory, write the data immediately; not needed when async is used
    hide              do not share subdirectories of the NFS shared directory
    no_hide           share subdirectories of the NFS shared directory
    subtree_check     if a subdirectory such as /usr/bin is shared, force NFS to check the permissions of the parent directory (default)
    no_subtree_check  the opposite of the above: do not check the permissions of the parent directory
    all_squash        map the UID and GID of shared files to the anonymous user; suitable for public directories
    no_all_squash     keep the UID and GID of shared files (default)
    root_squash       map all requests from root to the same permissions as the anonymous user (default)
    no_root_squash    root keeps full management and access permissions
    anonuid=xxx       the UID of the anonymous user in the NFS server's /etc/passwd file
    anongid=xxx       the GID of the anonymous user in the NFS server's /etc/passwd file

After modifying the exports file, run the following command to re-export the settings in /etc/exports without restarting the NFS service:

    exportfs -arv

3. Configure iptables. NFS assigns temporary ports to its daemons, so it is difficult to know which ports must be opened on the firewall; we therefore need to fix some of the ports. Edit /etc/services:

    vim /etc/services

and add at the end of the file:

    mountd 1011/tcp # rpc.mountd
    mountd 1011/udp # rpc.mountd

Then open the related ports in the NFS server's firewall: 1011/tcp, 1011/udp, 111/tcp, 111/udp, 2049/tcp, and 2049/udp, six ports in total. The first four can be added manually in the graphical firewall tool, while the last two (2049) can be opened by ticking NFS4.

4. Start the portmap service:

    service portmap restart

5. Then start the NFS service:

    service nfs restart
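The six fixed ports can also be opened from the command line instead of the graphical tool. A minimal sketch (assuming the default INPUT chain; the function name is illustrative, and the printed rules should be reviewed before being applied as root):

```shell
# Print the iptables commands for the six fixed NFS ports:
# 111 (portmap), 1011 (mountd) and 2049 (nfs), each over tcp and udp.
nfs_firewall_rules() {
    for port in 111 1011 2049; do
        for proto in tcp udp; do
            echo "iptables -A INPUT -p $proto --dport $port -j ACCEPT"
        done
    done
}

nfs_firewall_rules
# After reviewing the output, apply it as root with:  nfs_firewall_rules | sh
```

Printing the rules first rather than applying them directly makes the sketch safe to run on any machine.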
If the portmap service was not started first, the NFS service will hang at "Starting NFS daemon" for a long time.

6. Set nfs and portmap to start at boot:

    chkconfig --level 345 nfs on
    chkconfig --level 345 portmap on

View the port information for the running services:

    rpcinfo -p 192.168.1.2

    program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100011    1   udp    875  rquotad
    100011    2   udp    875  rquotad
    100011    1   tcp    875  rquotad
    100011    2   tcp    875  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32803  nlockmgr
    100021    3   tcp  32803  nlockmgr
    100021    4   tcp  32803  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp   1011  mountd
    100005    1   tcp   1011  mountd
    100005    2   udp   1011  mountd
    100005    2   tcp   1011  mountd
    100005    3   udp   1011  mountd
    100005    3   tcp   1011  mountd

If you only need a simple NFS server, you only need to open three daemon ports: 111, the portmap port, which provides the NFS port mapping; 2049, the NFS port, which manages client access to the host; and 1011, the mountd port configured above, which mainly manages NFS file system permissions.

7. The client also needs the nfs-utils and portmap packages installed, and must start the portmap and netfs services:

    yum install nfs-utils portmap
    service portmap restart
    service netfs restart
    chkconfig --level 345 netfs on

8. After the NFS server has started successfully, the client can use the showmount command to test whether the server is reachable. Command format: showmount -e [hostname|IP]. The showmount command is installed as part of the nfs-utils package.

    showmount -e 192.168.1.2

shows the following:

    /home/nfsdir *
    /home/share  (everyone)

9. The client creates the mount folders:

    cd /mnt
    mkdir nfs1
    mkdir nfs2
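When the client setup is scripted, the showmount output can be turned into a list of mountable paths. A small sketch (the helper name and the canned sample text are illustrative; in practice you would feed it the output of `showmount -e 192.168.1.2`):

```shell
# Hypothetical helper: extract export paths from `showmount -e` output.
# Takes the text as an argument so it can be exercised without a live server.
list_exports() {
    # Skip the "Export list for ..." header line; keep the first
    # whitespace-separated column of each remaining line (the path).
    echo "$1" | awk 'NR > 1 { print $1 }'
}

# Canned sample standing in for real showmount output:
sample="Export list for 192.168.1.2:
/home/nfsdir *
/home/share (everyone)"

list_exports "$sample"
# prints /home/nfsdir and /home/share, one per line
```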
10. The client mounts the NFS shares with the mount command:

    mount -t nfs 192.168.1.2:/home/nfsdir /mnt/nfs1
    mount -t nfs 192.168.1.2:/home/share /mnt/nfs2

Command format: mount -t nfs server-address:shared-directory local-mount-point

11. The client can run the df command or the mount command to view the mount information:

    mount
    192.168.1.2:/home/share on /mnt/nfs2 type nfs (rw,addr=192.168.1.2)
    192.168.1.2:/home/nfsdir on /mnt/nfs1 type nfs (rw,addr=192.168.1.2)

or df -h.

12. Run the following commands to unmount the NFS file systems on the client:

    umount /mnt/nfs1
    umount /mnt/nfs2

13. The client can mount the NFS shares automatically at system startup. To do this, write the NFS mount information into the /etc/fstab file. Edit /etc/fstab:

    vim /etc/fstab

and add a line at the end, for example:

    192.168.1.2:/home/nfsdir /mnt/nfsdir nfs defaults 0 0

After adding the entry, first test whether mount -a succeeds; if it does, the fstab syntax is correct and the NFS service is working. Restart the system and expect the share to be mounted automatically. The strange thing is that after the system starts, the NFS share is not mounted, and the system log contains the following:

    mount to NFS server '192.168.1.2' failed: System Error: No route to host.

However, once the system is up, running mount -a works fine. From this it appears to be a network problem: the mount is probably attempted after the interface comes up but before routing is fully ready, or while the network is still initializing. So we need to add a short sleep to the script that auto-mounts NFS, to make sure the network has finished initializing. Modify /etc/init.d/netfs (use vim to modify it; do not use a graphical editor). Find the lines

    [ ! -f /var/lock/subsys/portmap ] && service portmap start
    action $"Mounting NFS filesystems: " mount -a -t nfs

and add the following line between them:

    action $"Sleeping for 30 secs: " sleep 30

Save and test again. OK. You can tune the delay to your actual environment, but 30 seconds basically meets the demand. If you can see the following information in the system log, everything is normal:

    kernel: bnx2: eth0 NIC Link is Up, 1000 Mbps full duplex
    netfs: Sleeping for 30 secs: succeeded
    netfs: Mounting NFS filesystems: succeeded

14. View the RPC status of the current host:

    rpcinfo -p localhost
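The rpcinfo listing from step 14 can also be checked programmatically for the three essential services. A sketch (the function name and the canned sample are illustrative; in practice you would pass it the real output of `rpcinfo -p localhost`):

```shell
# Hypothetical check: report whether portmapper, nfs and mountd appear
# in a block of rpcinfo output passed as the first argument.
check_rpc() {
    for svc in portmapper nfs mountd; do
        case "$1" in
            *"$svc"*) echo "$svc: registered" ;;
            *)        echo "$svc: MISSING"    ;;
        esac
    done
}

# Canned sample standing in for `rpcinfo -p localhost` output:
sample="100000 2 tcp  111 portmapper
100003 2 tcp 2049 nfs
100005 1 tcp 1011 mountd"

check_rpc "$sample"
```

A missing "nfs" or "mountd" line here usually means the corresponding service was not started, or portmap was started after it.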