1. Story Review
In my previous blog post (http://blog.51cto.com/superpcm/2095324), I built two web servers and placed an nginx load balancer in front of them to distribute requests between the two. The earlier tests showed no problems, because the test program was a purely static site that never changed. Later I set up a WordPress service on both web servers and tested image uploads. I found that an uploaded image only landed on one of the servers (say WEB01), so when I shut down WEB01, the image could no longer be seen through WEB02.
That is not acceptable. The reason is that these are two separate servers: even behind a load balancer they remain independent, and they do not replicate data to each other the way the databases do with their replication. The fix is to put everything the WordPress program uploads or modifies into a common directory. Below we use NFS to build such a shared directory for WEB01 and WEB02.
2. Delete the original data and program
(1) Delete the database. Doing it on either mysql01 or mysql02 is enough, since the change will be replicated to the other automatically.
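A minimal sketch of this step; the database name "wordpress" is an assumption, use whatever name your installation actually created:
mysql -uroot -p -e "DROP DATABASE wordpress;"    # run on mysql01 or mysql02; replication syncs the drop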
(2) Delete the blog site content on WEB01 and WEB02 respectively:
rm -rf /usr/local/nginx/html/blog/
3. NFS Introduction
NFS is short for Network File System. Its main function is to let different host systems share files or directories over a network (usually a local area network). An NFS client mounts the directory shared by the NFS server into its own local file system. From the client's point of view, the shared directory looks just like one of its own disk partitions or directories, while in fact it lives on the remote NFS server.
NFS is quite similar to the network shares, shared folders, and mapped network drives of Windows systems, which in turn resemble the Samba service on Linux. However, those two are mainly used for sharing inside an office LAN, whereas the back ends of small and medium-sized Internet site clusters commonly use NFS for data sharing. Large sites may instead use a more complex distributed file system such as GlusterFS, which I may introduce another time.
4. NFS Application Scenarios
In an enterprise cluster architecture, NFS is generally used to store shared static resources such as videos, pictures, and attachments. Files uploaded by site users are usually placed on the NFS share, so every front-end node can read these static resources from the NFS storage. This matches the situation described earlier: without NFS, WEB02 could not see the images users uploaded through WEB01; with NFS, WEB01 and WEB02 share the uploaded images.
5. NFS System Principle
As shown in the diagram, the NFS server sets up a shared folder and configures its permissions; NFS clients that are granted access can then mount that directory onto a directory of their own with the mount command. After mounting, the df command shows it with the same kind of basic information as a local disk.
We said earlier that NFS transmits data over the network, so which ports does it use? In fact, the ports NFS uses to transmit data are random. The way an NFS client learns which ports the NFS server is using is a protocol called RPC (Remote Procedure Call).
Because NFS provides many functions, a single port is not enough, so several ports are used; each function gets its own port when it starts, which is why the ports are random. To keep this randomness from breaking communication, the RPC service is needed. RPC records the port number of each NFS function and, when an NFS client makes a request, hands the port and function information to the client, ensuring that the client connects to the correct NFS port. You can think of the NFS server as the landlord, the NFS client as the tenant, and RPC as the agent in between.
6. Deploying NFS Services
Note that this deployment uses CentOS 6.5, and the firewall has been opened for all ports within the same network segment. The NFS deployment follows.
(1) Use yum to install the NFS and RPC packages, on both the server and the client.
yum install nfs-utils rpcbind -y
(2) Start the rpcbind and NFS services and add both to startup (on the server only)
/etc/init.d/rpcbind start
/etc/init.d/nfs start
echo "/etc/init.d/rpcbind start" >> /etc/rc.local
echo "/etc/init.d/nfs start" >> /etc/rc.local
You can see that several ports are opened; of these, RPC port 111 is fixed.
You can also view the port information that the NFS service has registered with the RPC service; it is visible once NFS has started.
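A sketch of how to check this on the NFS server (the exact output depends on your system):
netstat -lntup | egrep "rpc|nfs"    # rpcbind listens on the fixed port 111; nfsd uses 2049
rpcinfo -p localhost                # list the ports NFS has registered with the RPC service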
(3) Overview of common NFS processes
# ps aux | egrep "nfs|rpc"
rpc        966  0.0  0.0  18976   956 ?  Ss  16:30  0:00 rpcbind
rpcuser    984  0.0  0.1  23348  1364 ?  Ss  16:30  0:00 rpc.statd          #<= checks file consistency
root      1052  0.0  0.0      0     0 ?  S   16:30  0:00 [rpciod/0]
root      1060  0.0  0.1  21784  1380 ?  Ss  16:30  0:00 rpc.mountd         #<= permission management and validation
root      1066  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd4]
root      1067  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd4_callbacks]
root      1068  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]             #<= NFS main processes
root      1069  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]
root      1070  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]
root      1071  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]
root      1072  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]
root      1073  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]
root      1074  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]
root      1075  0.0  0.0      0     0 ?  S   16:30  0:00 [nfsd]             #<= NFS main processes
root      1097  0.0  0.0  25164   740 ?  Ss  16:31  0:00 rpc.idmapd         #<= name mapping daemon
(4) Create a new folder to be shared on the NFS server
mkdir /webdata
touch /webdata/111.txt    # create a new test file
(5) Create an nginx user on the NFS server, then change the owner of the shared folder
Because the web servers already have an nginx user whose UID and GID are both 501, create the same user on the NFS server.
useradd -u 501 -s /sbin/nologin -M nginx
chown -R nginx:nginx /webdata/
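A quick sanity check, since the whole setup assumes the IDs match on every machine:
id nginx    # should report uid=501 and gid=501 on the NFS server as well as on WEB01/WEB02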
(6) On the NFS server, edit the configuration file /etc/exports, then restart the NFS service
The format of the NFS configuration file /etc/exports is:
shared directory    client address 1(param1,param2,...)    client address 2(param1,param2,...)
This configuration means: the shared directory is /webdata; clients in the 192.168.31.0 network segment are allowed to access it; the permissions are read-write; data is written synchronously to the server's disk; and access is mapped to the specified UID and GID (this UID and GID must exist on both the server and the clients). You can look up the individual NFS export parameters for more detail.
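The actual exports line is not shown here, but based on the description it probably looks something like the following; the all_squash/anonuid/anongid options are my assumption about how the UID/GID mapping was configured:
/webdata 192.168.31.0/24(rw,sync,all_squash,anonuid=501,anongid=501)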
After checking the syntax of the exports file, restart the NFS service
exportfs -rv            # check the syntax of the exports file and re-export the shares
/etc/init.d/nfs restart
(7) On the client, start the RPC service and mount the shared directory to check that it works (the NFS and RPC packages were installed earlier)
/etc/init.d/rpcbind start
mount -t nfs 192.168.31.30:/webdata /usr/local/nginx/html/blog/
After a successful mount, you can see the empty test file 111.txt created earlier, and you can check the mount with df.
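A minimal check on the client, assuming the mount point used above:
df -h | grep webdata                 # the NFS share shows up like a local partition
ls /usr/local/nginx/html/blog/       # should list the test file 111.txt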
(8) On the NFS client, create a new test file 222.txt and check that its owner is nginx, then delete both test files
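A sketch of this step, again using the mount point from above:
touch /usr/local/nginx/html/blog/222.txt
ls -l /usr/local/nginx/html/blog/                      # 222.txt should be owned by nginx
rm -f /usr/local/nginx/html/blog/111.txt /usr/local/nginx/html/blog/222.txt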
(9) Add the rpcbind service and the mount command to startup (run this on every NFS client). With that, the NFS setup is complete.
echo "/etc/init.d/rpcbind start" >> /etc/rc.local
echo "mount -t nfs 192.168.31.30:/webdata /usr/local/nginx/html/blog/" >> /etc/rc.local
7. Build the WordPress Blog in the NFS Shared Directory
I will not go through the WordPress setup again here; you can refer to my earlier blog http://blog.51cto.com/superpcm/2092937. After rebuilding the blog, I tested the image upload problem again and it was solved.
PS: The only reason I installed the entire WordPress tree in the NFS shared directory is convenience. A better approach is to find out which directories hold uploaded images and files, and then put only those directories on the NFS share.
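For example, a minimal sketch of sharing only the uploads directory instead of the whole site; wp-content/uploads is the WordPress default upload location, adjust the paths to your own install:
mount -t nfs 192.168.31.30:/webdata /usr/local/nginx/html/blog/wp-content/uploads/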