As the number of users continued to rise, I deployed a scalable, highly available cluster (LVS + keepalived) for the applications with the heaviest traffic. Even so, some users still reported slow access. The root cause turned out to be the shared storage server: NFS. In my network environment, N servers share the storage space of a single server through NFS, and that NFS server is overwhelmed. Checking the system log, I found it full of errors such as NFS service timeouts. With only a few NFS clients, NFS performance is generally fine; once the number of clients grows large and read/write operations become frequent, the results are far from what we expect. The following figure shows how the cluster shares storage over NFS:
Besides the performance problem, this architecture has a single point of failure: once the NFS server fails, every application that depends on the shared data becomes unavailable. Although data is synchronized to another server with rsync as a backup for the NFS service, that does nothing for the performance of the system as a whole. Given these requirements, we either optimize the NFS server or adopt a different solution. Since no amount of tuning can keep up with the performance demands of an ever-growing number of clients, the only real choice is a different solution. After some research, a distributed file system proved to be the right fit. With a distributed file system, data access between servers is no longer a one-to-many relationship (one NFS server, many NFS clients) but a many-to-many one, and performance improves dramatically.
There are by now dozens of distributed file system solutions, such as Lustre, Hadoop, and pNFS. I tried PVFS, Hadoop, and MooseFS, and studied the implementation approaches of several others, such as Lustre and KFS, before finally choosing the distributed file system MooseFS (MFS) as my shared storage server. Why this one? A few of my reasons:
1. Easy to implement. Installing, deploying, and configuring MFS is much simpler and easier than with the alternatives; just look at Lustre's PDF documentation, which runs to more than 700 pages.
2. Expansion without stopping the service. Once the MFS framework is in place, server capacity can be added at any time; growing or shrinking the cluster does not affect existing services. Note: hadoop implements this as well.
3. Easy service recovery. Besides MFS's own high-availability features, manual service recovery is also very quick; see point 1 for why.
4. I am very grateful for the help I received from the author during my experiments.
MFS features (translated from the official website)
★High reliability (data can be divided into several copies and stored in different computers)
★Dynamically expand available disk space by adding a computer or a new hard disk
★You can set how long deleted objects are retained before their space is reclaimed:
[root@mysql-bk serydir]# mfsgettrashtime bind-9.4.0.tar.gz
bind-9.4.0.tar.gz: 600
In other words, 10 minutes (600 seconds) after deletion the file is removed for good and its disk space reclaimed.
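To change the retention period, the companion mfssettrashtime tool can be used; a minimal sketch against the same example file (recursive changes are shown later with mfsrsettrashtime):
# set the trash retention of a single file to 600 seconds (10 minutes)
mfssettrashtime 600 bind-9.4.0.tar.gz
# confirm the new value
mfsgettrashtime bind-9.4.0.tar.gz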
★Create a snapshot for a file
Composition of MFS File System
1. Metadata server (master). Responsible for managing the whole file system. MFS currently supports only a single metadata server, so the master is a single point of failure and calls for very stable hardware. We can hope that MFS will support multiple masters in the future to improve system reliability further.
2. Metadata log server (metalogger). Backs up the master's change log files, which are named changelog_ml.*.mfs. When data on the metadata server is lost or damaged, the files can be fetched from the log server for recovery.
3. Data storage server (chunkserver). The servers that actually hold user data. When a file is stored, it is first split into chunks, and the chunks are then replicated between the chunkservers (the number of copies can be set manually; we recommend three). There can be many data servers; the more there are, the more "disk space" is available and the higher the reliability.
4. Client. A host that mounts the MFS file system for storage and access is an MFS client. Once MFS is mounted successfully, the client can use this virtual shared storage just as it previously used NFS.
Metadata Server installation and configuration
The metadata server can run on Linux or UNIX; pick the operating system you are used to. In my environment, FreeBSD serves as the platform for the MFS metadata service. Building from source is essentially the same on all UNIX-like platforms.
(1) install the metadata service
1. Download the source code:
wget http://ncu.dl.sourceforge.net/project/moosefs/moosefs/1.6.11/mfs-1.6.11.tar.gz
2. Unpack: tar zxvf mfs-1.6.11.tar.gz
3. Change directory: cd mfs-1.6.11
4. Create the user: useradd mfs -s /sbin/nologin
5. Configure: ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs
6. Compile and install: make; make install
(2) configure the metadata service
The metadata server's configuration files are stored under the installation directory, /usr/local/mfs/etc. Unlike the mfs-1.5.12 release, the mfs-1.6.x installation provides only template files, with names such as mfsmaster.cfg.dist. For the mfs master to work, two configuration files are required: mfsmaster.cfg and mfsexports.cfg. The former is the main configuration file; the latter controls access permissions (consulted when an mfs client connects).
(1) The main configuration file mfsmaster.cfg can be copied straight from the template. Open /usr/local/mfs/etc/mfsmaster.cfg:
# WORKING_USER = mfs
# WORKING_GROUP = mfs
# SYSLOG_IDENT = mfsmaster
# LOCK_MEMORY = 0
# NICE_LEVEL = -19
# EXPORTS_FILENAME = /usr/local/mfs/etc/mfsexports.cfg
# DATA_PATH = /usr/local/mfs/var/mfs
# BACK_LOGS = 50
# REPLICATIONS_DELAY_INIT = 300
# REPLICATIONS_DELAY_DISCONNECT = 3600
# MATOML_LISTEN_HOST = *
# MATOML_LISTEN_PORT = 9419
# MATOCS_LISTEN_HOST = *
# MATOCS_LISTEN_PORT = 9420
# MATOCU_LISTEN_HOST = *
# MATOCU_LISTEN_PORT = 9421
# CHUNKS_LOOP_TIME = 300
# CHUNKS_DEL_LIMIT = 100
# CHUNKS_WRITE_REP_LIMIT = 1
# CHUNKS_READ_REP_LIMIT = 5
# REJECT_OLD_CLIENTS = 0
# deprecated, to be removed in MooseFS 1.7
# LOCK_FILE = /var/run/mfs/mfsmaster.lock
Although every line is commented out, these are exactly the defaults; to change a value, uncomment the line and set it explicitly. The meanings of some of the entries are described below.
◆ EXPORTS_FILENAME = /usr/local/mfs/etc/mfsexports.cfg: location of the access-control file.
◆ DATA_PATH = /usr/local/mfs/var/mfs: the data storage path, which holds only metadata. What data, exactly? Look in the directory and you will find three kinds of files: the metadata file itself (metadata.mfs, or metadata.mfs.back while the master is running), the change logs (changelog.*.mfs), and the session/statistics files (sessions.mfs, stats.mfs).
The same kinds of files also turn up in the corresponding directory of the metadata log server, which backs them up.
◆ MATOCS_LISTEN_PORT = 9420: MATOCS means master-to-chunkserver; the metadata server listens on port 9420 for connections from the data storage servers (chunkservers).
◆ MATOML_LISTEN_PORT = 9419: MATOML means master-to-metalogger, the port used for backing up the metadata server's change logs. Note: version 1.5.12 and earlier did not have this entry.
◆ MATOCU_LISTEN_PORT = 9421: the metadata server listens on port 9421 for remote client connections (clients mount MFS with mfsmount).
◆ The meanings of the remaining entries are easy to work out; the time-related values are all in seconds.
This configuration file can work without modification.
(2) The configuration file /usr/local/mfs/etc/mfsexports.cfg can also be copied straight from the template. Its content closely resembles an NFS server's exports file. In practice, take the default line in this file as a reference and adapt it to your application's needs. My mfsexports.cfg contains:
192.168.93.0/24 / rw
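Each line follows the pattern "client address range, exported path, options". A couple of additional entries for illustration (the addresses and option combinations here are my own examples, not from my production file):
# a single host, read-only access to the MFS root
192.168.93.20    /    ro
# a subnet, read-write, all subdirectories mountable, remote root kept as uid 0
192.168.93.0/24  /    rw,alldirs,maproot=0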
(3) Initialize the metadata file:
cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs
This produces an 8-byte file; the step is newly required in mfs-1.6.x.
(3) metadata server master startup
The metadata server can be started on its own and works normally even without any data storage server (chunkserver). So once MFS is installed and configured, we can start it: run /usr/local/mfs/sbin/mfsmaster start. Barring surprises, the metadata server should now be running as a daemon. We can check that the MFS master is running in three ways:
1. Check the process
2. Check the network status
3. Check System Logs
MFS writes its logs straight to the system log; when a chunkserver drops out or fails, the corresponding records can be found there. Note that this log is not the same thing as the metadata change log.
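On a typical Linux host, the three checks might look like the following (ports per the default mfsmaster.cfg above; FreeBSD users would substitute sockstat for netstat):
# 1. process check
ps aux | grep mfsmaster
# 2. listening sockets: 9419, 9420 and 9421 should be in LISTEN state
netstat -lnt | grep -E '9419|9420|9421'
# 3. follow the system log
tail -f /var/log/messages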
(4) disable the metadata server
To shut down the metadata server safely, you must use /usr/local/mfs/sbin/mfsmaster -s. If you simply kill the process, the required files may not be found at the next startup and the server will refuse to start, so be careful. That said, if it does happen, the state can still be recovered with mfsmetarestore.
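A minimal sketch of that recovery, assuming the default data path; mfsmetarestore -a rebuilds metadata.mfs automatically from the last saved metadata plus the change logs:
# rebuild the metadata file, then start the master again
/usr/local/mfs/sbin/mfsmetarestore -a
/usr/local/mfs/sbin/mfsmaster start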
Metadata Log Server installation and configuration
The metadata log service is new as of mfs 1.6: metadata change logs can now be kept on the metadata server itself or on a separate machine, and for reliability they are best kept separately. Note that the log data originates on the same server as the metadata daemon (master); the server that backs up the metadata logs acts as its client, fetching log files from the metadata server for backup.
(1) install metalogger
1. Download the source code:
wget http://ncu.dl.sourceforge.net/project/moosefs/moosefs/1.6.11/mfs-1.6.11.tar.gz
2. Unpack: tar zxvf mfs-1.6.11.tar.gz
3. Change directory: cd mfs-1.6.11
4. Create the user: useradd mfs -s /sbin/nologin
5. Configure: ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs
6. Compile and install: make; make install
(2) metalogger Configuration
This service needs only one configuration file: copy it from the template and make one small modification. Below is the configuration of one of my metaloggers:
[root@hynfs-2 etc]# more mfsmetalogger.cfg
# WORKING_USER = mfs
# WORKING_GROUP = mfs
# SYSLOG_IDENT = mfsmetalogger
# LOCK_MEMORY = 0
# NICE_LEVEL = -19
# DATA_PATH = /usr/local/mfs/var/mfs
# BACK_LOGS = 50
# META_DOWNLOAD_FREQ = 24
# MASTER_RECONNECTION_DELAY = 5
MASTER_HOST = 192.168.93.18
MASTER_PORT = 9419
# MASTER_TIMEOUT = 60
# deprecated, to be removed in MooseFS 1.7
# LOCK_FILE = /var/run/mfs/mfsmetalogger.lock
The only modification needed in this file is MASTER_HOST, which must be set to the metadata server's host name or IP address. For convenience, a brief description of the other entries follows:
(1) SYSLOG_IDENT = mfsmetalogger: the identifier the metadata log service writes under in the system log.
(2) DATA_PATH = /usr/local/mfs/var/mfs: the path where files fetched from the metadata server (master) are stored.
(3) BACK_LOGS = 50: keep at most 50 backup logs; beyond that, the logs are rotated. Restoring metadata only ever needs the most recent log files, so the default is sufficient, and the rotation guarantees that the log backups will not fill the whole partition.
(4) META_DOWNLOAD_FREQ = 24: how often, in hours, the metadata backup file is requested. The default of 24 means a metadata.mfs.back file is downloaded from the metadata server (master) once a day. When the metadata server shuts down or crashes, its metadata.mfs.back file disappears, and restoring the whole mfs requires retrieving the file from the metalogger server. Take special care of this file: together with the log files it can restore an entire corrupted distributed file system.
(3) metalogger startup and shutdown
1. The startup goes like this:
/usr/local/mfs/sbin/mfsmetalogger start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
If the metalogger cannot reach the metadata server during startup, an error message is printed.
2. To stop the service, run /usr/local/mfs/sbin/mfsmetalogger stop.
3. Check the service's running state from two angles: the metadata server side, and the data being generated on the log server side.
◆ Check the metadata server's network connections: you should see the log server connected to the metadata server on TCP port 9419.
◆ Check the log server's working directory: normally you will see that files have been generated (fetched from the metadata server). You can manually copy a log file from the metadata server and compare the contents.
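For illustration, the two checks might be run like this:
# on the metadata server: is the metalogger connected on port 9419?
netstat -ant | grep 9419
# on the log server: have metadata/changelog files arrived?
ls -l /usr/local/mfs/var/mfs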
Data storage server chunkserver installation and configuration
The data storage server (chunkserver) can likewise run on all kinds of UNIX platforms. How many servers can an MFS cluster hold? The author puts the capacity at roughly petabyte scale. I suggest using at least three servers, dedicated to storage; do not co-locate a chunkserver with the master (it works in theory, but it is not a good strategy). Since installation and configuration are identical on every data storage server, only one server's steps need describing.
(1) install the data storage server chunkserver
1. Download the source code:
wget http://ncu.dl.sourceforge.net/project/moosefs/moosefs/1.6.11/mfs-1.6.11.tar.gz
2. Unpack: tar zxvf mfs-1.6.11.tar.gz
3. Change directory: cd mfs-1.6.11
4. Create the user: useradd mfs -s /sbin/nologin
5. Configure: ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs
6. Compile and install: make; make install
(2) configure the data storage server chunkserver
The data storage server has two configuration files to modify: the main configuration file mfschunkserver.cfg, and mfshdd.cfg. The space each server contributes to MFS is best a dedicated hard disk or a RAID volume; the bare minimum is a separate partition. The example the author gives, creating one large file and mounting it locally, is not a good idea and is fit only for experiments.
1. Modify the configuration file /usr/local/mfs/etc/mfschunkserver.cfg. After modification it reads:
# WORKING_USER = mfs
# WORKING_GROUP = mfs
# DATA_PATH = /usr/local/mfs/var/mfs
# LOCK_FILE = /var/run/mfs/mfschunkserver.pid
# SYSLOG_IDENT = mfschunkserver
# BACK_LOGS = 50
# MASTER_RECONNECTION_DELAY = 30
MASTER_HOST = 192.168.0.19
MASTER_PORT = 9420
# MASTER_TIMEOUT = 60
# CSSERV_LISTEN_HOST = *
# CSSERV_LISTEN_PORT = 9422
# CSSERV_TIMEOUT = 60
# CSTOCS_TIMEOUT = 60
# HDD_CONF_FILENAME = /usr/local/mfs/etc/mfshdd.cfg
In this configuration file, the lines without a leading "#" are the ones that were modified. The meanings of some entries:
◆ MASTER_HOST = 192.168.0.19: the name or address of the metadata server; a host name or an IP address both work, as long as the data storage server can reach the metadata server.
◆ LOCK_FILE = /var/run/mfs/mfschunkserver.pid: same meaning as the corresponding entry for the metadata server (master).
◆ CSSERV_LISTEN_PORT = 9422: CSSERV stands for chunkserver; this port accepts connections from other data storage servers, usually for data replication.
◆ HDD_CONF_FILENAME = /usr/local/mfs/etc/mfshdd.cfg: the location of the configuration file listing the disk space allocated to MFS.
2. Modify the configuration file /usr/local/mfs/etc/mfshdd.cfg. My server has a single 1 TB SATA disk, one partition of which is set aside for the MFS storage service. For mfs to have write permission on the directory, change its owner: the partition's mount point on my server is /data, so chown -R mfs:mfs /data does it. Since each server contributes only one partition to MFS, the configuration file needs just this one line:
/data
By default the file ships with several example lines; it is best to delete them, since the "#" comment marker does not seem to be handled reliably here.
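Put together, preparing a chunkserver disk might look like the following sketch (the device name /dev/sdb1 is only an example):
# mount the dedicated partition, hand it to mfs, and register it
mount /dev/sdb1 /data
chown -R mfs:mfs /data
echo "/data" > /usr/local/mfs/etc/mfshdd.cfg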
(3) Start the data storage server chunkserver
Run /usr/local/mfs/sbin/mfschunkserver start on the chunkserver to start the data storage daemon, then check its state in the following ways.
1. View the process: ps aux | grep mfschunkserver
2. Check the network status. Normally port 9422 should be in the LISTEN state; if other data storage servers (chunkservers) managed by the same master are running, you should also see their connections to this machine.
3. View the metadata server's system log; you can watch the new data storage server (chunkserver) being registered:
tail -f /var/log/messages
Mar 27 14:28:00 mfs-ctrl mfsmaster[29647]: server 3 (192.168.0.71): usedspace: 65827913728 (61 GB), totalspace: 879283101696 (818 GB), usage: 7.49%
(4) disable the data storage server
As with the metadata server (master), running /usr/local/mfs/sbin/mfschunkserver -s stops the chunkserver service. To have the chunkserver start automatically when the system boots, append the line /usr/local/mfs/sbin/mfschunkserver start to /etc/rc.local (the master's automatic startup can be handled the same way).
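Concretely, the rc.local additions under the paths installed above would be:
# /etc/rc.local on each chunkserver
/usr/local/mfs/sbin/mfschunkserver start
# /etc/rc.local on the master, analogously
/usr/local/mfs/sbin/mfsmaster start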
MFS Client installation and configuration
My production environment has only CentOS and FreeBSD hosts, so only CentOS and FreeBSD clients are attached to the MFS file system here; other UNIX variants should follow later. Compared with the preceding steps, mounting the clients and putting the MFS cluster file system to use is the most time-consuming part.
I. CentOS as the MFS client
(1) install the MFS Client
◆ mfsmount depends on FUSE, so fuse must be installed first; here I chose fuse-2.7.4.tar.gz.
1. Unpack: tar zxvf fuse-2.7.4.tar.gz
2. Change directory: cd fuse-2.7.4
3. Configure: ./configure
4. Compile and install: make; make install
Skip this step if the system has already installed fuse.
◆ Install the MFS client program
1. Modify the environment variable file /etc/profile by appending the line below, then run source /etc/profile to make it take effect:
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
If you skip this step, then during the MFS installation the command ./configure --enable-mfsmount may fail with "checking for FUSE... no" and "configure: error: mfsmount build was forced, but fuse development package is not installed" instead of building the MFS client correctly.
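A quick way to confirm that configure will find fuse is to ask pkg-config directly; if PKG_CONFIG_PATH is set correctly, this prints the installed version:
# should print e.g. 2.7.4
pkg-config --modversion fuse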
2. Unpack: tar zxvf mfs-1.6.11.tar.gz
3. Change directory: cd mfs-1.6.11
4. Create the user: useradd mfs -s /sbin/nologin
5. Configure: ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --enable-mfsmount
6. Compile and install: make; make install
◆ Check the result of the MFS client installation by listing /usr/local/mfs/bin; you should find mfsmount together with the mfs* utilities used below (mfsgetgoal/mfsrsetgoal, mfsgettrashtime/mfsrsettrashtime, and friends).
(2) Mount and use the MFS File System
1. Create a mount point: mkdir /mnt/mfs
2. Mount MFS: /usr/local/mfs/bin/mfsmount /mnt/mfs -H 192.168.0.19. Note that all MFS clients mount from the same metadata server (master), not from the data storage servers (chunkservers)!
3. Check the disk usage to confirm the mount succeeded:
[root@mysql-bk ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1              19G  2.7G   16G  15% /
/dev/hda7              51G  180M   48G   1% /backup
/dev/hdc1             145G  6.4G  131G   5% /data
/dev/hda5              19G  173M   18G   1% /home
/dev/hda3              24G  217M   23G   1% /var
/dev/hda2              29G  1.6G   26G   6% /usr
tmpfs                 1.7G     0  1.7G   0% /dev/shm
mfs#192.168.0.19:9421 2.5T  256G  2.2T  11% /mnt/mfs
4. Go into /mnt/mfs and copy a file in; does it behave normally? Then create a file by hand with touch and delete it again, checking that each operation succeeds.
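A smoke test along those lines might be:
cd /mnt/mfs
cp /etc/services .        # copy an ordinary file in
touch testfile            # create a file by hand
rm -f testfile services   # and clean up again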
5. Set the number of file copies; three copies are recommended.
Set the number of replicas:
mfsrsetgoal 3 /mnt/mfs
Check that the setting is as expected:
mfsgetgoal /mnt/mfs/serydir/bind-9.4.0.tar.gz
/mnt/mfs/serydir/bind-9.4.0.tar.gz: 3
6. Set the reclaim delay for deleted files. The default is 7 days (604800 seconds); to change it to 10 minutes:
mfsrsettrashtime 600 /mnt/mfs
7. Append the mount command to /etc/rc.local so that MFS is mounted automatically at startup.
II. FreeBSD as the MFS client
Installing and mounting the MFS cluster file system on FreeBSD is more involved than on CentOS: mfsmount depends on fuse, and the fusefs module must also be loaded into the kernel.
(1) install fuse
1. Unpack: tar zxvf fuse-2.7.4.tar.gz
2. Change directory: cd fuse-2.7.4
3. Configure: ./configure
4. Compile and install: make; make install
Skip this step if the system has already installed fuse.
(2) install the kernel module fusefs-kmod
1. Execute the system command sysinstall
2. Select Configure with the cursor to go to the next step.
3. Select Packages to go to the next step.
4. Select "FTP" as the installation source to go to the next step.
5. Select "kld" and press enter to execute the default action "[OK]" to go to the next step to select the software package.
6. Select fusefs-kmod-0.3.9.p1_2 and press [OK]; this returns to the screen from step 4. Use the Tab key to select Install at the bottom right. Once installation completes, a success message is displayed briefly and then disappears.
◆ Load the fusefs module: kldload /usr/local/modules/fuse.ko. If loading fails, check whether the module file fuse.ko exists.
◆ Check whether the fusefs module has been loaded into the kernel; if fuse does not appear in the output, the module was not loaded successfully.
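The stock kldstat tool is enough for this check; fuse.ko should appear in the loaded-module list:
kldstat | grep fuse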
(3) Install the pkg-config package:
1. cd /usr/ports/devel/pkg-config
2. make install clean
(4) install the MFS Client
1. Unpack: tar zxvf mfs-1.6.11.tar.gz
2. Change directory: cd mfs-1.6.11
3. Create the user: pw useradd mfs -s /sbin/nologin
4. Configure: ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --enable-mfsmount
5. Compile and install: make; make install
◆ Check the result of the MFS client installation by listing /usr/local/mfs/bin; as on CentOS, mfsmount and the mfs* utilities should be present.
(5) Mount and use the MFS File System
1. Create a mount point: mkdir /mnt/mfs
2. Mount MFS: /usr/local/mfs/bin/mfsmount /mnt/mfs -H 192.168.0.19. Note that all MFS clients mount from the same metadata server (master), not from the data storage servers (chunkservers)!
3. Check the disk usage to confirm the mount succeeded:
[root@mysql-bk ~]# df -h
Filesystem     Size  Used Avail Capacity  Mounted on
/dev/ad4s1a     26G  570M   24G     2%    /
devfs          1.0K  1.0K    0B   100%    /dev
/dev/ad4s1g    356G  157G  170G    48%    /data
/dev/ad4s1f     17G  215M   15G     1%    /home
/dev/ad4s1d     28G  1.1G   25G     4%    /usr
/dev/ad4s1e     24G  362M   21G     2%    /var
/dev/fuse0     2.5T  256G  2.2T    11%    /mnt/mfs
4. Go into /mnt/mfs; the files uploaded to the distributed file system MFS from the CentOS client in the previous section are all there.
5. Set the number of file copies; three copies are recommended.
Set the number of replicas:
mfsrsetgoal 3 /mnt/mfs
Check that the setting is as expected:
mfsgetgoal /mnt/mfs/serydir/bind-9.4.0.tar.gz
/mnt/mfs/serydir/bind-9.4.0.tar.gz: 3
6. Set the reclaim delay for deleted files. The default is 7 days (604800 seconds); to change it to 10 minutes:
mfsrsettrashtime 600 /mnt/mfs
(6) automatic mounting of MFS
Create the file /etc/rc.local and add the following content:
#!/bin/sh
/sbin/kldload /usr/local/modules/fuse.ko
/usr/local/mfs/bin/mfsmount /mnt/mfs -H 192.168.0.19
The system will then mount the MFS file system automatically whenever it boots or reboots.
Destructive Testing
I. Test the data storage server
I used five servers for the MFS storage platform: one master and four chunkservers. Stop one chunkserver service, then on an MFS client copy data into the mount point (/mnt/mfs), create directories and files, read files, and delete files, checking that everything works. Stop a second chunkserver and repeat the operations; then stop a third and run similar file reads. After testing with chunkservers removed, add chunkservers back one by one, again performing reads, writes, and other accesses against MFS to verify correctness.
Growing and shrinking the chunkserver pool this way showed that service reliability is indeed good: even with only the last server left, storage access carried on normally.
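One way to make such a test repeatable is to record checksums before stopping anything and verify them after each change; a sketch for a Linux client (paths are illustrative):
# before stopping any chunkserver: seed some data and record checksums
cp -r /usr/share/doc /mnt/mfs/test
find /mnt/mfs/test -type f -exec md5sum {} \; > /tmp/sums.before
# ... stop or add chunkservers one at a time, then verify reads each time
md5sum -c /tmp/sums.before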
II. Test the metadata server
The most important files on the metadata server live in /usr/local/mfs/var/mfs. Every change of data in MFS is recorded in the files in this directory, so backing up all of them is enough to guarantee that the whole MFS file system can be recovered. Under normal circumstances, the master's change logs (changelogs) are also replicated in real time to the metadata log server, where they appear as changelog_ml.*.mfs files. In other words, even if the metadata server is a total write-off, a fresh metadata server can be deployed and the files needed for recovery fetched from the log server.
(1) local test
1. Stop the metadata service: /usr/local/mfs/sbin/mfsmaster -s
2. Back up the metadata server's data: cd /usr/local/mfs/var; tar czvf mfs.tgz mfs
3. Remove the data directory: mv mfs mfs.bk, or rm -rf mfs
4. Try to start the metadata service: ../sbin/mfsmaster start fails, complaining that the data cannot be initialized.
5. Unpack the backup: tar zxvf mfs.tgz
6. Run the restore: ../sbin/mfsmetarestore -a
7. Start the metadata service: ../sbin/mfsmaster start
8. On an MFS client, check whether the data stored in MFS matches what was there before the recovery, and whether access works normally.
(2) migration test
1. Install a fresh MFS metadata server.
2. Copy the backup file metadata.mfs.back (or metadata_ml.mfs.back from the log backup server, mfsmetalogger) to the new metadata server (metadata.mfs.back needs to be backed up regularly, e.g. with crontab).
3. Copy the metadata server's data directory (/usr/local/mfs/var/mfs) from the current metadata server (master) or the log backup server (mfsmetalogger) to the new metadata server.
4. Stop the original metadata server (shut the machine down or stop its network service).
5. Change the new metadata server's IP address to that of the original server.
6. Run mfsmetarestore -m metadata.mfs.back -o metadata.mfs changelog_ml.*.mfs to merge the change logs into the metadata file.
7. Start the new metadata service: /usr/local/mfs/sbin/mfsmaster start
8. On an MFS client, check whether the data stored in MFS matches what was there before, and whether access works normally.
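Step 2 mentions backing up metadata.mfs.back regularly with crontab; a minimal sketch of such an entry on the master (the backup host and destination path are assumptions for illustration):
# copy the metadata snapshot to a backup host once an hour
0 * * * * /usr/bin/scp /usr/local/mfs/var/mfs/metadata.mfs.back backuphost:/backup/mfs/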
References:
http://bbs.chinaunix.net/thread-1643863-1-1.html
http://sery.blog.51cto.com/10037/263515/
http://bbs.chinaunix.net/forum-240-1.html