Detailed description of real-time data mirroring in Linux

Summary

In this article, we will discuss how to replicate data in Linux without an expensive SAN (Storage Area Network) running a cluster file system such as GFS, and without a network block device. Our replication system uses FAM (the File Alteration Monitor) and IMON (the inode monitor kernel module). Both were originally developed for IRIX by SGI.

The SGI folks are really cool: they ported both programs to Linux and published the source code.

When cost is no object, you can implement real-time data mirroring with GFS (Global File System) on a SAN; otherwise, you have to turn to data sharing or one of several other alternatives.

There are several methods to choose from. In this article we walk through them, and you will see the advantages and disadvantages of each.

   Why replace sharing with replication?

Isn't a file server supposed to share data with its clients? Yes, most working environments are set up exactly that way. But if the file server shares files through software such as NFS or SMB, it becomes both a bottleneck and a single point of failure. If the data is shared through a shared storage device (a SAN or multi-channel SCSI) running GFS, the configuration is not only expensive but is still a single point of failure. You can also use NBD (network block device) to build a network mirror, but this is not a common approach: NBD has limitations of its own and is difficult to set up and manage. If all you want is to replicate data between a few web servers, NBD will only bring you more trouble.

   As simple as possible

OK, let's try replication.

Solution 1:

One of our two web servers is the master and the other is the backup. Keeping the files on the backup server identical to those on the master is easy.

But how do we make it happen automatically? Users upload data to the master server via FTP several times a day. What happens when the master fails and the backup takes over? Because the backup copy is not made in real time, the data on the backup server is bound to be out of sync with the master, and the administrator will not be happy. Of course, you could run "rsync -av --delete source destination" from a timer daemon every 5 seconds, but that adds constant load to the machine and hurts the performance of the system.
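To see why this brute-force approach scales badly, here is a minimal sketch of such a timer daemon in C. It is an illustration only: the paths and the rsync destination are assumptions, and the point is that the whole tree is pushed on every cycle whether or not anything changed.

    /* naive-sync.c - a sketch of the "every 5 seconds" timer daemon.
     * The paths and the rsync destination are illustrative assumptions.
     * Build: gcc -o naive-sync naive-sync.c
     */
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            /* Push the whole tree every cycle, changed or not. */
            system("rsync -av --delete /var/www/ backup:/var/www/");
            sleep(5);   /* constant load even when nothing changed */
        }
        return 0;       /* never reached */
    }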

Solution 2:

A single FTP server holds and updates the web data, while six web servers behind DNS round-robin provide load balancing. The data on every web server has to be kept identical. This lets us avoid NFS, but the solution is still not satisfactory.

So what is the ideal solution? Copy a file to each web server only when it changes; if a file does not change, do nothing at all. This is exactly what we use FAM for.
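As a taste of what that looks like, here is a minimal sketch using the standard libfam client API in C. The watched directory and the rsync push are illustrative assumptions; a real replicator would batch events and copy only the files that changed.

    /* fam-watch.c - a sketch of reacting to changes with FAM.
     * Uses the standard libfam client calls; the directory and the
     * rsync command are illustrative assumptions.
     * Build: gcc -o fam-watch fam-watch.c -lfam   (famd must be running)
     */
    #include <fam.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FAMConnection fc;
        FAMRequest fr;
        FAMEvent fe;

        if (FAMOpen(&fc) < 0) {          /* connect to the famd daemon */
            fprintf(stderr, "cannot connect to famd\n");
            return 1;
        }
        if (FAMMonitorDirectory(&fc, "/var/www", &fr, NULL) < 0) {
            fprintf(stderr, "cannot monitor /var/www\n");
            return 1;
        }

        /* Block until FAM (fed by imon in the kernel) reports a change. */
        while (FAMNextEvent(&fc, &fe) >= 0) {
            switch (fe.code) {
            case FAMChanged:
            case FAMCreated:
            case FAMDeleted:
                printf("%s changed, pushing to mirrors\n", fe.filename);
                system("rsync -av --delete /var/www/ backup:/var/www/");
                break;
            default:
                break;   /* ignore FAMExists, FAMEndExist, etc. */
            }
        }
        FAMClose(&fc);
        return 0;
    }

Because FAMNextEvent() blocks, the process sleeps until something actually changes, so unlike the timer-daemon approach it adds no load while the tree is idle.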