MongoDB Replication (Synchronization) Implementation

Source: Internet
Author: User

I. Master-Slave Replication

• A. Master server: ~/work/mongodb-osx-x86_64-2.6.2/bin/mongod --dbpath ./db/ --logpath ./log --logappend --fork --port 27017 --master --oplogSize 64

• B. Slave server: ./mongod --dbpath ./db/ --logpath ./log/s.log --fork --port 27018 --slave --slavedelay 5 --autoresync --source localhost:27017

• C. Startup options

• --master: run this instance as a master node

• --slave: run this instance as a slave node

• --fastsync: start the slave from a snapshot of the master's data; starting with this option is much faster than doing a full initial sync

• --oplogSize arg: size of the master node's oplog, in MB

• --source arg <serverIp:port>: used on the slave node to specify the master node's address

• --only arg: on the slave node, replicate only the named database; all databases are replicated by default

• --slavedelay arg: delay, in seconds, applied when the slave synchronizes the master's data

• --autoresync: automatically resynchronize if the slave falls out of sync with the master

• D. Adding a slave dynamically

Instead of specifying the master's address at startup, a slave can add or remove its master dynamically from the shell (against the slave's local database) after it starts:

db.sources.insert({'host': 'ip:port'});

db.sources.remove({'host': 'ip:port'});

db.sources.find()

• E. Note:

Note that the master and slave cannot share the same data directory or port. MongoDB also supports a one-master, multiple-slave model, but many slaves can hurt the master's performance. Dual-master mode is supported as well, but in that mode synchronization delay can cause one side's data to be overwritten, so use it flexibly according to your situation.

II. Replica Set

A replica set provides automatic failover and recovery, ensuring the cluster always has one active node (primary) and one or more backup nodes (secondary).

• A. Startup

1. ./mongod --dbpath /data/db1/ --logpath mo1.log --replSet shard1 --port 27017

2. ./mongod --dbpath /data/db2/ --logpath mo2.log --replSet shard1 --port 27018

--replSet specifies the name of the replica set, which lets different instances join the same set. Two instances are started above; of course you can add more. A freshly started replica set is not yet usable: it must be initialized first.

• B. Initialize

In the shell, connect to either of the two instances above and initialize the set by typing the following commands:

1. cfg = {_id: 'shard1', members: [

2. {_id: 0, host: '127.0.0.1:27017'},

3. {_id: 1, host: '127.0.0.1:27018'}]

4. }

5. rs.initiate(cfg)

cfg is the configuration document: _id specifies the replica set name (it must match the --replSet value) and members lists the instances in the set. rs.initiate(cfg) runs the initialization and returns whether it succeeded, along with related information. After successful initialization, the shell prompt shows whether you are connected to the primary or a secondary; you can also use rs.status() to view status information. The replica set configuration is stored in the local.system.replset collection.
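Since the configuration is just a plain document, it can also be built programmatically. A minimal sketch in plain JavaScript (buildReplSetConfig is an illustrative helper, not a MongoDB API):

```javascript
// Build the same cfg document as above from a list of hosts.
// Member _id values are assigned from the array index.
function buildReplSetConfig(name, hosts) {
  return {
    _id: name,                                            // replica set name; must match --replSet
    members: hosts.map((host, i) => ({ _id: i, host })),  // one entry per instance
  };
}

const cfg = buildReplSetConfig('shard1', ['127.0.0.1:27017', '127.0.0.1:27018']);
// In the mongo shell you would now run: rs.initiate(cfg)
console.log(JSON.stringify(cfg));
```

This keeps member _id values unique and consecutive, which is what the hand-written document above does by hand.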

Log in to a secondary node and run rs.slaveOk() first; otherwise normal queries are rejected.

• C. Adding nodes

./mongod --dbpath /data/db3/ --logpath mo3.log --replSet shard1 --port 27019

Then log in to the primary node and execute the following commands:

1. rs.add('127.0.0.1:27019')

2. rs.reconfig(rs.conf())

• D. Node types

• standard: a regular node that stores a complete copy of the data, votes in elections, and can become the active node;

• passive: stores a complete copy of the data and votes in elections, but cannot become the active node;

• arbiter: only votes in elections; it stores no data and cannot become the active node.

Each node has a priority property in the range 0–1000 (default 1). Its value determines whether the node is standard or passive: a priority of 0 makes it passive; any other value makes it standard.

Among standard nodes, priority decides, from highest to lowest, which one becomes the new active node; if several nodes share the same priority, the one with the most recent data wins.

When adding a node, set its arbiterOnly property to true to make it an arbiter.
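The election rule described above can be sketched as a small function. This is a simplified model for illustration, not MongoDB's actual election code; pickNewPrimary, opTime, and the sample hosts are all assumptions made for the example:

```javascript
// Model: standard nodes have priority > 0; passive nodes have priority 0;
// arbiters are excluded. Highest priority wins; ties go to the freshest
// data, represented here as a simple opTime number.
function pickNewPrimary(nodes) {
  const standard = nodes.filter(n => n.priority > 0 && !n.arbiterOnly);
  standard.sort((a, b) => (b.priority - a.priority) || (b.opTime - a.opTime));
  return standard[0]; // undefined if only passive/arbiter nodes remain
}

const nodes = [
  { host: '127.0.0.1:27017', priority: 1, opTime: 100 },
  { host: '127.0.0.1:27018', priority: 1, opTime: 120 }, // same priority, newer data
  { host: '127.0.0.1:27019', priority: 0, opTime: 130 }, // passive: never primary
];
console.log(pickNewPrimary(nodes).host); // 127.0.0.1:27018
```

Note how the passive node is skipped even though its data is newest, matching the rule that priority 0 can never become active.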

III. The Principle of Replication

The master node records its write operations in the oplog. When a slave starts, it first copies every document from the master, then fetches the master's oplog and replays the operations in order to keep the data in sync. The oplog is saved in a capped collection in the local database. Each slave tracks a syncedTo value recording the last oplog entry it applied; the master can inspect its slaves with db.slaves.find() and a slave can inspect its sources with db.sources.find(), so the slave knows where to resume the next synchronization.
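A hedged sketch of this sync loop, with the oplog modeled as an array and the data set as a Map (applyOplog and the entry format are illustrative assumptions, not mongod's internals):

```javascript
// Replay oplog entries in timestamp order, skipping anything at or before
// the syncedTo marker, and return the new marker so the next pass can
// resume where this one left off.
function applyOplog(store, oplog, syncedTo) {
  for (const entry of oplog) {
    if (entry.ts <= syncedTo) continue;                    // already applied earlier
    if (entry.op === 'i') store.set(entry._id, entry.doc); // insert/update a document
    if (entry.op === 'd') store.delete(entry._id);         // delete a document
    syncedTo = entry.ts;                                   // advance the marker
  }
  return syncedTo;
}

const store = new Map();
const oplog = [
  { ts: 1, op: 'i', _id: 'a', doc: { x: 1 } },
  { ts: 2, op: 'i', _id: 'b', doc: { x: 2 } },
  { ts: 3, op: 'd', _id: 'a' },
];
const syncedTo = applyOplog(store, oplog, 0);
console.log(syncedTo, store.size); // 3 1
```

Calling applyOplog again with the same oplog and the returned marker applies nothing, which is why tracking syncedTo lets the slave resume cheaply instead of re-copying everything.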

