MongoDB provides failover and redundancy across multiple machines through asynchronous replication. Only one machine accepts write operations at any given time, which is how MongoDB guarantees data consistency. The machine acting as primary can distribute read operations to the slaves.
MongoDB high availability is divided into two types:
Master-slave replication
You only need to start one service with the --master parameter, and another service with the --slave and --source parameters, to achieve synchronization. This scheme is no longer recommended in recent versions of MongoDB.
Replica sets
MongoDB introduced replica sets as a new feature in version 1.6. They are more powerful than the earlier replication features, adding automatic failover and automatic recovery of member nodes. The data on every node is exactly the same, which greatly reduces maintenance costs. Auto-sharding explicitly does not support replica pairs; replica sets are recommended instead.
The structure of a replica set is very similar to a cluster. In fact, you can think of it as a cluster, because it performs the same function: when one node fails, another node immediately takes over the work without any downtime.
Node types:
Standard: A regular node that stores a complete copy of the data, participates in election voting, and can become the primary node.
Passive: Stores a complete copy of the data and participates in voting, but cannot become the primary node.
Arbiter: The quorum node; it only participates in voting, does not hold a copy of the data, and cannot become the primary node.
A replica set should preferably have an odd number of nodes.
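Why an odd number? A primary needs a strict majority of the voting members, so adding a fourth node raises the size of the required majority without raising the number of failures the set can survive. A minimal Python sketch of that arithmetic (illustrative only, not MongoDB code):

```python
# Sketch: majority and fault tolerance for a given number of voting
# members. Shows why 3 nodes and 4 nodes tolerate the same number of
# failures, so the fourth node buys nothing for availability.

def majority(voters: int) -> int:
    """Smallest number of votes that is a strict majority."""
    return voters // 2 + 1

def fault_tolerance(voters: int) -> int:
    """How many members can fail while a majority can still be formed."""
    return voters - majority(voters)

for n in (3, 4, 5):
    print(f"{n} voters: majority={majority(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

With 3 voters a majority is 2 and one failure is survivable; with 4 voters a majority is 3, so still only one failure is survivable.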
Configuration example (three nodes)
Two standard nodes (these two nodes can switch between the primary and secondary roles).
One arbiter node, which holds the deciding vote and determines which of the two standard nodes above becomes primary.
Configuration steps:
1. Start three nodes
Start the first standard node (ip:192.168.0.11)
/mongodb/bin/mongod --dbpath=/mongodb/mongodb_date --logpath=/mongodb/mongodb_log/mongod.log --port 27017 --replSet test/192.168.0.12:27017 --maxconns=2000 --logappend
Start the second standard node (ip:192.168.0.12)
/mongodb/bin/mongod --dbpath=/mongodb/mongodb_date --logpath=/mongodb/mongodb_log/mongod.log --port 27017 --replSet test/192.168.0.11:27017 --maxconns=2000 --logappend
Start the arbiter (quorum) node (ip:192.168.0.13). Note: --replSet test/ is followed by the IPs and ports of both standard nodes.
/mongodb/bin/mongod --dbpath=/mongodb/mongodb_date --logpath=/mongodb/mongodb_log/mongod.log --port 27017 --replSet test/192.168.0.11:27017,192.168.0.12:27017 --logappend
Parameter description:
--dbpath data file path
--logpath log file path
--port port number; the default is 27017, which is what is used here.
--replSet the name of the replica set; every node in the same replica set is started with the same set name, here test.
--replSet test/ followed by the IP and port of the other standard node(s)
--maxconns maximum number of connections
--fork run in the background
--logappend append to the existing log file on restart instead of overwriting it.
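The three startup commands above differ only in the peer list and (for the arbiter) the absence of --maxconns. As a sketch, they can be assembled from one helper; the paths and set name below mirror the example, and the function itself is just string assembly, not a MongoDB API:

```python
# Sketch: compose the mongod startup command line used in this example
# from its parameters. Hypothetical helper, for illustration only.

def mongod_command(dbpath, logpath, port, replset, peers, maxconns=None):
    parts = [
        "/mongodb/bin/mongod",
        f"--dbpath={dbpath}",
        f"--logpath={logpath}",
        f"--port {port}",
        f"--replSet {replset}/{','.join(peers)}",   # peers: the other members
    ]
    if maxconns is not None:
        parts.append(f"--maxconns={maxconns}")
    parts.append("--logappend")
    return " ".join(parts)

# First standard node (192.168.0.11) points at its peer 192.168.0.12:
cmd = mongod_command("/mongodb/mongodb_date", "/mongodb/mongodb_log/mongod.log",
                     27017, "test", ["192.168.0.12:27017"], maxconns=2000)
print(cmd)
```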
2. A critical step: with the nodes configured as above, initialize each node. On the second node started, run the mongo shell:
db.runCommand({
    "replSetInitiate": {
        "_id": "test",
        "members": [
            { "_id": 0, "host": "192.168.0.11:27017" },
            { "_id": 1, "host": "192.168.0.12:27017" }
        ]
    }
})
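The replSetInitiate document has a simple shape: the set name and a members array whose _id values are small unique integers. A sketch that builds the same document programmatically (plain dict manipulation, not a driver call):

```python
# Sketch: build the replSetInitiate document for a list of hosts.
# Member _id values are simply the array indexes here.

def init_doc(set_name, hosts):
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

doc = init_doc("test", ["192.168.0.11:27017", "192.168.0.12:27017"])
print(doc)
```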
3. Add the arbiter node
primary> rs.addArb("192.168.0.13:27017")
Attention!!!
Before adding a new node, be sure to configure the firewall to open the corresponding IP and port.
To add a normal data node:
primary> rs.add("ip:port")
To delete a node:
primary> rs.remove("ip:port")
To show which node is currently the primary:
primary> rs.isMaster()
Modify a normal data node to be a passive node
Apart from the quorum node, every node has a priority, and we can set the priority to control who has the most weight in becoming primary. In MongoDB replica sets, this is done by setting the priority value; here its range is 0 to 100 (in current MongoDB versions the range is 0 to 1000), and the higher the value, the greater the priority. A node with priority 0 can never become primary.
1. View the list of nodes through the rs.conf () command
primary> rs.conf()
{
    "_id" : "test",
    "version" : 22,
    "members" : [
        { "_id" : 3, "host" : "192.168.22.36:27017" },
        { "_id" : 5, "host" : "192.168.22.10:27017" },
        { "_id" : 6, "host" : "192.168.22.12:27017", "priority" : 0, "arbiterOnly" : true },
        { "_id" : 7, "host" : "192.168.22.115:27017" }
    ]
}
2. Change the priority of the 192.168.22.10 node above to 0, so that it only receives data and does not compete to become primary.
The command format is as follows:
cfg = rs.conf()
cfg.members[0].priority = 0.5
cfg.members[1].priority = 2
cfg.members[2].priority = 2
rs.reconfig(cfg)
The number in the square brackets is the node's position in the rs.conf() output: the first node is index 0, the second is index 1, and so on.
Execute command
cfg = rs.conf()
cfg.members[1].priority = 0
rs.reconfig(cfg)
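Conceptually, rs.reconfig(cfg) replaces the set's configuration document with the edited copy and bumps its version number. A sketch of that effect in plain Python (dict manipulation only; the field names mirror the rs.conf() output above):

```python
# Sketch: what the priority edit plus rs.reconfig(cfg) conceptually does
# to the config document. Not a driver call; the version bump on
# reconfiguration is an assumption stated here for illustration.

def set_priority(cfg, index, priority):
    new_cfg = {**cfg, "members": [dict(m) for m in cfg["members"]]}
    new_cfg["members"][index]["priority"] = priority
    new_cfg["version"] = new_cfg.get("version", 1) + 1  # reconfig bumps version
    return new_cfg

cfg = {"_id": "test", "version": 22, "members": [
    {"_id": 3, "host": "192.168.22.36:27017"},
    {"_id": 5, "host": "192.168.22.10:27017"},
]}
new_cfg = set_priority(cfg, 1, 0)   # member at index 1 becomes passive
print(new_cfg)
```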
3. View the result: the priority of 192.168.22.10 has been changed to 0.
primary> rs.conf()
{
    "_id" : "test",
    "version" : 22,
    "members" : [
        { "_id" : 3, "host" : "192.168.22.36:27017" },
        { "_id" : 5, "host" : "192.168.22.10:27017", "priority" : 0 },
        { "_id" : 6, "host" : "192.168.22.12:27017", "priority" : 0, "arbiterOnly" : true },
        { "_id" : 7, "host" : "192.168.22.115:27017" }
    ]
}
MongoDB notes: replica sets