I. The concept of a replica set
What is a replica? A familiar analogy is the game "instance" (in Chinese, the same word as "replica"): each player gets a separate copy of the game environment, and replicating that environment is the essence of a replica. MongoDB likewise supports replication: a replica set keeps multiple copies of the data, so the database remains fault tolerant even if one replica goes down, and it supports automatic election and failover between replicas. This solves the manual-failover problem of master-slave replication discussed in the previous post; master-slave replication maintains only a single extra copy and lacks scalability and fault tolerance.
II. The principle of a replica set, with a diagram
The application server (the client) connects to MongoDB's replica set as a whole. One primary node in the replica set handles all reads and writes for the set, while the replica (secondary) nodes periodically synchronize the primary's data and oplog to stay consistent. If the primary goes down, the replica nodes detect it through the heartbeat mechanism, and a new primary is elected according to the node priorities set when the replica set was created, guaranteeing high availability. The client therefore does not need to care about the health of individual members, and the MongoDB cluster can remain highly available over the long term.
The replica set provides MongoDB's automatic response to cluster failure; it is scalable and fault tolerant, and it is the cluster solution strongly recommended by MongoDB.
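The priority-based failover described above can be sketched in Python. This is an illustrative simulation only, not MongoDB's actual election protocol; the hosts and priorities mirror the example replica set configured later in this post:

```python
# Illustrative sketch of priority-based primary election (not MongoDB's
# real election protocol): among healthy members, the one with the
# highest priority becomes primary.

def elect_primary(members):
    """members: list of dicts with 'host', 'priority', 'healthy'."""
    candidates = [m for m in members if m["healthy"]]
    if not candidates:
        return None  # no healthy member, so no primary
    return max(candidates, key=lambda m: m["priority"])["host"]

members = [
    {"host": "localhost:1001", "priority": 3, "healthy": True},
    {"host": "localhost:1002", "priority": 2, "healthy": True},
    {"host": "localhost:1003", "priority": 1, "healthy": True},
]

print(elect_primary(members))  # highest priority wins: localhost:1001

members[0]["healthy"] = False  # simulate the primary going down
print(elect_primary(members))  # failover to next priority: localhost:1002
```

The key point the sketch captures is that failover is automatic: the client keeps talking to the replica set while the surviving members re-elect among themselves.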
III. Using a replica set
Take an odd number of nodes, three, as an example; the addresses, ports, and log and data storage paths are as follows:
Node 1
Address: localhost:1001
Log storage path: E:\replset\logs\node1\log1.txt
Data storage path: E:\replset\db\node1
Node 2
Address: localhost:1002
Log storage path: E:\replset\logs\node2\log2.txt
Data storage path: E:\replset\db\node2
Node 3
Address: localhost:1003
Log storage path: E:\replset\logs\node3\log3.txt
Data storage path: E:\replset\db\node3
The commands to start the three nodes are as follows; the node intended to be the primary must be declared as such:
Start Node 1:
mongod --dbpath E:\replset\db\node1 --logpath E:\replset\logs\node1\log1.txt --logappend --port 1001 --replSet yukai/localhost:1002 --master
Start Node 2:
mongod --dbpath E:\replset\db\node2 --logpath E:\replset\logs\node2\log2.txt --logappend --port 1002 --replSet yukai/localhost:1001
Start Node 3:
mongod --dbpath E:\replset\db\node3 --logpath E:\replset\logs\node3\log3.txt --logappend --port 1003 --replSet yukai/localhost:1001,localhost:1002
Explanation of the startup command options:
--dbpath: data storage path
--logpath: log storage path
--logappend: append to the log file instead of overwriting it
--replSet: declares that the node belongs to a replica set, followed by the replica set name/addresses of other nodes in the set (a replica set requires at least two nodes: one primary, one secondary)
--master: declares this node as the primary
Initialize the replica set (this can be done only once):
mongo localhost:1001/admin
db.runCommand({
    "replSetInitiate": {
        "_id": "yukai",
        "members": [
            { "_id": 1, "host": "localhost:1001", "priority": 3 },
            { "_id": 2, "host": "localhost:1002", "priority": 2 },
            { "_id": 3, "host": "localhost:1003", "priority": 1 }
        ]
    }
});
Explanation of the initialization command:
db.runCommand: executes a database command
replSetInitiate: the replica set initialization document (JSON)
_id: the replica set name, which must match the name used when starting the nodes
host: a replica set member's address
priority: the priority for election as the new primary
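For reference, the initialization document above can also be generated programmatically, for example before passing it to a driver's command API. The sketch below uses a hypothetical helper, replset_config, which assigns priorities in descending order so that earlier members are preferred as primary:

```python
# Build a replSetInitiate configuration document like the one shown above.
# replset_config is a hypothetical helper, not part of any MongoDB driver;
# priorities are assigned in descending order so earlier hosts are
# preferred as primary.

def replset_config(name, hosts):
    members = [
        {"_id": i + 1, "host": host, "priority": len(hosts) - i}
        for i, host in enumerate(hosts)
    ]
    return {"replSetInitiate": {"_id": name, "members": members}}

cfg = replset_config("yukai",
                     ["localhost:1001", "localhost:1002", "localhost:1003"])
print(cfg["replSetInitiate"]["members"][0])
# {'_id': 1, 'host': 'localhost:1001', 'priority': 3}
```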
Verification
Log in to the primary node (the node with the highest priority) and run the command that checks whether it is the primary: db.$cmd.findOne({ismaster:1});
Shut down the primary node, log in to the node with the second-highest priority, and run the same check: db.$cmd.findOne({ismaster:1}); When the primary goes down, the highest-priority surviving node is elected as the new primary and continues to handle reads and writes, providing continuous service and keeping the cluster highly available.
IV. Arbiter nodes and their role
When the number of voting nodes in the cluster is even, the voting mechanism decides who becomes primary based on factors such as the most recent data operation, the latest oplog timestamp, and priority. If these factors are equal and the votes are tied, the election round stalls and waiting for the next round can take minutes, which is unacceptable to clients.
The arbiter node breaks this deadlock. It needs few system resources and holds no data itself, but it participates in voting and effectively resolves the contention between secondaries competing to become primary.
When deploying arbiter nodes in a cluster, add the attribute arbiterOnly: true to those members when initializing the replica set.
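The tie-breaking role described above can be illustrated with a toy vote count. Again, this is a simplification of the real election protocol; the arbiter at the hypothetical address localhost:1004 contributes a vote but is never itself a candidate:

```python
# Toy illustration of how an arbiter breaks a tie between two
# data-bearing nodes (a simplification of the real election protocol).

def count_votes(voters, candidate):
    # Each voter votes for the candidate it prefers; the arbiter
    # holds no data and simply contributes its one vote.
    return sum(1 for v in voters if v["prefers"] == candidate)

# Two data nodes each vote for themselves: a 1-1 tie, i.e. a deadlock.
data_nodes = [
    {"host": "localhost:1001", "prefers": "localhost:1001"},
    {"host": "localhost:1002", "prefers": "localhost:1002"},
]
assert count_votes(data_nodes, "localhost:1001") == \
       count_votes(data_nodes, "localhost:1002")

# Adding an arbiter that votes for localhost:1001 breaks the tie.
arbiter = {"host": "localhost:1004", "prefers": "localhost:1001"}
voters = data_nodes + [arbiter]
print(count_votes(voters, "localhost:1001"))  # 2
print(count_votes(voters, "localhost:1002"))  # 1
```

With an odd total number of votes, a strict majority always exists, which is why the arbiter resolves the even-node stalemate.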