About Replica Sets
A replica set is a group of mongod processes that keep the same data set synchronized across multiple machines.
Replica sets provide data redundancy and increase data availability. Keeping the data on multiple servers avoids data loss when a single server fails.
Replication also protects against hardware failures and service outages: the additional copies of the data can be dedicated to disaster recovery or backup, so you do not depend on a single machine.
In some scenarios a replica set can also increase read capacity, because clients can send read and write operations to different servers.
Keeping copies in different data centers improves data locality and availability for distributed applications.
A MongoDB replica set is a group of MongoDB instances that hold the same data. The primary accepts all writes from clients, and the other instances apply the primary's operations to keep their data synchronized.
A replica set can have only one primary: to maintain data consistency, only one instance accepts writes. The primary records its operations in a log called the oplog.
Client Application / Driver
      writes      reads
         \          /
           Primary
         /          \
   replication   replication
       /              \
  Secondary        Secondary
Secondary nodes replicate the primary's oplog and apply the operations to their own copies of the data, so a secondary is a reflection of the primary's data set. If the primary becomes unavailable, the replica set elects a new primary. By default, read operations go to the primary, but you can set a read preference to direct reads to secondary nodes.
You can add an extra arbiter node (which holds no data and cannot be elected) to keep the number of voting nodes odd, ensuring that elections can produce a clear majority. An arbiter does not require dedicated hardware.
An arbiter always remains an arbiter; it never becomes a data-bearing member.
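The arithmetic behind keeping an odd number of voting members can be sketched in a few lines. The helper functions below are illustrative only (they are not part of MongoDB); they show that adding a fourth voting member raises the number of votes needed to win an election without letting the set survive any additional failures:

```javascript
// Illustrative helpers (not a MongoDB API): election majority math.

// Votes needed to win an election among n voting members.
function majority(n) {
  return Math.floor(n / 2) + 1;
}

// Member failures the set can tolerate while still electing a primary.
function faultTolerance(n) {
  return n - majority(n);
}

console.log(majority(3), faultTolerance(3)); // 3 voters: 2 votes to win, tolerates 1 failure
console.log(majority(4), faultTolerance(4)); // 4 voters: 3 votes to win, still tolerates only 1
console.log(majority(5), faultTolerance(5)); // 5 voters: 3 votes to win, tolerates 2
```

This is why an arbiter is used to round an even membership up to an odd count: it adds a vote without storing data.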
1. Asynchronous replication
Secondary nodes apply the primary's operations asynchronously, which means a replica set may not return the most recent data to client applications.
2. Automatic failover
If the primary loses contact with the other nodes for more than 10 seconds, the remaining nodes elect a new primary.
The secondary that receives a majority of votes becomes the new primary.
A replica set gives an application the option of placing members in different data centers.
You can also assign different priorities to members to control elections.
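As a sketch of how priorities are set, a member's priority can be changed from the mongo shell with rs.conf() and rs.reconfig(), run against the primary; the member index 0 and priority value below are only examples:

```javascript
// In the mongo shell, connected to the primary of the replica set:
cfg = rs.conf()              // fetch the current replica set configuration
cfg.members[0].priority = 2  // example: make member 0 preferred in elections
rs.reconfig(cfg)             // apply the new configuration (may trigger an election)
```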
Sharding converts a replica set into a sharded cluster
1. Deploy a test replica set
Create the first replica set, named firstset:
1.1 Create a data directory for each replica set member:
/data/example/firstset1
/data/example/firstset2
/data/example/firstset3
To create the directories:
mkdir -p /data/example/firstset1 /data/example/firstset2 /data/example/firstset3
1.2 Start three MongoDB instances at other terminals, as follows:
mongod --dbpath /data/example/firstset1 --port 10001 --replSet firstset --oplogSize 700 --rest --fork --logpath /data/example/firstset1/firstset1.log --logappend --nojournal --directoryperdb
mongod --dbpath /data/example/firstset2 --port 10002 --replSet firstset --oplogSize 700 --rest --fork --logpath /data/example/firstset2/firstset2.log --logappend --nojournal --directoryperdb
mongod --dbpath /data/example/firstset3 --port 10003 --replSet firstset --oplogSize 700 --rest --fork --logpath /data/example/firstset3/firstset3.log --logappend --nojournal --directoryperdb
The --oplogSize option caps each instance's operation log at 700 MB; without it, the oplog defaults to 5% of the partition's free space. Limiting the oplog size lets each instance start faster.
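Once an instance is running, the effective oplog size can be checked from the mongo shell; a session sketch (the actual values printed will differ per deployment):

```javascript
// In the mongo shell, connected to a replica set member:
db.printReplicationInfo()
// Prints the configured oplog size, how much of it is used, and the
// time range of operations the oplog currently covers.
```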
1.3 Connect to one of the MongoDB instances with the mongo shell:
mongo mongo01:10001/admin
If you are running in a production environment, or on a machine with a different hostname or IP address, replace mongo01 with the appropriate name.
1.4 Initialize the replica set in the mongo shell:
var config = {
    "_id": "firstset",
    "members": [
        {"_id": 0, "host": "mongo01:10001"},
        {"_id": 1, "host": "mongo01:10002"},
        {"_id": 2, "host": "mongo01:10003"}
    ]
}
rs.initiate(config);
{
    "info": "Config now saved locally.  Should come online in about a minute.",
    "ok": 1
}
Or:
db.runCommand(
    {"replSetInitiate":
        {"_id": "firstset",
         "members": [
             {"_id": 0, "host": "mongo01:10001"},
             {"_id": 1, "host": "mongo01:10002"},
             {"_id": 2, "host": "mongo01:10003"}
         ]
        }
    }
)
1.5 Create and insert data in the mongo shell:
use mydb
switched to db mydb
animal = ["dog", "tiger", "cat", "lion", "elephant", "bird", "horse", "pig", "rabbit", "cow", "dragon", "snake"];
for (var i = 0; i < 100000; i++) {
    name = animal[Math.floor(Math.random() * animal.length)];
    user_id = i;
    boolean = [true, false][Math.floor(Math.random() * 2)];
    added_at = new Date();
    number = Math.floor(Math.random() * 10001);
    db.test_collection.save({"name": name, "user_id": user_id, "boolean": boolean, "added_at": added_at, "number": number});
}
The loop above inserts 100,000 documents into the collection test_collection, which can take several minutes depending on the system.
Each inserted document contains the fields name, user_id, boolean, added_at, and number.
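As a standalone sketch (plain JavaScript, runnable outside the mongo shell), the loop produces documents shaped like this:

```javascript
// Build one document the same way the insert loop does.
var animal = ["dog", "tiger", "cat", "lion", "elephant", "bird",
              "horse", "pig", "rabbit", "cow", "dragon", "snake"];

var doc = {
  name: animal[Math.floor(Math.random() * animal.length)],
  user_id: 0,                                  // the loop counter
  boolean: [true, false][Math.floor(Math.random() * 2)],
  added_at: new Date(),
  number: Math.floor(Math.random() * 10001)    // integer in 0..10000
};

console.log(JSON.stringify(doc));
```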
2. Deploy the sharding infrastructure
Create three config servers to hold the metadata of the cluster.
For a development or test environment a single config server is sufficient; a production environment requires three config servers. They need only a small amount of resources, because they hold only metadata.
2.1 Create the data directories for the config servers:
/data/example/config1
/data/example/config2
/data/example/config3
To create the directories:
mkdir -p /data/example/config1 /data/example/config2 /data/example/config3
2.2 In another terminal, start the config servers:
mongod --configsvr --dbpath /data/example/config1 --port 20001 --fork --logpath /data/example/config1/config1.log --logappend
mongod --configsvr --dbpath /data/example/config2 --port 20002 --fork --logpath /data/example/config2/config2.log --logappend
mongod --configsvr --dbpath /data/example/config3 --port 20003 --fork --logpath /data/example/config3/config3.log --logappend
2.3 In another terminal, start the mongos instance:
mongos --configdb mongo01:20001,mongo01:20002,mongo01:20003 --port 27017 --chunkSize 1 --fork --logpath /data/example/mongos.log --logappend
If you are using the collection created earlier, or any test environment, you can use the smallest chunk size (1 MB). The default chunk size of 64 MB means the cluster must hold at least 64 MB of data before MongoDB's automatic sharding begins. Such a small chunk size should not be used in a production environment.
The --configdb option specifies the config servers. The mongos instance runs on MongoDB's default port, 27017.
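Once mongos is running, you can inspect the cluster state from a shell connected to it; a session sketch using the standard helper:

```javascript
// In a mongo shell connected to the mongos instance:
db.printShardingStatus()
// Prints the sharding version, the shards that have been added,
// and the databases in the cluster.
```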
2.4 Add the first shard through mongos by executing the following commands in a new terminal:
2.4.1 Connect to the mongos instance:
mongo mongo01:27017/admin
2.4.2 Use the addShard command to add the first shard:
db.runCommand({ addShard: "firstset/mongo01:10001,mongo01:10002,mongo01:10003" })
2.4.3 The following output indicates success:
{ "shardAdded": "firstset", "ok": 1 }
3. Deploy another test replica set
Create another replica set, named secondset:
3.1 Create a data directory for each replica set member:
/data/example/secondset1
/data/example/secondset2
/data/example/secondset3
To create the directories:
mkdir -p /data/example/secondset1 /data/example/secondset2 /data/example/secondset3
3.2 Start three MongoDB instances at other terminals, as follows:
mongod --dbpath /data/example/secondset1 --port 30001 --replSet secondset --oplogSize 700 --rest --fork --logpath /data/example/secondset1/secondset1.log --logappend --nojournal --directoryperdb
mongod --dbpath /data/example/secondset2 --port 30002 --replSet secondset --oplogSize 700 --rest --fork --logpath /data/example/secondset2/secondset2.log --logappend --nojournal --directoryperdb
mongod --dbpath /data/example/secondset3 --port 30003 --replSet secondset --oplogSize 700 --rest --fork --logpath /data/example/secondset3/secondset3.log --logappend --nojournal --directoryperdb
3.3 Connect to one of the MongoDB instances with the mongo shell:
mongo mongo01:30001/admin
3.4 Initialize the replica set in the mongo shell:
db.runCommand(
    {"replSetInitiate":
        {"_id": "secondset",
         "members": [
             {"_id": 0, "host": "mongo01:30001"},
             {"_id": 1, "host": "mongo01:30002"},
             {"_id": 2, "host": "mongo01:30003"}
         ]
        }
    }
)
3.5 Add the replica set to the sharded cluster:
db.runCommand({ addShard: "secondset/mongo01:30001,mongo01:30002,mongo01:30003" })
On success it returns:
{ "shardAdded": "secondset", "ok": 1 }
3.6 Run the listShards command to confirm that both shards were added successfully:
db.runCommand({ listShards: 1 })
{
    "shards": [
        {
            "_id": "firstset",
            "host": "firstset/mongo01:10001,mongo01:10002,mongo01:10003"
        },
        {
            "_id": "secondset",
            "host": "secondset/mongo01:30001,mongo01:30002,mongo01:30003"
        }
    ],
    "ok": 1
}