MongoDB Shard Architecture
Earlier we described how to configure the MongoDB master-slave architecture and the replica set schema. Both configurations share the same limitation: only the primary node accepts writes, while secondary nodes can only serve reads. If the primary node comes under heavy write pressure, it remains a performance bottleneck.
MongoDB offers another kind of cluster, sharding, which can keep up with large-scale data growth. When MongoDB stores massive amounts of data, a single machine may not be enough to hold it all, or may not be able to provide acceptable read and write throughput.
At this point, we can partition the data across multiple machines so that the database system can store and process more data.
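As a preview of what this looks like in practice, here is a minimal mongo shell sketch of sharding a hypothetical shop.orders collection on a hashed key; the database, collection, and shard key are illustrative assumptions, not part of the setup below:

// Illustrative example only: "shop" and "orders" are assumed names
sh.enableSharding("shop")                             // allow sharding for the "shop" database
sh.shardCollection("shop.orders", { _id: "hashed" })  // spread documents across shards by hashed _id
sh.status()                                           // shows how chunks are distributed across the shards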
The figure below shows the MongoDB shard cluster architecture.
There are three main components as described below:
Shard: Stores the actual data chunks. In a production environment, the shard role is usually taken by a replica set made up of several machines, to prevent a single point of failure on any one host.
Config server: A mongod instance that stores metadata for the entire cluster, including chunk information.
Query routers: The front-end routers (mongos) that clients connect to. They make the entire cluster look like a single database, so front-end applications can use it transparently (see the startup sketch after this list).
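To make the query router role concrete, the sketch below shows how a mongos instance could be started against the config replica set built later in this article; the mongos port and log path are assumptions:

# Hedged sketch: mongos pointed at the config replica set "conf" configured below
mongos --configdb conf/127.0.0.1:27100,127.0.0.1:27101 --port 27017 --fork --logpath /var/log/mongo/mongos.log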
Shard Cluster Configuration
To create a config replica set:
Note: Starting with MongoDB 3.4, the config server must be deployed as a replica set rather than a single node, to prevent a single point of failure.
Config server configuration: I am testing here on a single machine, so each instance gets a different dbpath, logpath, and port.
Each config server instance must set configsvr = true and a replSet replica set name; as noted above, from MongoDB 3.4 onward the config server must be a replica set and cannot be a single node.
# cat conf1.conf
dbpath=/data/mongo/config1
configsvr = true
logpath=/var/log/mongo/config/conf1.log
logappend = true
fork = true
port = 27100
bind_ip=127.0.0.1
replSet = conf

# cat conf2.conf
dbpath=/data/mongo/config2
configsvr = true
logpath=/var/log/mongo/config/conf2.log
logappend = true
fork = true
port = 27101
bind_ip=127.0.0.1
replSet = conf
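For reference, the same first instance can also be expressed in the YAML configuration format that MongoDB 3.x supports; this is a sketch of equivalent settings, assuming a file named conf1.yaml:

# conf1.yaml -- YAML equivalent of conf1.conf above (file name assumed)
storage:
  dbPath: /data/mongo/config1
systemLog:
  destination: file
  path: /var/log/mongo/config/conf1.log
  logAppend: true
processManagement:
  fork: true
net:
  port: 27100
  bindIp: 127.0.0.1
replication:
  replSetName: conf
sharding:
  clusterRole: configsvr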
Start the two config server instances:
mongod --config conf1.conf
mongod --config conf2.conf
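Before initializing the replica set, it is worth confirming that both processes came up; a quick check (a usage sketch, output omitted):

ps -ef | grep mongod                                    # both instances should be listed
mongo --port 27100 --eval 'db.runCommand({ ping: 1 })'  # should return { "ok" : 1 }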
Config server replica set initialization
> rs.initiate({ _id: "conf", members: [ { _id: 0, host: "127.0.0.1:27100" }, { _id: 1, host: "127.0.0.1:27101" } ] })
{ "ok" : 1 }
conf:PRIMARY> rs.status()
{
        "set" : "conf",
        "date" : ISODate("2018-04-20T08:56:14.588Z"),
        "myState" : 1,
        "term" : NumberLong(1),
        "configsvr" : true,
        "heartbeatIntervalMillis" : NumberLong(2000),
        "optimes" : {
                "lastCommittedOpTime" : { "ts" : Timestamp(1524214563, 1), "t" : NumberLong(1) },
                "readConcernMajorityOpTime" : { "ts" : Timestamp(1524214563, 1), "t" : NumberLong(1) },
                "appliedOpTime" : { "ts" : Timestamp(1524214563, 1), "t" : NumberLong(1) },
                "durableOpTime" : { "ts" : Timestamp(1524214563, 1), "t" : NumberLong(1) }
        },
        "members" : [
                {
                        "_id" : 0,
                        "name" : "127.0.0.1:27100",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 49,
                        "optime" : { "ts" : Timestamp(1524214563, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-04-20T08:56:03Z"),
                        "infoMessage" : "could not find member to sync from",
                        "electionTime" : Timestamp(1524214561, 1),
                        "electionDate" : ISODate("2018-04-20T08:56:01Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "127.0.0.1:27101",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 24,
                        "optime" : { "ts" : Timestamp(1524214563, 1), "t" : NumberLong(1) },
                        "optimeDurable" : { "ts" : Timestamp(1524214563, 1), "t" : NumberLong(1) },
                        "optimeDate" : ISODate("2018-04-20T08:56:03Z"),
                        "optimeDurableDate" : ISODate("2018-04-20T08:56:03Z"),
                        "lastHeartbeat" : ISODate("2018-04-20T08:56:13.193Z"),
                        "lastHeartbeatRecv" : ISODate("2018-04-20T08:56:13.673Z"),
                        "pingMs" : NumberLong(0),
                        "syncingTo" : "127.0.0.1:27100",
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
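With one member reporting PRIMARY and the other SECONDARY as above, the config replica set is healthy. As a quick additional check from the same shell (a usage sketch):

conf:PRIMARY> rs.conf()._id           // prints "conf", the replica set name
conf:PRIMARY> db.isMaster().ismaster  // true when connected to the primary
true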
MongoDB Shard Schema Configuration