In the previous two posts we covered, respectively, how a sharded cluster works and how to build one. In this post we analyze the results of testing the shard cluster.
First, look at the overall state of the shard cluster; we can see that replica sets shard-a and shard-b are both healthy:
First, enable sharding on the collection
Sharding is enabled at the database level first; this is a prerequisite for sharding any collection inside it. Suppose the test database is named slidetest.
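Using the database name from the text (slidetest), enabling sharding from a mongos shell looks like this (a sketch; it assumes you are already connected to a running mongos):

```javascript
// sh.enableSharding() is the standard mongos helper: it marks the
// database as partitioned, but does not shard any collection by itself.
sh.enableSharding("slidetest")
```

Collections in the database remain unsharded until sh.shardCollection() is called on each one explicitly.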
Sharding the collection. Note: the shard key definition looks somewhat similar to an index definition, especially the unique option. When sharding an empty collection, MongoDB creates an index on each shard corresponding to the shard key. You can connect directly to a shard and run getIndexes() to verify this.
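The shard key shown later in the sh.status() output is { username: 1, _id: 1 }; sharding the collection and then verifying the automatically created index might look like this (a sketch against a running cluster):

```javascript
// Shard the collection on a compound key. Because the collection is
// empty, MongoDB builds a matching index on each shard automatically.
sh.shardCollection("slidetest.spreadsheets", { "username": 1, "_id": 1 })

// Then connect directly to one shard's mongod (not through mongos)
// and confirm that the shard-key index exists:
db.getSiblingDB("slidetest").spreadsheets.getIndexes()
```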
Second, writing to the shard cluster
Once the collection has been sharded, the cluster is ready. You can now write data through mongos, and the documents will be distributed across the shards.
Insert 4,000 documents to initialize the data:
for (var i = 0; i < 4000; i++) {
    db.spreadsheets.insert({
        "filename": "sheet-" + i,
        "updateDate": new Date(),
        "username": "albertshao",
        // new Array(1000).join("abcde") yields ~5 KB of filler per
        // document, so the collection grows large enough to split
        "data": new Array(1000).join("abcde")
    });
}
View results:
Next, let's examine what happened to the chunks. We can see that there are two of them, and their boundary values differ.
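Besides sh.status(), the chunk metadata itself lives in the config database; a quick way to list the chunks for this collection from mongos is a sketch like the following (collection names taken from the text):

```javascript
// Chunk ranges are stored in config.chunks, one document per chunk,
// each recording its min/max bounds and the shard that owns it.
db.getSiblingDB("config").chunks.find(
    { "ns": "slidetest.spreadsheets" }
).pretty()
```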
Note: $minKey and $maxKey are special BSON values used as range boundaries in comparison operations. $minKey sorts before every other BSON value, while $maxKey sorts after every other BSON value; MongoDB uses these two values as the endpoints of chunk ranges.
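Conceptually, $minKey and $maxKey are sentinels that compare below and above everything else, which is what lets the first and last chunks cover open-ended ranges. A minimal plain-JavaScript sketch of that idea (hypothetical names, not MongoDB's actual BSON comparison code):

```javascript
// Hypothetical sentinels standing in for BSON $minKey / $maxKey.
const MIN_KEY = Symbol("$minKey");
const MAX_KEY = Symbol("$maxKey");

// MIN_KEY sorts before everything, MAX_KEY after everything;
// ordinary values compare normally.
function compareKeys(a, b) {
    if (a === b) return 0;
    if (a === MIN_KEY || b === MAX_KEY) return -1;
    if (a === MAX_KEY || b === MIN_KEY) return 1;
    return a < b ? -1 : a > b ? 1 : 0;
}

// A chunk covers the half-open range [min, max): a chunk whose min is
// MIN_KEY therefore contains every key below its max, and a chunk
// whose max is MAX_KEY contains every key at or above its min.
function chunkContains(min, max, key) {
    return compareKeys(min, key) <= 0 && compareKeys(key, max) < 0;
}
```

This mirrors how the chunk table below can cover the entire key space with no gaps: the lowest chunk starts at $minKey and the highest ends at $maxKey.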
Continue inserting data; suppose 200,000 documents are inserted in total:
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 4, "minCompatibleVersion" : 4, "currentVersion" : 5, "clusterId" : ObjectId("545d9af5340aec0c2272afda") }
  shards:
	{ "_id" : "shard-a", "host" : "shard-a/win--20141018ko:3000,win--20141018ko:3001" }
	{ "_id" : "shard-b", "host" : "shard-b/win--20141018ko:30100,win--20141018ko:30101" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "slidetest", "partitioned" : true, "primary" : "shard-a" }
		slidetest.spreadsheets
			shard key: { "username" : 1, "_id" : 1 }
			chunks:
				shard-b	1
				shard-a	2
			{ "username" : { "$minKey" : 1 }, "_id" : { "$minKey" : 1 } } -->> { "username" : "albertshao", "_id" : ObjectId("545df80537216b1577de0251") } on : shard-b Timestamp(2, 0)
			{ "username" : "albertshao", "_id" : ObjectId("545df80537216b1577de0251") } -->> { "username" : "albertshao", "_id" : ObjectId("545e062437216b1577de1802") } on : shard-a Timestamp(2, 2)
			{ "username" : "albertshao", "_id" : ObjectId("545e062437216b1577de1802") } -->> { "username" : { "$maxKey" : 1 }, "_id" : { "$maxKey" : 1 } } on : shard-a Timestamp(2, 3)
mongos>
From the output above, we can see that username (together with _id) is used as the shard key. We can also see that there are now three chunks in total: shard-a holds two of them and shard-b holds one.
We can trace chunk splits and migrations through the changelog collection in the config database:
mongos> db.changelog.count({what: "split"})
2
mongos> db.changelog.count({what: "moveChunk.commit"}).count()
2014-11-08T20:12:09.618+0800 TypeError: Object 1 has no method 'count'
mongos> db.changelog.find({what: "moveChunk.commit"}).count()
1
mongos> db.changelog.find({what: "moveChunk.commit"})
{ "_id" : "WIN--20141018KO-2014-11-08T11:01:40-545df8141603dfc967d0fdcd", "server" : "WIN--20141018KO", "clientAddr" : "127.0.0.1:50644", "time" : ISODate("2014-11-08T11:01:40.826Z"), "what" : "moveChunk.commit", "ns" : "slidetest.spreadsheets", "details" : { "min" : { "username" : { "$minKey" : 1 }, "_id" : { "$minKey" : 1 } }, "max" : { "username" : "albertshao", "_id" : ObjectId("545df80537216b1577de0251") }, "from" : "shard-a", "to" : "shard-b", "cloned" : NumberLong(0), "clonedBytes" : NumberLong(0), "catchup" : NumberLong(0), "steady" : NumberLong(0) } }
mongos>
"MongoDB" in the Windows Platform MongoDB Shard cluster (iii)