This post covers the administration of a sharded cluster. Since detailed examples were worked through in the previous five articles, no full example is repeated here.
I. Monitoring
The sharded cluster is one of the most complex pieces of the whole system, so it deserves closer monitoring.
Main commands: db.serverStatus() and db.currentOp()
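As a minimal sketch of how these two commands might be used (run in the mongo shell against a mongos or mongod; the fields shown are from the standard serverStatus/currentOp output, and the 5-second threshold is an arbitrary choice for illustration):

```javascript
// Overall server health: connections, operation counters, etc.
var status = db.serverStatus();
printjson(status.connections);   // current vs. available connections
printjson(status.opcounters);    // insert/query/update/delete counts

// In-flight operations -- useful for spotting long-running migrations
var ops = db.currentOp();
ops.inprog.forEach(function (op) {
    if (op.secs_running > 5) {   // flag anything running longer than 5s
        printjson(op);
    }
});
```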
II. Manual Partitioning
Manual partitioning means splitting and migrating the chunks of a live sharded cluster by hand. Generally speaking, the more a shard is written to, the larger it grows; the moveChunk command is helpful in this situation.
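A sketch of manual splitting and migration, assuming a collection test.users sharded on { uid: 1 } and a shard named "shard0001" — all of these names are placeholders for your own cluster:

```javascript
// Split the chunk containing uid 5000 exactly at that value:
sh.splitAt("test.users", { uid: 5000 });

// Or let MongoDB split the chunk containing uid 5000 at its median:
sh.splitFind("test.users", { uid: 5000 });

// Migrate the chunk containing uid 5000 onto another shard:
sh.moveChunk("test.users", { uid: 5000 }, "shard0001");
```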
III. Adding a Shard
sh.addShard("hostname:port")
Capacity is increased this way, but be aware of the time it takes to migrate data onto the new shard; expect a migration speed of roughly 100~200 MB per minute. It is best to add new shards before the indexes and working set outgrow the existing hardware.
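For example (hostnames, port, and the replica set name "rs0" are placeholders):

```javascript
// Add a standalone mongod as a shard:
sh.addShard("mongodb1.example.net:27017");

// Or add an entire replica set as a shard:
sh.addShard("rs0/mongodb1.example.net:27017");

// Watch chunk distribution to gauge migration progress:
sh.status();
```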
IV. Removing a Shard
In rare cases you may want to remove a shard, which can be done with the removeShard command. Once the shard has been drained, you must also confirm that the shard being removed is not the primary shard of any database, which can be checked by querying the config.databases collection:
use config
db.databases.find()
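The removal flow might look like this (the shard names "shard0000"/"shard0001" and database "mydb" are placeholders):

```javascript
// Start draining the shard:
db.adminCommand({ removeShard: "shard0001" });

// Re-run the same command to poll progress; the returned "state"
// moves from "started" through "ongoing" to "completed".

// If the shard is the primary shard of some database, move that
// database's primary elsewhere before removal can finish:
db.adminCommand({ movePrimary: "mydb", to: "shard0000" });
```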
V. Unsharding a Collection
Although you can remove a shard, there is no official way to unshard a collection. The workaround is to export the data with the mongodump command and then restore it into an unsharded collection with mongorestore.
VI. Backing Up a Sharded Cluster
Backing up a sharded cluster requires the config data plus a copy of each shard's data. One approach is to export the data with the mongodump command; another is to copy the data files from one member of each shard over to another server.
With either approach, you must confirm that no chunks are migrating while the backup runs.
Stop the balancer: use config; db.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true)
Confirm again before backing up: use config; db.locks.find({ _id: "balancer" })
Don't forget to restart the balancer after the backup:
sh.setBalancerState(true)
sh.isBalancerRunning()
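The steps above can be sketched end to end in the mongo shell (the settings and locks collections are the standard ones in the config database; the 1-second poll interval is an arbitrary choice):

```javascript
// 1. Stop the balancer (upsert into config.settings):
var cfg = db.getSiblingDB("config");
cfg.settings.update({ _id: "balancer" }, { $set: { stopped: true } }, true);

// 2. Wait until no balancing round is in progress:
while (sh.isBalancerRunning()) {
    sleep(1000);
}

// 3. Confirm the balancer lock is released (state should be 0):
cfg.locks.find({ _id: "balancer" });

// ... perform the backup (mongodump or file copy) ...

// 4. Re-enable the balancer:
sh.setBalancerState(true);
```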
VII. Failover and Recovery
Shard member failure: if a shard member fails, failover to another member of the replica set happens automatically. If the cluster then behaves abnormally, restarting mongos can reset it.
Config server failure: a sharded cluster typically has three config servers. If two of them fail, the remaining config server becomes read-only and all splitting and balancing operations stop. This does not affect reads and writes against the cluster; once all three config servers are restored, the balancer resumes where it left off.
mongos failure: if a mongos process fails, simply restart it on the application server.
[MongoDB] Sharded Clusters of MongoDB on the Windows Platform (VI)