A year ago I set up a MongoDB cluster. It has run fairly well, but a few times the service has hung completely. Having worked through the problem, I'd like to share the process:
Symptoms:
Business queries were slow, and connection exceptions appeared:
{"serverUsed": "/10.6.19.80:10013", "errmsg": "exception: could not run map command on all shards for ns Tloc.fileprops and query { author: { $in: [ \"exception\" ] }, type: { $in: [ 0, 1 ] } } :: caused by :: socket exception [CONNECT_ERROR] for shard2/10.6.19.91:10016", "code": 11002, "ok": 0.0}
{"serverUsed": "/10.6.19.108:10013", "ok": 0.0, "errmsg": "MR post processing failed: { errmsg: \"exception: could not initialize cursor across all shards because: socket exception [SEND_ERROR] for 10.6.19.91:10016 @ shard2/10.6.19.91:10016\", code: 14827, ok: 0.0 }"}
At the time, every mongod shard, mongos router, and config server process was still running; the routers' I/O was not high, and memory and CPU usage were acceptable. Yet business queries were stuck, making the service unavailable.
Cause of failure:
Connecting locally with the mongo shell, switching to the business db, and inspecting the running operations with db.currentOp(), I found that operations had begun to pile up in a blocked state. I also noticed that the accumulating operations all targeted the same shard, so that shard was presumably the problem. There are several possibilities:
1. Disk I/O exception on that shard
2. Unreasonable query parameters, so the query really is that slow
Either way, a problem on one shard must not be allowed to make the entire cluster unusable.
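To make the triage above concrete: in the mongo shell you would scan db.currentOp().inprog for long-running active operations. The helper below shows the same filtering logic as plain JavaScript over a mocked inprog array, so the shape of a currentOp document is visible; the field names (opid, active, secs_running, ns) are real currentOp fields, while the 60-second threshold and the sample documents are my own illustrative assumptions:

```javascript
// Filter currentOp-style documents for long-running, possibly blocked
// operations. In the mongo shell the input would be db.currentOp().inprog.
function findStuckOps(inprog, thresholdSecs) {
  return inprog
    .filter(function (op) {
      return op.active && (op.secs_running || 0) >= thresholdSecs;
    })
    .map(function (op) {
      return { opid: op.opid, ns: op.ns, secs: op.secs_running };
    });
}

// Mocked currentOp output for illustration:
var inprog = [
  { opid: 101, active: true,  secs_running: 350, ns: "Tloc.fileprop", op: "query" },
  { opid: 102, active: true,  secs_running: 2,   ns: "Tloc.fileprop", op: "query" },
  { opid: 103, active: false, secs_running: 0,   ns: "Tloc.other",    op: "none"  }
];

var stuck = findStuckOps(inprog, 60);
// stuck → [ { opid: 101, ns: "Tloc.fileprop", secs: 350 } ]
```

If every document returned this way names the same shard in its connection info, that shard is your suspect.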
Failure recovery:
When production availability is at stake, the situation is urgent; now that the cause is known, service should be restored immediately. There are two ways to do this:
1. Kill the operations one by one with db.killOp(opid). (Mongo has no bulk kill, and even if you restart the router, those operations remain registered on the config servers.) This is not practical here, though: blocked operations accumulate quickly, and you may not even be able to connect with the mongo shell anymore, leaving the whole service paralyzed;
2. Forcibly restart the shard. This is what I currently use, and it is the faster and more effective method.
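For completeness, method 1 amounts to a loop over currentOp that kills each long-running operation. In the mongo shell that is a one-liner (shown in the comment); below, db is mocked so the loop's logic can be demonstrated standalone. The 300-second threshold and the sample opids are assumptions, not values from the incident:

```javascript
// In the mongo shell, method 1 would be:
//   db.currentOp().inprog.forEach(function (op) {
//     if (op.active && op.secs_running > 300) db.killOp(op.opid);
//   });
// Here db is a mock so the logic runs without a server.
var killed = [];
var db = {
  currentOp: function () {
    return { inprog: [
      { opid: 201, active: true, secs_running: 900 },  // stuck op
      { opid: 202, active: true, secs_running: 5   }   // healthy op
    ] };
  },
  killOp: function (opid) { killed.push(opid); }
};

db.currentOp().inprog.forEach(function (op) {
  if (op.active && op.secs_running > 300) db.killOp(op.opid);
});
// killed → [ 201 ]
```

As the post notes, this loses the race once operations pile up faster than you can kill them, which is why method 2 was used instead.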
To restart the service, you do not need to restart all the servers; just restart the blocked shard:
1. Identify the suspect shard via db.currentOp() or the shard's mongod logs
2. On that shard's machine, kill the mongod process
3. Start the mongod process again
4. On each routing server in turn, execute db.shutdownServer() and then restart the mongos process
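As a sketch, the four steps above might look like the following on the command line. The ports come from this cluster's layout (shard2 on 10016, routers on 10013), but the config file paths and the pattern passed to pgrep are hypothetical placeholders for your own deployment, not exact commands to copy:

```shell
# Step 2: on the stuck shard's machine (shard2 in this incident),
# find and kill the blocked mongod
pgrep -f "mongod.*10016"        # note the PID it prints
kill <mongod_pid>               # escalate to kill -9 only if it refuses to exit

# Step 3: start the shard's mongod again
# (/etc/mongod-shard2.conf is a placeholder for your real config)
mongod --config /etc/mongod-shard2.conf

# Step 4: on each routing server in turn, shut down and restart mongos
mongo --port 10013 admin --eval "db.shutdownServer()"
mongos --config /etc/mongos.conf
```

Restarting the routers after the shard clears the stale state they hold about the blocked operations.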
At this point, the blocked operations on the application side should be gone, and you can confirm the cluster is available by running db.xxx.find() against the routing service.
Reprint: please credit the original site: http://www.cnblogs.com/lekko/p/5653940.html
MongoDB Cluster Hang Problem