Two problems encountered in today's MongoDB operations
The overall symptom is that the MongoDB cluster cannot be entered. The two underlying exceptions are:
The first is data corruption caused by an improper shutdown of mongod. MongoDB automatically locks itself (it leaves a mongod.lock file behind) to prevent further corruption; this is done so that users cannot keep writing and destroy the data already stored. In this case you need to delete the mongod.lock file under mongodb/data/ and then run ./mongod --repair to repair the data. If you can get in afterwards, great; if you still cannot, carefully review the mongos.log log.
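A minimal command sketch of that recovery path, assuming the data directory mentioned above (mongodb/data/) and a mongod binary in the current directory; the paths are illustrative, adjust them to your own deployment:

# make sure mongod is fully stopped before touching anything
rm mongodb/data/mongod.lock                # remove the stale lock left by the unclean shutdown
./mongod --dbpath mongodb/data/ --repair   # rebuild the data files
./mongod --dbpath mongodb/data/            # start normally again and watch the log if it still fails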
The problem I ran into here is that MongoDB's journal ran out of space. My environment is a virtual machine that was not given enough disk, only about 3 GB for the journal, so the journal had no room left to write. Under normal circumstances, with a replica set, MongoDB generates journal files of at most 1 GB each; when a file fills up, a new 1 GB file is generated. You can also set smallfiles=true so that journal files are reduced to 128 MB. I do not know the exact difference between the 1 GB and 128 MB files; I think it is mainly about reducing how much journal is stored, and the recovery level (granularity) after MongoDB is damaged and repaired is probably different, but the basic functionality after recovery is still available either way. Or, if a test environment does not need recovery at all, you can turn the journal off. Enabling and disabling the journal differs between 32-bit and 64-bit Linux; briefly it looks like this (a sample configuration follows the table):
Linux architecture | Journal on | Journal off
32-bit | journal=true | journal=false
64-bit | journal=true (the default) | nojournal=true
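A sketch of how these options look in an old-style (INI) mongod configuration file; the dbpath, logpath, and port lines are illustrative, only the journal-related lines come from the discussion above:

dbpath=/data/mongodb/shard1          # illustrative path
logpath=/data/mongodb/log/shard1.log # illustrative path
port=27017                           # illustrative port
journal=true                         # enable journaling (the default on 64-bit builds)
smallfiles=true                      # cap journal files at 128 MB instead of 1 GB
# nojournal=true                     # or disable the journal entirely (test environments only)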
For more MongoDB configuration instructions see: http://www.it165.net/database/html/201402/5303.html
In a test environment, when the journal runs out of space like this, you can delete the prealloc.1 and similar files under the journal directory; in a production environment you need to expand the disk space instead to solve the problem.
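A cleanup sketch for a test environment only, assuming the default layout where the journal directory sits under the dbpath (the path is illustrative); stop mongod first, and in production add disk space instead of deleting these files:

ls /data/mongodb/journal/             # typically shows prealloc.0, prealloc.1, prealloc.2
rm /data/mongodb/journal/prealloc.*   # reclaim the preallocated journal space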
The other exception is this: even after the previous problem was resolved, MongoDB still could not be entered.
Only after a closer look at the mongos.log file do you find a problem that is extremely hard to spot:
[mongosMain] warning: couldn't check dbhash on config server 192.168.178.128:20000 :: caused by :: 11002 socket exception [CONNECT_ERROR] server [192.168.178.128:20000] mongos connectionpool error: couldn't connect to server 192.168.178.128:20000
It is very clear that the config service at 192.168.178.128:20000 is the problem, but the config server starts normally and its console output looks fine; only mongos.log keeps reporting the error. After tossing this around for a long time, I finally found the cause: it was the result of my own mistake.
When the MongoDB cluster was built, the IP was 192.168.178.128; the IP has since changed, so the config-server setting in mongos.cnf needs to be modified and pointed at the current IP 192.168.16.128.
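A sketch of the relevant change in mongos.cnf (old-style INI options; the logpath and port are illustrative, the configdb line is the one that matters here):

logpath=/data/mongodb/log/mongos.log   # illustrative path
port=30000                             # illustrative port
# configdb=192.168.178.128:20000       # old address recorded when the cluster was built
configdb=192.168.16.128:20000          # updated to the server's current IP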
After that, MongoDB can be entered, but another problem appears:
When executing show dbs, the normal information is not displayed; instead a shard error is reported, and db.shards.find() gives me the following error:
mongos> db.shards.find()
error: {
        "$err" : "socket exception [CONNECT_ERROR] for shard1/192.168.178.128:27017,192.168.178.129:27017,192.168.178.130:27017",
        "code" : 11002,
        "shard" : "s1"
}
The original shard IP addresses were written in when the shards were added through the mongo shell, so the shard IP information now needs to be modified as well.
For the specific way to change it, see: http://www.it165.net/database/html/201309/4520.html
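The gist of that kind of change is updating the host string stored for each shard in the config database through mongos. A rough sketch under that assumption; the new addresses shown are purely illustrative, and the linked article covers the full procedure (stopping writes, restarting mongos so it refreshes its cached shard map, and so on):

// in the mongo shell connected to mongos
use config
db.shards.find()                      // inspect the current host strings
db.shards.update(
    { _id : "s1" },
    { $set : { host : "shard1/192.168.16.128:27017,192.168.16.129:27017,192.168.16.130:27017" } }
)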
However, I did not take that route, because my db.shards.find() could not even query the original shard configuration details, so I simply changed the server IP back to what it was before.
At this point my MongoDB finally came back to life. Only when problems arise do you realize how deep MongoDB goes, and it is the same with everything in the world: contradictions will come, and only by getting hands-on and digging in can you understand it more deeply and handle it more skillfully.