Shortly after a project went live, it suddenly became noticeably slower.
To reproduce the problem, I ran an insert test:
> for (var i = 1; i <= 100000; i++) {
...   db.test_customer.insert({
...     "_id": i,
...     "USER_ID": i,
...     "create_dt": new Date()  // create_dt appears in the query output below
...     // ... (30 fields in total)
...   })
... }
The test insert results are as follows:
> db.test_customer.find({}, {_id:1, create_dt:1}).sort({_id:-1}).limit(2)
{ "_id" : 11176, "create_dt" : ISODate("2014-07-22T03:10:10.435Z") }
{ "_id" : 11175, "create_dt" : ISODate("2014-07-22T03:10:10.420Z") }
> db.test_customer.find({}, {_id:1, create_dt:1}).sort({_id:1}).limit(2)
{ "_id" : 1, "create_dt" : ISODate("2014-07-22T03:01:23.935Z") }
{ "_id" : 2, "create_dt" : ISODate("2014-07-22T03:01:24.187Z") }
>
Comparing the create_dt timestamps of the first and last documents, inserting just over 11,000 records took nearly 10 minutes. How could it be so slow?
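Rather than inferring the elapsed time from create_dt, the shell can time the loop directly. A minimal sketch (the 10,000-document count here is just for illustration):

> var start = new Date();
> for (var i = 1; i <= 10000; i++) { db.test_customer.insert({ "_id": i, "create_dt": new Date() }) }
> print("elapsed ms: " + (new Date() - start));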
Checking with mongotop showed that read and write activity was concentrated on a single collection.
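For reference, mongotop reports the time spent reading and writing each collection at a fixed interval; an invocation like the following (the port is an assumption, and the trailing 5 is the reporting interval in seconds):

mongotop --port 27017 5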
When I started a second mongod process and ran the same insert code against it, the inserts were normal: 10,000 documents took only a few seconds.
This ruled out problems with the server hardware and server configuration.
mongod --port [otherport] --dbpath /otherpath/otherdb
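To run the insert test against that second instance, point the mongo shell at the same port:

mongo --port [otherport]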
This pointed to the database's data file reads and writes as the likely bottleneck.
I decided to split the heavily read and written collection (the "table") out into a separate database.
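The write-up doesn't show how the split was done; one way on a single mongod is the renameCollection admin command, which can move a collection between databases on the same instance (the database names below are placeholders, not from the original):

> use admin
> db.adminCommand({ renameCollection: "olddb.test_customer", to: "hotdb.test_customer" })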
When the split was finished, I ran the insert test again and looked at the inserted data:
> db.test_customer.find({}, {create_dt:1}).sort({_id:1}).limit(2)
{ "_id" : 1, "create_dt" : ISODate("2014-07-22T10:06:51.502Z") }
{ "_id" : 2, "create_dt" : ISODate("2014-07-22T10:06:51.509Z") }
> db.test_customer.find({}, {create_dt:1}).sort({_id:-1}).limit(2)
{ "_id" : 10000, "create_dt" : ISODate("2014-07-22T10:06:58.016Z") }
{ "_id" : 9999, "create_dt" : ISODate("2014-07-22T10:06:58.015Z") }
>
This time inserting 10,000 records took only about 7 seconds, far better than before.
So the bottleneck really was in data file reads and writes, and when one collection is much hotter than the rest, it is worth splitting it into its own database. This also matches MongoDB's behavior in that era: with the MMAPv1 storage engine, locking was per-database, so a hot collection could stall every other collection sharing its database.
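On a 2.x-era mongod, one way to check for this kind of contention is to look at the per-database lock statistics reported by serverStatus:

> db.serverStatus().locks

mongostat's locked-database column tells a similar story at a glance.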