The MongoDB profiler is a good tool for analyzing slow queries while the database is running. By default, a "slow" query is one that takes more than 100 ms; this threshold can be changed.
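Before changing anything, you can check the current profiler settings from the mongo shell (the output values below are illustrative for a server with profiling off and the default threshold):

```javascript
db.getProfilingStatus()
// e.g. { "was" : 0, "slowms" : 100 } -- profiling off, 100 ms threshold
```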
Let's take a look at how to use it.
Step 1: Log on to the primary server
Assume there is a MongoDB replica set cluster; log on to the primary with the mongo shell.
Step 2: Set the profiling level
db.setProfilingLevel(1)
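For reference, setProfilingLevel accepts three levels: 0 disables profiling, 1 records only operations slower than the slowms threshold, and 2 records every operation. A quick sketch:

```javascript
db.setProfilingLevel(0)  // off
db.setProfilingLevel(1)  // only slow operations (default threshold: 100 ms)
db.setProfilingLevel(2)  // every operation (verbose; use only briefly)
```

Level 2 adds noticeable overhead, so it is usually reserved for short debugging sessions.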
Step 3: Find slow queries
MongoDB now writes slow statements to the db.system.profile collection. The following finds the most recent slow query:
rs1:PRIMARY> db.system.profile.find().limit(1).sort({ts: -1}).pretty()
{
  "op" : "query",
  "ns" : "kaimei.digital_message",
  "query" : {
    "query" : {
      "display_id" : {
        "$in" : [
          ObjectId("52312efca9bb51d66fa724a8"),
          ObjectId("52312efca9bb51d66fa724ac"),
          ObjectId("52312efca9bb51d66fa724ae"),
          ...
          ObjectId("527710eb33f6792263ddba44")
        ]
      },
      "status" : { "$nin" : [ "success", "deprecated" ] }
    },
    "orderby" : { "_id" : 1 }
  },
  "ntoreturn" : 0,
  "ntoskip" : 0,
  "nscanned" : 210342,
  "keyUpdates" : 0,
  "numYield" : 454,
  "lockStats" : {
    "timeLockedMicros" : { "r" : NumberLong(1755835), "w" : NumberLong(0) },
    "timeAcquiringMicros" : { "r" : NumberLong(1145626), "w" : NumberLong(1599) }
  },
  "nreturned" : 0,
  "responseLength" : 20,
  "millis" : 1147,
  "ts" : ISODate("2013-12-10T13:08:05.839Z"),
  "client" : "192.168.1.58",
  "allUsers" : [ ],
  "user" : ""
}
Step 4: Create an index to optimize the query
Now let's see how to optimize it. Running getIndexes() shows that display_id and status are not indexed, so create an index:
rs1:PRIMARY> db.digital_message.ensureIndex({display_id: 1, status: 1}, {background: true})
rs1:PRIMARY> db.digital_message.getIndexes()
[
  {
    "v" : 1,
    "key" : { "_id" : 1 },
    "ns" : "kaimei.digital_message",
    "name" : "_id_"
  },
  {
    "v" : 1,
    "key" : { "display_id" : 1, "status" : 1 },
    "ns" : "kaimei.digital_message",
    "name" : "display_id_1_status_1",
    "background" : true
  }
]
The background: true option is important: it ensures the database can still serve requests while the index is being built. Here I create a compound index on the two fields.
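To confirm the new index is actually used, you can run explain() on the query. The field values below are placeholders, and the exact output shape depends on your MongoDB version:

```javascript
db.digital_message.find(
  { display_id: { $in: [ /* ids */ ] },
    status: { $nin: ["success", "deprecated"] } }
).explain()
// On 2.x, look for "cursor" : "BtreeCursor display_id_1_status_1";
// on 3.x+, look for an "IXSCAN" stage in "winningPlan".
// Either way, nscanned should drop sharply compared with before.
```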
Now try again:
    ] },
    "status" : { "$nin" : [ "success", "deprecated" ] } },
  "updateobj" : { "$set" : { "status" : "sending" } },
  "nscanned" : 37693,
  "nupdated" : 0,
  "keyUpdates" : 0,
  "numYield" : 3,
  "lockStats" : {
    "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(248113) },
    "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(114619) }
  },
  "millis" : 149,
  "ts" : ISODate("2013-12-10T13:30:46.400Z"),
  "client" : "192.168.1.55",
  "allUsers" : [ ],
  "user" : ""
}
We can see that the speed has improved a lot. Since my $in query touches at least 1,500 documents, a query time of 149 ms is acceptable.
When you enable the profiler, you can also change the slow-query threshold. For example, to treat only operations slower than 200 ms as slow:
db.setProfilingLevel(1, 200)
Step 5: Turn off profiling and clear the profile collection
db.setProfilingLevel(0)
db.system.profile.drop()
db.createCollection("system.profile")