I. Introduction
When a database runs slowly, we need to identify the performance bottleneck. The first step is to find out what the database is busy with, and then find the slowest operations to optimize. MongoDB provides a database profiler that collects detailed information about write operations, cursors, and commands executed on an instance. The MongoDB profiler can be enabled at the instance level or at the database level.
The MongoDB profiler can be set to one of three levels:
0 - the profiler is disabled and collects no data.
1 - collect data only for slow operations. By default, an operation that takes more than 100 ms is considered slow.
2 - collect data for all database operations.
II. The system.profile collection
The data collected by the profiler is stored in the system.profile collection, which is a capped (fixed-size) collection. Data is written to it cyclically: once all allocated space is used up, MongoDB overwrites the oldest documents in the collection. By default, the system.profile collection is 1 MB.
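The cyclic-overwrite behavior of a capped collection can be sketched with Python's `collections.deque`. This is an illustration only: the real system.profile collection caps by size in bytes, not by document count, and the three-entry limit here is arbitrary.

```python
from collections import deque

# Model a capped collection that holds at most 3 documents; once full,
# appending a new document silently evicts the oldest one.
capped = deque(maxlen=3)
for i in range(5):
    capped.append({"op": "query", "seq": i})

remaining = [doc["seq"] for doc in capped]
print(remaining)  # only the three newest entries survive
```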
Common fields in system.profile documents include:
ts: timestamp of the operation
op: operation type; possible values include insert, query, update, remove, getmore, and command
ns: namespace (database and collection) the operation targets
query: the query document
nreturned: number of documents returned
nscanned: number of documents scanned by the operation
millis: time spent executing the operation, in milliseconds
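With these fields available, finding a bottleneck is mostly a matter of sorting by millis. A minimal Python sketch over hypothetical profiler documents (the figures are invented for illustration):

```python
# Hypothetical profiler documents using the fields described above.
profile_docs = [
    {"op": "query",  "ns": "mydb.test", "nscanned": 10000000, "nreturned": 1, "millis": 29861},
    {"op": "query",  "ns": "mydb.test", "nscanned": 1,        "nreturned": 1, "millis": 125},
    {"op": "update", "ns": "mydb.test", "nscanned": 500,      "nreturned": 0, "millis": 820},
]

# Rank operations by execution time, slowest first.
slowest = sorted(profile_docs, key=lambda d: d["millis"], reverse=True)
for doc in slowest:
    print(doc["op"], doc["ns"], doc["millis"], "ms")
```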
III. Setting the profiler level
The profiler level can be managed from the mongo shell with three methods:
db.getProfilingLevel(): returns the current profiling level
db.getProfilingStatus(): returns the current profiling level together with the slowms threshold
db.setProfilingLevel(): sets the profiling level
Let us set the profiler level to 1 and slowms to 500 ms:
db.setProfilingLevel(1, 500)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
In the result, the was field shows the previous profiler level (0), and the slowms field shows the previous slow-operation threshold (100 ms). Run db.getProfilingStatus() to see the new settings:
db.getProfilingStatus()
{ "was" : 1, "slowms" : 500 }
The output shows that the profiler level is now 1, meaning MongoDB will record all operations that take longer than 500 ms.
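The decision the profiler makes for each operation can be summarized as follows. This is a sketch of the rules described above, not MongoDB's actual implementation:

```python
def should_profile(level, op_millis, slowms=100):
    """Level 0 records nothing; level 1 records only operations
    slower than slowms; level 2 records every operation."""
    if level == 0:
        return False
    if level == 1:
        return op_millis > slowms
    return True

print(should_profile(1, 750, slowms=500))  # recorded: 750 ms exceeds 500 ms
print(should_profile(1, 200, slowms=500))  # skipped: under the threshold
print(should_profile(2, 1, slowms=500))    # level 2 records everything
```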
IV. Viewing and analyzing profiler data
First, prepare some initial data: create a mydb database, create a test collection in it, and insert 10 million documents:
use mydb
db.test.drop()
for (var i = 1; i <= 10000000; i++) db.test.insert({empno: i, name: i, address: 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'})
Clear the system.profile collection, then set the profiling level to 2 with a slow-operation threshold of 100 ms:
use mydb
db.setProfilingLevel(0)
db.system.profile.drop()
db.setProfilingLevel(2, 100)
Query documents whose empno is greater than 9999999, without an index:
db.test.find({empno: {$gt: 9999999}})
One document is returned:
{ "_id" : ObjectId("523ad4d1_fdfe550ae05d97"), "empno" : 10000000, "name" : 10000000, "address" : "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" }
Create an index on empno:
db.test.ensureIndex({empno: 1})
Re-run the query for documents whose empno is greater than 9999999; it returns the same result.
Query the profiling data stored in system.profile, sorted by timestamp:
db.system.profile.find().limit(10).sort({ts: 1}).pretty()
Sample output:
{
    "op" : "query",
    "ns" : "mydb.system.indexes",
    "query" : {
        "expireAfterSeconds" : {
            "$exists" : true
        }
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    "nscanned" : 1,
    "keyUpdates" : 0,
    "numYield" : 0,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(44),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(2),
            "w" : NumberLong(57806)
        }
    },
    "nreturned" : 0,
    "responseLength" : 20,
    "millis" : 0,
    "ts" : ISODate("2013-09-19T10:47:00.656Z"),
    "client" : "0.0.0.0",
    "allUsers" : [
        {
            "user" : "__system",
            "userSource" : "local"
        }
    ],
    "user" : "__system@local"
}
{
    "op" : "query",
    "ns" : "mydb.test",
    "query" : {
        "empno" : {
            "$gt" : 9999999
        }
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    "nscanned" : 10000000,
    "keyUpdates" : 0,
    "numYield" : 29,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(59258536),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(29469480),
            "w" : NumberLong(6)
        }
    },
    "nreturned" : 1,
    "responseLength" : 112,
    "millis" : 29861,
    "ts" : ISODate("2013-09-19T10:47:21.476Z"),
    "client" : "127.0.0.1",
    "allUsers" : [ ],
    "user" : ""
}
{
    "op" : "query",
    "ns" : "mydb.system.indexes",
    "query" : {
        "expireAfterSeconds" : {
            "$exists" : true
        }
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    "nscanned" : 1,
    "keyUpdates" : 0,
    "numYield" : 0,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(171),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(8),
            "w" : NumberLong(4)
        }
    },
    "nreturned" : 0,
    "responseLength" : 20,
    "millis" : 0,
    "ts" : ISODate("2013-09-19T10:48:00.661Z"),
    "client" : "0.0.0.0",
    "allUsers" : [
        {
            "user" : "__system",
            "userSource" : "local"
        }
    ],
    "user" : "__system@local"
}
{
    "op" : "query",
    "ns" : "mydb.system.indexes",
    "query" : {
        "expireAfterSeconds" : {
            "$exists" : true
        }
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    "nscanned" : 1,
    "keyUpdates" : 0,
    "numYield" : 0,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(41),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(3),
            "w" : NumberLong(2)
        }
    },
    "nreturned" : 0,
    "responseLength" : 20,
    "millis" : 0,
    "ts" : ISODate("2013-09-19T10:49:00.677Z"),
    "client" : "0.0.0.0",
    "allUsers" : [
        {
            "user" : "__system",
            "userSource" : "local"
        }
    ],
    "user" : "__system@local"
}
{
    "op" : "insert",
    "ns" : "mydb.system.indexes",
    "query" : {
        "v" : 1,
        "key" : {
            "empno" : 1
        },
        "ns" : "mydb.test",
        "name" : "empno_1"
    },
    "ninserted" : 1,
    "keyUpdates" : 0,
    "numYield" : 0,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(0),
            "w" : NumberLong(145940925)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(0),
            "w" : NumberLong(292871)
        }
    },
    "millis" : 145977,
    "ts" : ISODate("2013-09-19T10:51:37.246Z"),
    "client" : "127.0.0.1",
    "allUsers" : [ ],
    "user" : ""
}
{
    "op" : "query",
    "ns" : "mydb.system.indexes",
    "query" : {
        "expireAfterSeconds" : {
            "$exists" : true
        }
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    "nscanned" : 2,
    "keyUpdates" : 0,
    "numYield" : 1,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(598801),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(96316198),
            "w" : NumberLong(13)
        }
    },
    "nreturned" : 0,
    "responseLength" : 20,
    "millis" : 1107,
    "ts" : ISODate("2013-09-19T10:51:38.122Z"),
    "client" : "0.0.0.0",
    "allUsers" : [
        {
            "user" : "__system",
            "userSource" : "local"
        }
    ],
    "user" : "__system@local"
}
{
    "op" : "query",
    "ns" : "mydb.system.indexes",
    "query" : {
        "expireAfterSeconds" : {
            "$exists" : true
        }
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    "nscanned" : 2,
    "keyUpdates" : 0,
    "numYield" : 0,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(54),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(3),
            "w" : NumberLong(2)
        }
    },
    "nreturned" : 0,
    "responseLength" : 20,
    "millis" : 0,
    "ts" : ISODate("2013-09-19T10:52:38.260Z"),
    "client" : "0.0.0.0",
    "allUsers" : [
        {
            "user" : "__system",
            "userSource" : "local"
        }
    ],
    "user" : "__system@local"
}
{
    "op" : "query",
    "ns" : "mydb.test",
    "query" : {
        "empno" : {
            "$gt" : 9999999
        }
    },
    "ntoreturn" : 0,
    "ntoskip" : 0,
    "nscanned" : 1,
    "keyUpdates" : 0,
    "numYield" : 0,
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(125810),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {
            "r" : NumberLong(4),
            "w" : NumberLong(6)
        }
    },
    "nreturned" : 1,
    "responseLength" : 112,
    "millis" : 125,
    "ts" : ISODate("2013-09-19T10:52:40.258Z"),
    "client" : "127.0.0.1",
    "allUsers" : [ ],
    "user" : ""
}
From the output we can see the two executions of the same query, plus the index-creation operation.
The first query has timestamp 2013-09-19T10:47:21.476Z; nscanned is 10000000, nreturned is 1, and the operation took 29861 ms.
That query ran without an index. To optimize it, we created an index on empno. In the system.profile output, the index build appears as an insert operation against the mydb.system.indexes namespace, and the index name is empno_1. Building the index took 145977 ms.
The second query has timestamp 2013-09-19T10:52:40.258Z; nscanned is 1, nreturned is 1, and the operation took 125 ms.
The system.profile output shows that the indexed query is far faster than the unindexed one.
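The improvement can be quantified directly from the figures in the two profile documents:

```python
# Figures taken from the system.profile output above.
before = {"nscanned": 10000000, "millis": 29861}  # collection scan, no index
after = {"nscanned": 1, "millis": 125}            # index scan on empno

speedup = before["millis"] / after["millis"]
print("documents scanned: %d -> %d" % (before["nscanned"], after["nscanned"]))
print("speedup: %.0fx" % speedup)  # roughly 239x
```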