1 Summary
In MySQL, the slow query log is often the basis for query optimization. Does MongoDB have a similar feature? Yes: the database profiler. The profiler collects data about reads, writes, cursor operations, database commands, and so on for a running instance, and can be enabled at the database level or at the instance level. It writes everything it collects to the system.profile collection, which is a capped collection. For more information see: http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
2 Instructions for use
2.1 Profiling levels
0: profiling off; no data is collected.
1: collect data only for slow operations (slower than the threshold, 100 milliseconds by default).
2: collect data for all operations.
2.2 Enabling and configuring profiling
1: Via the mongo shell:
# Check profiling status: level and slowms threshold
PRIMARY> db.getProfilingStatus()
{ "was" : 1, "slowms" : 100 }
# Check the level only
PRIMARY> db.getProfilingLevel()
1
# Set the level
PRIMARY> db.setProfilingLevel(2)
{ "was" : 1, "slowms" : 100, "ok" : 1 }
# Set both the level and the threshold (milliseconds)
PRIMARY> db.setProfilingLevel(1, 200)
{ "was" : 2, "slowms" : 100, "ok" : 1 }
Note:
1. If the level is set as above while connected to a particular database (for example, test), it takes effect only for that database. To profile the entire instance, either set the level in every database or enable profiling with the startup parameters described below.
2. Each setProfilingLevel() call returns the state (level and slowms) that was in effect before the change.
2: Without the mongo shell:
Pass the options when starting mongod:
mongod --profile=1 --slowms=200
or add 2 lines to the configuration file:
profile = 1
slowms = 200
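If the instance uses the YAML configuration file format (MongoDB 2.6 and later), the equivalent settings look like this (a sketch; the 200 ms threshold mirrors the example above):

```yaml
# Equivalent YAML configuration (MongoDB 2.6+)
operationProfiling:
  mode: slowOp            # off | slowOp | all  (corresponds to levels 0 / 1 / 2)
  slowOpThresholdMs: 200  # slow-operation threshold in milliseconds
```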
3: Disabling profiling
# Disable
PRIMARY> db.setProfilingLevel(0)
{ "was" : 1, "slowms" : 100, "ok" : 1 }
4: Modify the size of the slow query log
# Disable profiling
PRIMARY> db.setProfilingLevel(0)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
# Drop the system.profile collection
PRIMARY> db.system.profile.drop()
true
# Re-create system.profile as a 4 MB capped collection
PRIMARY> db.createCollection("system.profile", {capped: true, size: 4000000})
{ "ok" : 1 }
# Re-enable profiling
PRIMARY> db.setProfilingLevel(1)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
Note: To change the size of system.profile on a secondary, you must stop the secondary, run it as a standalone mongod, and then perform the steps above. When finished, restart it so that it rejoins the replica set.
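The secondary procedure above can be sketched as the following command sequence (the dbpath, ports, and replica-set name are hypothetical placeholders; adapt them to your deployment):

```shell
# 1. Cleanly shut down the secondary (hypothetical dbpath)
mongod --dbpath /data/db --shutdown

# 2. Restart it as a standalone instance: same dbpath, but no --replSet
#    (a different port keeps clients and other members from connecting)
mongod --dbpath /data/db --port 37017

# 3. Connect with the mongo shell and run the resize steps from section 4
#    (drop and re-create system.profile at the new size)

# 4. Shut down again, then restart with the original replica-set options
mongod --dbpath /data/db --shutdown
mongod --dbpath /data/db --port 27017 --replSet rs0
```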
3 Slow query (system.profile) description
For more information, see: http://docs.mongodb.org/manual/reference/database-profiler/
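Once profiling is enabled, system.profile can be queried like any ordinary collection from the mongo shell. A few typical queries for finding slow operations (the 200 ms threshold and the collection name are illustrative, and a connected shell is assumed):

```javascript
// Ten most recent profile entries
db.system.profile.find().sort({ ts: -1 }).limit(10).pretty()

// Operations slower than 200 ms, slowest first
db.system.profile.find({ millis: { $gt: 200 } }).sort({ millis: -1 })

// Slow operations against one collection (illustrative namespace)
db.system.profile.find({ ns: "onroad.route_model" }).sort({ ts: -1 })
```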
3.1 Parameter meanings
{
    "op" : "query",                 // operation type: insert, query, update, remove, getmore, or command
    "ns" : "onroad.route_model",    // namespace (database.collection) the operation ran against
    "query" : {
        "$query" : { "user_id" : 314436841, "data_time" : { "$gte" : 1436198400 } },
        "$orderby" : { "data_time" : 1 }
    },
    "ntoskip" : 0,                  // number of documents skipped via the skip() method
    "nscanned" : 2,                 // number of index entries MongoDB scanned to execute the operation.
                                    // If nscanned is much higher than nreturned, the database scanned many
                                    // documents to locate the targets; consider adding an index.
    "nscannedObjects" : 1,          // number of documents scanned in the collection for this operation
    "keyUpdates" : 0,               // number of index key updates. Changing an index key carries a small cost,
                                    // because the database must delete the old key and insert the new one
                                    // into the B-tree index.
    "numYield" : 1,                 // number of times the operation yielded to let other operations complete.
                                    // Typically an operation yields when it needs data that is not yet in
                                    // memory, so that other in-memory operations can run while MongoDB
                                    // reads in the data.
    "lockStats" : {                 // lock statistics. R: global read lock; W: global write lock;
                                    // r: read lock on a specific database; w: write lock on a specific database
        "timeLockedMicros" : {      // time spent holding each lock. For operations that take multiple locks,
                                    // such as locking the local database to update the oplog, this can be
                                    // longer than the total operation time (millis).
            "r" : NumberLong(1089485),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {   // time spent waiting to acquire each lock
            "r" : NumberLong(102),
            "w" : NumberLong(2)
        }
    },
    "nreturned" : 1,                // number of documents returned
    "responseLength" : 1669,        // byte length of the response. If this is large, consider projecting
                                    // only the fields you need.
    "millis" : 544,                 // time the operation took, in milliseconds
    "execStats" : {                 // execution statistics for query operations (an empty document for other
                                    // operation types). A tree-like structure in which each node describes
                                    // one stage of query execution.
        "type" : "LIMIT",           // a limit() restricted the number of results returned
        "works" : 2,
        "yields" : 1,
        "unyields" : 1,
        "invalidates" : 0,
        "advanced" : 1,
        "needTime" : 0,
        "needFetch" : 0,
        "isEOF" : 1,                // whether this stage reached end of stream
        "children" : [
            {
                "type" : "FETCH",   // fetch the documents pointed to by the index
                "works" : 1,
                "yields" : 1,
                "unyields" : 1,
                "invalidates" : 0,
                "advanced" : 1,
                "needTime" : 0,
                "needFetch" : 0,
                "isEOF" : 0,
                "alreadyHasObj" : 0,
                "forcedFetches" : 0,
                "matchTested" : 0,
                "children" : [
                    {
                        "type" : "IXSCAN",    // scan index keys
                        "works" : 1,
                        "yields" : 1,
                        "unyields" : 1,
                        "invalidates" : 0,
                        "advanced" : 1,
                        "needTime" : 0,
                        "needFetch" : 0,
                        "isEOF" : 0,
                        "keyPattern" : "{ user_id: 1.0, data_time: -1.0 }",
                        "boundsVerbose" : "field #0['user_id']: [314436841, 314436841], field #1['data_time']: [1436198400, inf.0]",
                        "isMultiKey" : 0,
                        "yieldMovedCursor" : 0,
                        "dupsTested" : 0,
                        "dupsDropped" : 0,
                        "seenInvalidated" : 0,
                        "matchTested" : 0,
                        "keysExamined" : 2,
                        "children" : [ ]
                    }
                ]
            }
        ]
    },
    "ts" : ISODate("2015-10-15T07:41:03.061Z"),    // when the command executed
    "client" : "10.10.86.171",                     // client IP or hostname
    "allUsers" : [ { "user" : "martin_v8", "db" : "onroad" } ],
    "user" : "[email protected]"
}
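As a quick triage aid, the ratio of nscanned to nreturned from the fields above shows how selective an operation was: a high ratio usually means a missing or poor index. The sketch below is plain JavaScript runnable in Node; the sample documents and the 10x threshold are illustrative assumptions, not profiler output:

```javascript
// Minimal sketch: flag profile entries whose scanned/returned ratio
// suggests a missing or inefficient index. The sample documents and
// the 10x threshold are illustrative assumptions.
const sampleProfileDocs = [
  { op: "query", ns: "onroad.route_model", nscanned: 2,    nreturned: 1, millis: 544 },
  { op: "query", ns: "onroad.users",       nscanned: 5000, nreturned: 3, millis: 812 },
];

function scanRatio(doc) {
  // Avoid division by zero for operations that return nothing.
  return doc.nscanned / Math.max(doc.nreturned, 1);
}

function flagInefficient(docs, ratioThreshold = 10) {
  return docs
    .filter((d) => scanRatio(d) >= ratioThreshold)
    .map((d) => ({ ns: d.ns, ratio: scanRatio(d), millis: d.millis }));
}

const flagged = flagInefficient(sampleProfileDocs);
console.log(flagged); // only the onroad.users query exceeds the threshold
```

In a real deployment the same calculation could be run over documents read from system.profile; here the input is hard-coded so the logic can be followed in isolation.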
This article is from the blog "A Struggling Little Ops Guy"; please keep this source: http://yucanghai.blog.51cto.com/5260262/1705195
The slow query analysis of MongoDB