1: To find out which operations are slowing MongoDB down, first check which operations are currently running.
gechongrepl:PRIMARY> db.currentOp()
{
    "opid" : 78891,       #unique identifier of the operation; can be used to terminate it
    "active" : true,      #true means it is currently running; false means it has yielded or is waiting for another operation to give up a lock
    "secs_running" : 1,   #how long the operation has been running; useful for locating time-consuming operations
    "microsecs_running" : NumberLong(1081719),
    "op" : "getmore",     #the type of operation, usually query, insert, update, or remove; database commands are processed as queries
    "ns" : "local.oplog.rs",
    "query" : {},
    "client" : "192.168.91.132:55738",
    "desc" : "conn1534",  #can be tied back to the server log, to filter the relevant log messages
    "threadId" : "0x7f91d77a5700",
    "connectionId" : 1534,
    "waitingForLock" : false,  #whether the operation is blocked, waiting for another operation to give up a lock
    "numYields" : 0,           #how many times this operation has yielded its lock to let other operations run
    "lockStats" : {
        "timeLockedMicros" : {
            "r" : NumberLong(),
            "w" : NumberLong(0)
        },
        "timeAcquiringMicros" : {  #how long the operation had to wait to acquire the locks it needed
            "r" : NumberLong(7),
            "w" : NumberLong(0)
        }
    }
}
Conditional query
gechongrepl:PRIMARY> db.currentOp({"ns": "local.oplog.rs"})
Note that operations on local.oplog.rs, like the one above, should not normally be terminated: they belong to the replication threads, which keep requesting more operations from their sync source. If they are killed, MongoDB simply restarts them, but replication is briefly interrupted.
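When looking for slow operations it is usually more useful to filter out the internal namespaces and list only long-running user operations. A minimal sketch (the 3-second threshold and the test database name are only examples):
gechongrepl:PRIMARY> db.currentOp({"active": true, "secs_running": {"$gt": 3}, "ns": /^test\./})
Each returned document carries the opid of the operation.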
If you find a particularly time-consuming query, you can terminate it by its opid:
gechongrepl:PRIMARY> db.killOp(5299)
Not every operation can be killed: update, find, and remove operations can be terminated because they yield their locks, but an operation that is holding a lock, or is waiting for another operation to give up its lock, cannot be terminated.
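Putting the two steps together, you can loop over the matching operations and kill them in one pass. This is only a sketch under assumed values (the test.users namespace and the 10-second threshold are illustrative); double-check the filter before running it, and never point it at local.oplog.rs for the reasons given earlier.
gechongrepl:PRIMARY> db.currentOp({"active": true, "secs_running": {"$gt": 10}, "ns": "test.users"}).inprog.forEach(
...     function(op) {
...         print("killing opid " + op.opid);   // each entry carries the opid that killOp() needs
...         db.killOp(op.opid);
...     })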
One odd case: you may terminate a bulk insert and later find that the documents were inserted anyway. This happens because the bulk insert is buffered as soon as the client sends the request, and MongoDB keeps processing the writes already sitting in that buffer even after the operation is killed. The best way to avoid this behaviour is to use acknowledged writes: each write waits until the previous write has completed instead of piling requests into a buffer.
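In shells that use the 2.6+ write commands, an acknowledged write can be requested explicitly through a write concern. The call below is only a sketch (the test collection and w: 1 are illustrative), but it shows the shape of the API: the insert does not return until the server has acknowledged the write.
gechongrepl:PRIMARY> db.test.insert({x: 1}, {writeConcern: {w: 1}})
WriteResult({ "nInserted" : 1 })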
2: Turn on the system profiler
To view the current profiler level:
gechongrepl:PRIMARY> db.getProfilingLevel()
gechongrepl:PRIMARY> db.setProfilingLevel(2)
{ "was" : 0, "slowms" : 100, "ok" : 1 }
db.system.profile.find().pretty()
Setting the level to 2 means the profiler records everything: every read and write request against the database is written to the system.profile collection. This costs performance, because every write now incurs an additional write (the profile entry), and every read has to wait for a write lock so that its record can be written to system.profile.
gechongrepl:PRIMARY> db.setProfilingLevel(1, 100)
{ "was" : 2, "slowms" : 100, "ok" : 1 }
db.system.profile.find().pretty()
At level 1 the profiler records only slow operations, by default those taking longer than 100 ms. The threshold can be customized by passing it as the second argument:
gechongrepl:PRIMARY> db.setProfilingLevel(1, 500)
{ "was" : 1, "slowms" : 100, "ok" : 1 }
db.system.profile.find().pretty()
gechongrepl:PRIMARY> db.setProfilingLevel(0)
{ "was" : 1, "slowms" : 500, "ok" : 1 }
Level 0 turns the profiler off.
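Once the profiler has collected some data, system.profile can be queried like any other collection. A minimal sketch that lists the five most recent operations slower than 100 ms (the threshold is just an example; millis and ts are standard fields of profile documents):
gechongrepl:PRIMARY> db.system.profile.find({"millis": {"$gt": 100}}).sort({"ts": -1}).limit(5).pretty()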
3: Calculate the space consumption of MongoDB
Storing _id as an ObjectId is more space-efficient than storing it as a string:
gechongrepl:PRIMARY> Object.bsonsize({_id: ObjectId()})
22
gechongrepl:PRIMARY> Object.bsonsize({_id: "" + ObjectId()})
39
You can also measure a document taken straight from a collection:
gechongrepl:PRIMARY> Object.bsonsize(db.users.findOne())
0
To view the size of a whole collection, displayed in megabytes (the 1024*1024 argument scales the output to MB):
gechongrepl:PRIMARY> db.test.stats(1024*1024)
{
    "ns" : "test.test",
    "count" : 239000,
    "size" : ,
    "avgObjSize" : 82,
    "storageSize" : $,
    "numExtents" : 8,
    "nIndexes" : 1,
    "lastExtentSize" : +,
    "paddingFactor" : 1,
    "systemFlags" : 1,
    "userFlags" : 1,
    "totalIndexSize" : 7,
    "indexSizes" : {
        "_id_" : 7
    },
    "ok" : 1
}
Database size:
gechongrepl:PRIMARY> db.stats()
{
    "db" : "test",
    "collections" : 6,
    "objects" : 239035,
    "avgObjSize" : 82.56948145669044,
    "dataSize" : 19736996,
    "storageSize" : 38879232,
    "numExtents" : ,
    "indexes" : 3,
    "indexSize" : 7472864,
    "fileSize" : 67108864,
    "nsSizeMB" : 16,
    "dataFileVersion" : {
        "major" : 4,
        "minor" : 5
    },
    "extentFreeList" : {
        "num" : 0,
        "totalSize" : 0
    },
    "ok" : 1
}
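To see which collection is actually using the space, a small loop over the collection names works well. A minimal sketch (the output format is just illustrative):
gechongrepl:PRIMARY> db.getCollectionNames().forEach(function(name) {
...     var s = db.getCollection(name).stats(1024 * 1024);   // scale the figures to megabytes
...     print(name + ": " + s.storageSize + " MB storage, " + s.totalIndexSize + " MB indexes");
... })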
4: Monitoring with mongostat and mongotop
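Both tools ship with MongoDB and are run from the operating system shell rather than from mongo. A minimal sketch (the port and the 5-second interval are just examples): mongostat prints one line of server-wide counters per interval, while mongotop shows how much time is spent reading and writing each collection.
mongostat --port 27017 5
mongotop --port 27017 5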