Common MongoDB commands and examples (3)

Source: Internet
Author: User
Tags: mongodb add, mongodb commands, mongodb query, mongodb sharding, time in milliseconds


Common commands and advanced commands
 
I. Details of advanced usage of add, delete, modify, and query

Add:
* If the collection c1 does not exist at insertion time, it is created automatically.
* Every inserted record gets a unique auto-generated _id primary key. insert and save differ only when the _id conflicts: save modifies (updates) the existing record, while insert reports an error.

1: _id primary key does not conflict
db.c1.insert({name: "leyangjun"});
db.c1.insert({name: "leyangjun"});   -- the _id values of the inserted records are unique, so there is no primary key conflict
db.c1.save({name: "leyangjun"});     -- same behaviour as insert here; the inserted record still gets a unique _id primary key

2: _id primary key conflicts (update on conflict)
db.c1.insert({_id: 1, name: "leyangjun2"});
db.c1.insert({_id: 1, name: "leyangjun2"});   -- inserting the same _id again returns an error
db.c1.save({_id: 1, name: "leyangjun3"});     -- the save succeeds: on an _id conflict the record is updated, so name changes from leyangjun2 to leyangjun3

Documents can also have flexible shapes (* it is best not to play like this: the consequence is that handling the values in your PHP program becomes troublesome):
db.c1.insert({name: "user1", post: {tit: 1, cnt: 1111}});   -- the value is a sub-JSON document
db.c1.insert({name: "user1", post: [1, 2, 3, 4, 5, 6]});    -- the value is an array; any JavaScript/JSON value works

Insert 10 records in a loop:
for (i = 1; i <= 10; i++) { db.c1.insert({name: "user" + i}); }

Delete:
db.c1.remove({});                 -- delete all; the empty JSON condition {} matches everything
db.c1.remove({name: "user1"});    -- delete only the matching records

Query:
db.c1.find();                     -- same as find({}), query all
db.c1.find({name: "user3"});      -- conditional query

Scenario: a record has multiple columns but only the specified columns should be returned (_id is included by default; a projected column must be 1, an excluded one 0):
db.c1.insert({name: "leyangjun", age: 23, sex: "nan"});
db.c1.find({name: "leyangjun"}, {name: 1});           -- return only the name column of this record
db.c1.find({name: "leyangjun"}, {name: 1, age: 1});   -- return the name and age columns
db.c1.find({name: "leyangjun"}, {name: 1, _id: 0});   -- do not return the default _id column

Conditional expressions for queries:
1): <, <=, >, >= -- $gt is greater than, $lt is less than, $gte is greater than or equal to, $lte is less than or equal to.
Insert 10 records for testing:
for (i = 1; i <= 10; i++) { db.c1.insert({name: "user" + i, age: i}); }
db.c1.find({age: {$gt: 5}});    -- age greater than 5
db.c1.find({age: {$lt: 5}});    -- age less than 5
db.c1.find({age: {$gte: 5}});   -- age greater than or equal to 5
db.c1.find({age: {$lte: 5}});   -- age less than or equal to 5
db.c1.find({age: 5});           -- age equal to 5
db.c1.find({age: {$ne: 5}});    -- age not equal to 5

Count the number of records:
db.c1.count();   or   db.c1.find().count();

Sort:
db.c1.find().sort({age: 1});    -- 1 means ascending
db.c1.find().sort({age: -1});   -- -1 means descending

limit combined with skip for paging:
db.c1.find().limit(3);           -- take 3 records starting from the beginning
db.c1.find().skip(1).limit(5);   -- skip one and take 5, i.e. records 2, 3, 4, 5, 6
db.c1.find().sort({age: -1}).skip(1).limit(5).count(0);   -- count(0) is the default: it ignores the skip/limit written before it and counts every row that matches the query
db.c1.find().sort({age: -1}).skip(1).limit(5).count(1);   -- count(1) counts according to the preceding skip/limit conditions
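Putting skip and limit together gives simple page-by-page navigation. A minimal sketch, assuming the c1 collection above and hypothetical pageSize/pageNo variables:
var pageSize = 5, pageNo = 2;    // hypothetical paging parameters
db.c1.find().sort({age: -1}).skip(pageSize * (pageNo - 1)).limit(pageSize);    // return page 2, i.e. five records
Note that skip still walks over the skipped documents on the server, so for very large collections a range condition on an indexed field is usually the faster way to page.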
2): $all -- the array must contain all of the listed values; mainly used for arrays.
db.c1.insert({name: "user", post: [1, 2, 3, 4, 5, 6]});
db.c1.find({post: {$all: [2, 3]}});   -- find records whose post contains both 2 and 3; if any listed value is missing, the record is not returned

3): $exists -- check whether a field exists.
db.c1.find({age: {$exists: 1}});      -- records that contain the age field

4): $mod -- query by remainder.
db.c1.find({age: {$mod: [2, 1]}});    -- age % 2 == 1, i.e. 1, 3, 5, 7, 9 ...
db.c1.find({age: {$mod: [2, 0]}});    -- age % 2 == 0, i.e. 2, 4, 6, 8, 10 ...

5): $ne -- not equal.
db.c1.find({age: {$ne: 5}});          -- age not equal to 5

6): $in and $nin -- similar to IN and NOT IN in traditional relational databases.
db.c1.find({age: {$in: [1, 3, 5]}});    -- age equal to 1, 3, or 5 (the operand of $in is an array)
db.c1.find({age: {$nin: [1, 3, 5]}});   -- age not equal to any of 1, 3, 5 (the operand of $nin is also an array)

7): $or and $nor (its opposite).
db.c1.find({$or: [{name: "user2"}, {name: "user3"}]});        -- records with name = user2 or name = user3
db.c1.find({$nor: [{name: "user2"}, {name: "user3"}]});       -- filter out the records whose name is user2 or user3
db.c1.find({$or: [{name: "user2"}, {age: 8}, {age: 10}]});    -- records with name = user2, age = 8, or age = 10
db.c1.find({$nor: [{name: "user2"}, {age: 8}, {age: 10}]});   -- records matching none of the conditions: name != user2 and age != 8 and age != 10

8): $size -- match by the number of elements in an array field; operates specifically on arrays.
db.c1.insert({name: "user1", post: [1, 2, 3, 4, 5, 6]});   -- first record
db.c1.insert({name: "user1", post: [7, 8, 9]});            -- second record, a three-element array
db.c1.find({post: {$size: 3}});                            -- finds the second record, because its post array has 3 elements

9): Regular expressions -- same syntax as JavaScript regular expressions.
db.c1.find({name: /user/i});          -- find records whose name contains "user", case-insensitive

10): distinct -- similar to DISTINCT in relational databases.
db.c1.insert({name: "leyangjun"});
db.c1.insert({name: "leyangjun"});
db.c1.distinct("name");               -- return each distinct name value only once

11): $elemMatch -- match elements inside an array of sub-documents.
db.c3.insert({name: "user1", post: [{tit: 1}, {tit: 2}, {tit: 3}]});
db.c3.insert({name: "user2", post: [{tit: "aaa"}, {tit: "bbb"}, {tit: "ccc"}]});
Find the record containing tit = 2:
db.c3.find({"post.tit": 2});                          -- works
db.c3.find({post: {$elemMatch: {tit: 2}}});           -- this form also matches
db.c3.find({post: {$elemMatch: {tit: 2, tit: 3}}});   -- intended to match tit = 2 and tit = 3; note that in a JavaScript object the second tit key overrides the first

12): The cursor concept (rarely used directly).
var x = db.c1.find();
x.hasNext();   -- whether there is another value; returns true or false. If true, the next call fetches the value from the database.
x.next();      -- return one value
x.next(); x.next(); x.next();   -- keep taking values while they exist
x.hasNext();   -- once it returns false there are no more values

13): Null queries (field missing versus field present with the value null).
db.c4.find({age: null});                          -- not recommended: this also matches records that have no age field at all
db.c4.find({age: {$exists: 1, $in: [null]}});     -- first filter on whether age exists, then match age = null
db.c4.find({age: {$type: 10}});                   -- BSON type 10 is null; this is the recommended way to match null
db.c4.find({age: {$type: 10}, name: "user1"});    -- age = null and name = user1
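The cursor returned by find(), described in point 12 above, can also be walked in a loop instead of calling next() by hand. A minimal sketch, assuming the c1 collection from above:
var cur = db.c1.find({age: {$gte: 5}});
while (cur.hasNext()) {              // keep fetching while the cursor still has documents
    var doc = cur.next();
    print(doc.name + " : " + doc.age);
}
The same traversal can be written as cur.forEach(function(doc) { print(doc.name); }); either form exhausts the cursor, so a second pass needs a new find().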
14): $slice -- only for arrays.
db.c3.insert({name: "user1", post: [{tit: 1}, {tit: 2}, {tit: 3}]});
db.c3.find({name: "user1"}, {post: {$slice: 2}});       -- for name = user1, return only the first two post elements (tit: 1 and tit: 2)
db.c3.find({name: "user1"}, {post: {$slice: -2}});      -- return only the last two post elements (tit: 2 and tit: 3)
db.c3.find({name: "user1"}, {post: {$slice: [1, 2]}});  -- skip 1 element and return the next 2 (tit: 2 and tit: 3)

Change (update):
1): update() syntax and the upsert/multi parameters
Syntax: db.collection.update(criteria, objNew, upsert, multi);   -- the last two parameters default to 0.
Parameter description:
criteria: the object used to set the query condition
objNew: the object used to set the update content
upsert: if a matching record already exists, update it; otherwise insert a new record
multi: if multiple records match, update all of them. Note: by default only the first matching record is changed.
Examples:
db.c1.insert({name: "leyangjun", age: 23});
db.c1.update({name: "leyangjun"}, {sex: "nan"});      -- careful: this replaces the whole document, so name and age are removed and only the sex field remains in the record
db.c1.update({name: "user6"}, {name: "user66"}, 1);   -- the third parameter 1 (upsert) means: if user6 exists it is changed to user66; otherwise a new record with name = user66 is inserted
The fourth parameter (multi) can only be used together with modifier ("magic") variables such as $set, which is how you update in batches:
db.c1.update({name: "user6"}, {$set: {name: "user66"}}, 0, 1);   -- update all records with name = user6 to user66

2): $set -- add fields or modify field values in batches (update the field if it exists, create it if it does not).
db.c1.update({name: "user6"}, {$set: {name: "user66"}}, 0, 1);   -- modify the name value in batches
db.c1.update({name: "user10"}, {$set: {age: 10}}, 0, 1);         -- add the age field with value 10 to all records with name = user10

3): $inc -- increment. If the field exists, its value is increased (or decreased, with a negative number you set yourself); if the field does not exist, it is created.
Scenario: during a big promotion every member should get 5 extra points, but some members have the points field and some do not. With $inc the members who lack the field get it added, and the others are simply incremented (a short counter sketch combining $inc with upsert appears after the recap at the end of this section).
db.c1.insert({name: "leyangjun", age: 23, score: 11});
db.c1.insert({name: "leyangjun", age: 23, score: 1});
db.c1.insert({name: "leyangjun", age: 23});             -- no score field yet
db.c1.update({}, {$inc: {score: 10}}, 0, 1);            -- {} matches everyone: all users get 10 points, and records without a score field have it created by $inc
db.c1.update({name: "user1"}, {$inc: {score: 10}});     -- add
db.c1.update({name: "user1"}, {$inc: {score: -10}});    -- subtract
* Both $set and $inc can add fields, but the value given to $inc must be numeric.

4): $unset -- delete a field (the built-in _id field cannot be deleted).
db.c5.update({}, {$unset: {score: 1}}, 0, 1);           -- 1 means true: the score field is removed from all records
db.c5.update({}, {$unset: {score: 1, age: 1}}, 0, 1);   -- remove several fields at once

5): $push -- append an element to an array field (in updates the modifier sits outside the field name, while in queries the operator sits inside).
db.c3.insert({name: "user1", arr: [1, 2, 3]});
db.c3.update({name: "user1"}, {$push: {arr: 4}});   -- append one element to the arr of name = user1; remember that plain $push cannot push multiple values at the same time (you can push an array as a single element, but not several values at once)
6): $pop -- remove the first or last element of an array field.
db.c3.update({name: "user1"}, {$pop: {arr: 1}});    -- 1 means the last element is removed
db.c3.update({name: "user1"}, {$pop: {arr: -1}});   -- -1 means the first element is removed

7): $pushAll -- push multiple values at once.
db.c3.update({name: "user1"}, {$pushAll: {arr: [5, 6]}});   -- push several values in one update

8): $addToSet -- insert the value only if it is not already present (for example, if the array already contains 4, this insert still leaves a single 4).
db.c3.update({name: "user1"}, {$addToSet: {arr: 4}});   -- if the array already contains 4 nothing is inserted; if it does not, 4 is added
$addToSet combined with $each inserts multiple values:
db.c3.update({name: "user1"}, {$addToSet: {arr: {$each: [8, 9]}}});   -- insert several values at once (see the combined sketch after the recap at the end of this section)

9): $pull -- delete a given value from an array; used on arrays.
db.c3.update({name: "user1"}, {$pull: {arr: 5}});   -- delete the value 5 from the array

10): $pullAll -- delete several values at once.
db.c3.update({name: "user1"}, {$pullAll: {arr: [2, 4]}});   -- delete the values 2 and 4

11): $rename -- rename a field.
db.c3.update({name: "user1"}, {$rename: {arr: "post"}});   or   db.c3.update({name: "user1"}, {$rename: {"arr": "post"}});

12): The special positional operator $.
db.c3.insert({name: "user1", arr: [{tit: "php"}, {tit: "java"}, {tit: "linux"}]});
Change the element whose tit is linux:
db.c3.update({"arr.tit": "linux"}, {$set: {"arr.$.cnt": "linux is very good"}});

* Conclusion, two scenarios:
Scenario 1:
db.c1.insert({name: "leyangjun", age: 23});
var x = db.c1.find({name: "user1"});
x    -- press Enter: the matching value is printed
x    -- press Enter again: nothing is printed, because find() itself returns a cursor that has already been exhausted

Scenario 2:
var x = db.c1.findOne({name: "user1"});
x               -- press Enter: x is a JSON document that you can see in the mongodb client output
x.sex = "nan";  -- you can add a field, but a field added here is not yet written to the stored record
db.c1.save(x);  -- save x back, which writes the added field to the database

Quick recap:
db.c1.insert({name: "leyangjun", age: 23});                   -- add
db.c1.remove({});                                             -- delete (delete all)
db.c1.remove({"name": "leyangjun"});                          -- delete the records whose name equals leyangjun
db.c1.update({name: "leyangjun"}, {name: "lekey"});           -- modify; if the record had an age value it is removed, leaving only the name value
db.c1.update({name: "leyangjun"}, {$set: {name: "lekey"}});   -- modify; this form keeps the original values
db.c1.update({name: "lekey"}, {$set: {sex: "nan"}});          -- add a field: the record whose name equals lekey gets a sex field
db.c1.find();                                                 -- query all
db.c1.find({"name": "leyangjun"});                            -- conditional query
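The upsert parameter combines naturally with $inc for counters: if the document is missing it is created, otherwise the field is incremented. A minimal sketch, assuming a hypothetical counters collection:
db.counters.update({_id: "pageviews"}, {$inc: {n: 1}}, true);   // upsert = true: the first call creates {_id: "pageviews", n: 1}
db.counters.update({_id: "pageviews"}, {$inc: {n: 1}}, true);   // later calls simply increment n
db.counters.findOne({_id: "pageviews"});                        // {_id: "pageviews", n: 2}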
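The array modifiers above ($addToSet with $each, $pull) are often combined to maintain a de-duplicated list. A minimal sketch, assuming a hypothetical tags array field:
db.c3.insert({name: "user9", tags: ["php"]});                                   // hypothetical starting document
db.c3.update({name: "user9"}, {$addToSet: {tags: {$each: ["java", "php"]}}});   // "java" is added, the duplicate "php" is not
db.c3.update({name: "user9"}, {$pull: {tags: "php"}});                          // remove "php"
db.c3.findOne({name: "user9"});                                                 // tags is now ["java"]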
***** Differences between a common collection and a fixed (capped) collection *****
1: Common collection: its space grows automatically as your JSON objects grow. On a 32-bit machine a collection can reach about 483.5 MB; on a 64-bit machine it is limited only by the file size the system allows.
2: Capped collection (fixed collection): described below.

Small commands to remember:
show dbs;            -- show all databases
db;                  -- show the current database
show tables;         -- display all collections in the current database (that is, display all tables)
db.c5.drop();        -- delete the c5 collection
db.dropDatabase();   -- delete the current database
Note: mongodb creates the database and the collection (table) for you by default when you insert into them.
db.createCollection("c1");   -- manually create a c1 collection (table)
db.c1.find();
db.c1.drop();

Brief introduction: capped collections are fixed-size collections with excellent performance. They age out entries with an LRU (least recently used) rule based on insertion order, and the insertion order of objects in the collection is maintained automatically. The size must be specified in advance when the collection is created; once the space is used up, newly added objects replace the oldest objects in the collection, so the newest data is always kept.

Features: data can be inserted and updated, but an update must not grow a document beyond the reserved space, otherwise the update fails. Deleting individual documents is not allowed, but you can call drop() to delete all rows in the collection; after the drop you need to recreate the collection explicitly. On a 32-bit machine the maximum size of a capped collection is about 483.5 MB; on a 64-bit machine it is limited only by the system file-size limit.

Attributes and usage:
Attribute 1: inserts into a fixed collection are fast
Attribute 2: queries that output in insertion order are fast
Attribute 3: the newest data can keep being inserted while the oldest data is eliminated
Usage 1: storing log information
Usage 2: caching a small number of documents

Creating a fixed collection, method 1: the createCollection command -- size is set to 10 MB here; once the data exceeds it, the oldest data is deleted first, and so on:
db.createCollection("my_collection", {capped: true, size: 10000000, max: 5});
This creates a fixed collection named my_collection with a size of 10000000 bytes. You can also limit the number of documents by adding a max attribute (for example max: 100). Note: the size must always be specified; when the collection is full, elimination is driven by the capacity limit.
After you create the collection, the corresponding "_id_" index entry is created for you:
db.system.indexes.find();   -- shows the primary key index on _id that was automatically created for the collection you just created
db.c1.stats();              -- check the status of collection c1 (size and so on); note that if the capped attribute in the output is 1, the collection is a fixed collection

Creating a fixed collection, method 2: the runCommand command (convert an existing collection):
db.runCommand({convertToCapped: "c2", size: 10000000, max: 3});
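To see the age-out behaviour described above in action, a minimal sketch with a small hypothetical capped collection (the name and limits are illustrative only):
db.createCollection("log_demo", {capped: true, size: 10000, max: 3});   // keep at most 3 documents
for (i = 1; i <= 5; i++) { db.log_demo.insert({n: i}); }
db.log_demo.find();        // only the 3 newest documents remain: n = 3, 4, 5; the oldest were aged out
db.log_demo.isCapped();    // true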
3: GridFS (large file upload and download; used to store images, videos, and so on)
GridFS is mongodb's mechanism for storing large binary files. The reasons for using GridFS:
-> storing large files, for example video and HD images
-> using GridFS simplifies the requirements
-> GridFS directly reuses the established replication and sharding mechanisms, so fault recovery and scaling are easy
-> GridFS avoids the system problems caused by user-uploaded files
-> GridFS does not produce disk fragmentation

GridFS uses two collections to store data:
files: contains the metadata objects
chunks: contains the binary blocks and other related information
* To allow multiple GridFS stores in a single database, files and chunks are named with a prefix. By default the prefix is fs, so a default GridFS store contains fs.files and fs.chunks. The prefix can be changed from the third-party language drivers.

Using GridFS with mongofiles (the command-line tool shipped with mongodb for operating GridFS). Three main commands: put (store), get (download), list (list files). Example:
./mongofiles -h                    -- view the supported basic parameters
./mongofiles list                  -- view all files stored in GridFS
Now simulate a file and put it in:
tar czf sniff.tar.gz sniff         -- compress the sniff files into a package
./mongofiles put sniff.tar.gz      -- upload the package; afterwards, go into the shell with ./mongo
show tables;                       -- you will find two more collections: fs.files and fs.chunks
db.fs.files.find();                -- view the metadata; the fields mean:
  filename: name of the stored file
  chunkSize: size of each chunk
  uploadDate: time the file entered the store
  md5: md5 checksum of the file
  length: file length, in bytes
fs.files stores the basic metadata; the real content is stored in fs.chunks:
db.fs.chunks.find();               -- the real file data lives here
exit;
./mongofiles list                  -- you can see the package we put in
./mongofiles get sniff.tar.gz      -- download the file into the directory where you run the command
./mongofiles delete sniff.tar.gz   -- delete the file
* Note: after you delete a file with mongofiles, nothing related to it remains in the fs.files and fs.chunks collections.

4: Performance
1: Index management
mongodb provides rich index support, and index information is stored in system.indexes. The index that mongodb creates on the _id field is special and cannot be deleted; capped collections are the exception.
1: Creating an index
1) Normal index:
for (i = 1; i <= 10; i++) { db.c1.insert({name: "user" + i, age: i}); }
db.c1.find({name: "user5"}).explain();   -- explain analyses a statement, like EXPLAIN in MySQL; without an index the whole collection is scanned
1 means ascending order (the default), -1 means descending:
db.c1.ensureIndex({name: 1});
db.c1.ensureIndex({name: 1}, {background: true});   -- if building the index over existing data is time-consuming, the second parameter makes it run in the background
db.c1.getIndexKeys();    -- view brief information about all indexed fields
db.c1.getIndexes();      -- view the detailed index information of the collection
db.c1.find({name: "user5"}).explain();   -- after the index is created, you will see that only one row is scanned instead of the whole collection
2) Unique index:
db.c1.ensureIndex({age: 1}, {unique: 1});   -- create a unique index on the age field
db.c1.insert({name: "user11", age: 10});    -- you will find this record cannot be inserted, because age is now a unique index
2: Viewing indexes
db.c1.getIndexKeys();    -- view brief information about all indexed fields
db.c1.getIndexes();      -- view the detailed index information of the collection
3: Deleting indexes
db.c1.dropIndexes();          -- delete all indexes; the _id index cannot be deleted
db.c1.dropIndex({name: 1});   -- delete the specified index on name
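ensureIndex also accepts several fields at once to build a compound index, which can serve a query that filters on name and sorts on age with a single index. A minimal sketch, assuming the c1 collection from above:
db.c1.ensureIndex({name: 1, age: -1});                    // compound index: name ascending, age descending
db.c1.find({name: "user5"}).sort({age: -1}).explain();    // the plan should show this index being used instead of a full scan
db.c1.dropIndex({name: 1, age: -1});                      // drop it by the same key pattern when it is no longer needed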
2: Performance optimization
explain execution plan (look at the number of affected rows): mongodb provides the explain command to let us see how the system handles a query request. With explain we can observe how the system uses indexes to speed up retrieval and optimize the indexes accordingly.
profile (similar to the MySQL slow query log): disabled by default, with a default slow threshold of 100 ms.
db.getProfilingLevel();    -- returns 0 when the slow query log is not enabled
db.setProfilingLevel(1);   -- 1 means slow operations are recorded (slower than 100 ms by default); 2 means all commands are logged
Method 1: pass a second parameter to setProfilingLevel to set the slow threshold in milliseconds. Method 2: set slowms when starting mongodb.

mongodb optimization suggestions:
1: create indexes
2: limit the number of returned results
3: query only the fields you need, not all fields
4: use capped collections -- capped collections are more efficient than normal collections
5: use the profiling slow query log

3: Performance monitoring (two built-in tools)
1: mongosniff -- records the communication with the server. Open two windows:
./mongosniff --source NET lo   -- run in window A
./mongo                        -- window B connects to mongodb; the operations performed in B (log in, exit, and so on) are recorded in window A
2: mongostat -- monitoring (who accesses, queries, deletes, ...):
./mongostat                    -- run in window A; the display refreshes every second
./mongo                        -- window B connects to mongodb; the insert, delete, and query operations executed in B show up in window A
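Once the profiler described above is enabled, the slow operations it records can be read back from the system.profile collection. A minimal sketch (the 200 ms threshold is only an example value):
db.setProfilingLevel(1, 200);                       // level 1, treating anything slower than 200 ms as slow
db.system.profile.find().sort({ts: -1}).limit(5);   // the five most recent profiled operations
db.getProfilingLevel();                             // confirm the current level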
More advanced MongoDB knowledge will be added in a later update.


Who has an advanced MongoDB video tutorial?

Are you learning this software?

Who has learned the MongoDB video tutorial?

I recommend a course called "In-depth Development of MongoDB Applications (Basics, Development Guide, System Management, Cluster and System Architecture)" with 22 lessons. It focuses on the common and advanced features of MongoDB and analyzes MongoDB comprehensively and in depth from the perspective of actual development. For details, contact me at 1511065175.

MongoDB basics:

Lecture 1: NoSQL and MongoDB (background of the rise of NoSQL, introduction to various NoSQL databases, and features of MongoDB)
Lecture 2: MongoDB installation and configuration (MongoDB installation and use, basic system management skills, and web console usage)
Lecture 3: MongoDB shell (introduces the usage and commands of MongoDB shell, Backup recovery, data import and export)
Lecture 4: Concepts of MongoDB documents, collections, and databases (Introduction to documents, collections, databases, and other basic concepts, database file storage methods, and command Rules)
Lecture 5: Introduction to Mongodb data types (details on MongoDB supported data types)
MongoDB Development Guide:
Lecture 6: MongoDB add, delete, and modify documents (describes commands for adding, deleting, and modifying documents in MongoDB, insertion principle, batch modification, and modifier usage)
Lecture 7: MongoDB query syntax 1 (describes in detail the powerful query functions of MongoDB, and queries with operators such as $in, $or, $ne, $lt, and $gt)
Lecture 8: MongoDB query syntax 2 (describes in detail the powerful query functions of MongoDB, such as regular expression query, array query, and embedded document query)
Lecture 9: MongoDB query syntax 3 (detailed description of MongoDB where queries, cursor operations, paging queries, code examples, and cursor details)
Lecture 10: MongoDB index (detailed description of MongoDB index principles, management, index query and analysis tools, and mandatory index usage)
Lecture 11: MongoDB aggregation statistics (describes the MongoDB aggregation statistics functions)
Lecture 12: MongoDB advanced guide - how commands work (describes how database commands work)
Lecture 13: MongoDB advanced guide - fixed collections and GridFS (introduces fixed collection and GridFS principles and applications)
Lecture 14: MongoDB advanced guide - server-side scripts (introduces server-side scripts, db.eval, and JavaScript storage)
MongoDB System Management:
Lecture 15: MongoDB advanced system management skills 1 (system monitoring)
Lecture 16: MongoDB advanced system management skills 2 (database security, backup and recovery, and data restoration)
MongoDB cluster and system architecture:
Lecture 17: MongoDB replication function (describes in detail how to create, manage, and maintain MongoDB master-slave replication)
Lecture 18: MongoDB replica set function (describes in detail how to create, manage, and maintain MongoDB replica sets)
Lecture 19: MongoDB sharding function (details on MongoDB sharding creation, management, and maintenance)
Lecture 20: MongoDB internals (in-depth analysis of the MongoDB system architecture and data file structure principles)
MongoDB application case:
Lecture 21: Developing a general account management system based on MongoDB, part 1
Lecture 22: Developing a general account management system based on MongoDB, part 2
