"Database Commands"
"How the Command Works"
Commands in MongoDB are actually implemented as a special type of query that gets performed on the $cmd collection. runCommand just takes a command document and performs the equivalent query, so
> db.runCommand({"drop" : "test"})
The drop call above is actually equivalent to:
db.$cmd.findOne({"drop" : "test"})
When the MongoDB server gets a query on the $cmd collection, it kicks off a set of special logic to handle it, rather than handing it to the normal query code. Almost all MongoDB drivers provide a helper method like runCommand for running commands, but commands can always be run with a simple query if need be.
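The dispatch described above can be pictured with a small model. The Python sketch below is purely illustrative (the handler functions, the HANDLERS registry, and run_command are all invented for this example; this is not MongoDB's actual implementation): the first key of the query document names the command, and a query on $cmd is routed to special-case logic instead of the normal query path.

```python
# Toy model of command dispatch: a "command" is a one-document query
# whose first key names the command. All names here are illustrative.

def drop_handler(arg):
    return {"ok": True, "dropped": arg}

def ping_handler(arg):
    return {"ok": True}

# Registry of special-cased command handlers, keyed by command name.
HANDLERS = {"drop": drop_handler, "ping": ping_handler}

def find_one_on_cmd(query):
    """Simulates a findOne against the virtual $cmd collection:
    the first key of the query selects the handler."""
    name = next(iter(query))            # first key = command name
    handler = HANDLERS.get(name)
    if handler is None:
        return {"ok": False, "errmsg": "no such cmd"}
    return handler(query[name])

def run_command(cmd):
    """runCommand is just sugar for the equivalent $cmd query."""
    return find_one_on_cmd(cmd)

print(run_command({"drop": "test"}))    # {'ok': True, 'dropped': 'test'}
```

This is why a driver without a runCommand helper can still run any command: it only needs the ability to issue an ordinary findOne.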
Some commands require administrator access and must be run on the admin database. If such a command is run on any other database, an "access denied" error is returned.
"Command Reference"
There are two ways to get an up-to-date list of all commands:
• Run db.listCommands() in the shell, or run the equivalent listCommands command from a driver.
• Browse the administrative interface at http://localhost:28017/_commands
The most frequently used commands in MongoDB:
· buildInfo
{"buildInfo" : 1}
Admin-only command. Returns the MongoDB server's version number and information about the host's operating system.
· collStats
{"collStats" : collection}
Returns statistics for the specified collection, including the data size, the allocated storage space, and the size of the indexes.
· distinct
{"distinct" : collection, "key" : key, "query" : query}
Lists all of the distinct values for the given key among documents in the specified collection that match the query.
· drop
{"drop" : collection}
Removes all data for the collection.
· dropDatabase
{"dropDatabase" : 1}
Removes all data for the current database.
· dropIndexes
{"dropIndexes" : collection, "index" : name}
Deletes the index named name on the collection, or all of its indexes if name is "*".
· findAndModify
Updates a document and returns it as it was either before or after the update.
· getLastError
{"getLastError" : 1}
Returns the error message or other status information for the last operation performed on this connection.
· isMaster
{"isMaster" : 1}
Checks whether this server is a master server or a slave server.
· listCommands
{"listCommands" : 1}
Returns all commands that can be run on the server, along with related information.
· listDatabases
{"listDatabases" : 1}
Admin-only command. Lists all databases on the server.
· ping
{"ping" : 1}
Checks that the server link is alive. This command returns immediately even if the server is locked.
· renameCollection
{"renameCollection" : a, "to" : b}
Renames collection a to b, where both a and b must be full collection namespaces (for example, "foo.bar" for the bar collection in the foo database).
· repairDatabase
{"repairDatabase" : 1}
Repairs and compacts the current database, which can be time-consuming.
· serverStatus
{"serverStatus" : 1}
Returns administrative statistics for this server.
"Capped Collections"--fixed-size collections
Capped collections are created in advance and are fixed in size.
A capped collection behaves like a circular queue: if there is no space left, the oldest documents are deleted to free up room for new ones. This means that as new documents are inserted into a capped collection, the oldest documents are automatically aged out.
Some operations do not apply to capped collections: documents cannot be deleted (aside from the automatic age-out just mentioned), and updates must not cause a document to move (which usually means the update cannot increase its size).
Because of these two restrictions, documents in a capped collection are guaranteed to be stored in insertion order, and there is no need to maintain a free list of space from removed documents.
Capped collections have no indexes by default, not even on "_id".
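The circular-queue behavior described above can be sketched in a few lines of Python. This is an illustrative model only, using collections.deque with maxlen as a stand-in for a capped collection's document-count limit:

```python
from collections import deque

# A "capped collection" limited to 3 documents: inserting a 4th
# silently retires the oldest one, and iteration order is
# always insertion order.
capped = deque(maxlen=3)
for doc in [{"n": 1}, {"n": 2}, {"n": 3}, {"n": 4}]:
    capped.append(doc)

print(list(capped))   # [{'n': 2}, {'n': 3}, {'n': 4}]
```

Note how the insert itself never has to allocate space or search a free list, which is exactly why capped-collection inserts are fast.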
"Properties and Usage"
① Inserts into a capped collection are extremely fast, because no extra space ever needs to be allocated.
② Queries that return results in insertion order are extremely fast.
③ A capped collection automatically ages out the oldest data as new data is inserted.
Fast inserts, fast insertion-order queries, automatic age-out: capped collections are especially well suited to applications such as logging.
In fact, the reason capped collections were designed into MongoDB was to store the oplog, the internal replication log.
"Create a fixed collection"
Unlike normal collections, capped collections must be explicitly created before they are used. They are created with the create command; in the shell, use createCollection:
> db.createCollection("my_collection", {capped: true, size: 100000});
{ "ok" : true }
The command above creates a capped collection named my_collection with a size of 100000 bytes. createCollection has other options as well: in addition to the total size, you can also cap the number of documents:
> db.createCollection("my_collection", {capped: true, size: 100000, max: 100});
{ "ok" : true }
Note: when capping the number of documents, you must specify the size as well. Age-out is based on the document count unless the collection runs out of space first; once the space is full, age-out is based on size, as with any other capped collection.
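The interaction between the size and max limits can be modeled in a few lines of Python. The CappedModel class below is an invented illustration, not MongoDB internals: eviction always enforces the byte cap, with the document-count cap applied on top of it:

```python
# Illustrative model of a capped collection with both a byte-size cap
# and an optional document-count cap.
class CappedModel:
    def __init__(self, size, max_docs=None):
        self.size = size
        self.max_docs = max_docs
        self.docs = []          # (doc, nbytes) pairs, oldest first

    def insert(self, doc, nbytes):
        self.docs.append((doc, nbytes))
        # Retire oldest documents until both limits are satisfied.
        while (sum(n for _, n in self.docs) > self.size or
               (self.max_docs is not None and len(self.docs) > self.max_docs)):
            self.docs.pop(0)

c = CappedModel(size=100, max_docs=3)
for i in range(5):
    c.insert({"n": i}, nbytes=10)
print([d["n"] for d, _ in c.docs])   # count cap bites first: [2, 3, 4]

c2 = CappedModel(size=25, max_docs=100)
for i in range(5):
    c2.insert({"n": i}, nbytes=10)
print([d["n"] for d, _ in c2.docs])  # size cap bites: [3, 4]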
You can also create a capped collection by converting an existing normal collection, using the convertToCapped command. The following example converts the test collection into a capped collection of size 10000:
> db.runCommand({convertToCapped: "test", size: 10000});
{ "ok" : true }
"Natural Sort"
Natural sort returns documents in insertion order, or in reverse insertion order:
> db.my_collection.find().sort({"$natural" : -1})
Using {"$natural" : 1} means the same thing as the default order.
"Tailable Cursors"
A tailable cursor is a special kind of persistent cursor that is not destroyed when its results are exhausted. Such cursors were inspired by the tail -f command and, similarly, will continue fetching results for as long as possible. Because the cursor is not destroyed when it runs out of results, it can fetch and return new documents as they are added to the collection. Tailable cursors can be used only on capped collections.
The mongo shell does not support tailable cursors, but here is an example in PHP:
$cursor = $collection->find()->tailable();
while (true) {
    if (!$cursor->hasNext()) {
        if ($cursor->dead()) {
            break;
        }
        sleep(1);
    }
    else {
        while ($cursor->hasNext()) {
            do_stuff($cursor->getNext());
        }
    }
}
The cursor either processes results or waits for more results, until it dies.
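The polling loop the PHP example implements can be sketched language-agnostically. The Python generator below is an invented stand-in for a tailable cursor (a real cursor talks to the server and sleeps between polls; here the "collection" is just a list):

```python
# Toy stand-in for a tailable cursor: poll a growing list, yielding
# new documents as they appear, and stop once the cursor is dead.
def tail(collection, is_dead):
    pos = 0
    while True:
        if pos < len(collection):
            while pos < len(collection):
                yield collection[pos]   # "do_stuff" on each new doc
                pos += 1
        elif is_dead():
            break                       # cursor died: give up
        else:
            pass                        # real code would sleep(1) here

log = [{"msg": "a"}, {"msg": "b"}]
seen = [doc["msg"] for doc in tail(log, is_dead=lambda: True)]
print(seen)   # ['a', 'b']
```

The key property is the same as the PHP version: the loop never terminates merely because there are no results yet, only when the cursor is dead.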
"GridFS: Storing Files"
"Getting Started with GridFS: mongofiles"
mongofiles comes with the MongoDB distribution and can be used to upload, download, list, search for, or delete files in GridFS.
Run mongofiles --help to see the available options.
Here is how to use mongofiles to upload a file from the filesystem to GridFS, list all the files in GridFS, and download the file that was just uploaded:
$ echo "Hello, world" > foo.txt
$ ./mongofiles put foo.txt
connected to: 127.0.0.1
added file: { _id: ObjectId('4c0d2a6c3052c25545139b88'),
    filename: "foo.txt", length: 13, chunkSize: 262144,
    uploadDate: new Date(1275931244818),
    md5: "a7966bf58e23583c9a5a4059383ff850" }
done!
$ ./mongofiles list
connected to: 127.0.0.1
foo.txt 13
$ rm foo.txt
$ ./mongofiles get foo.txt
$ cat foo.txt
Hello, world
The example above uses the three basic mongofiles operations: put, list, and get. put adds a file from the filesystem to GridFS; list lists all files that have been added to GridFS; and get is the inverse of put, writing a file from GridFS to the filesystem. mongofiles also supports two other operations: search finds files in GridFS by filename, and delete removes a file from GridFS.
"Working with GridFS from the MongoDB Drivers"
For example, using PyMongo, MongoDB's Python driver, you can perform the same series of operations as with mongofiles above:
>>> from pymongo import Connection
>>> import gridfs
>>> db = Connection().test
>>> fs = gridfs.GridFS(db)
>>> file_id = fs.put("Hello, world", filename="foo.txt")
>>> fs.list()
[u'foo.txt']
>>> fs.get(file_id).read()
'Hello, world'
"Internal principle"
One of the basic ideas behind GridFS is that large files can be split into chunks, each stored as a separate document, which makes it possible to store large files. Because MongoDB supports storing binary data within documents, the storage overhead per chunk is kept to a minimum. In addition to the chunks of the file itself, a single separate document stores the file's chunking information and metadata.
The chunks for GridFS are stored in their own collection. By default, chunks use the fs.chunks collection, but this can be overridden if necessary. The structure of the documents within the chunks collection is quite simple:
{
    "_id" : ObjectId("..."),
    "n" : 0,
    "data" : BinData("..."),
    "files_id" : ObjectId("...")
}
Like any other MongoDB document, it has its own unique "_id".
"files_id" is the "_id" of the file document that contains the metadata for the file this chunk belongs to.
"n" is the chunk number: this chunk's sequential position in the original file.
"data" contains the binary data that makes up this chunk of the file.
The metadata for each file is kept in a separate collection, which defaults to fs.files. Each document in the files collection represents a single file in GridFS and can also contain any custom metadata that should be associated with that file.
In addition to any user-defined keys, the GridFS specification defines a number of keys:
_id
A unique id for the file; this is what is stored in each chunk as the value of the "files_id" key.
length
The total number of bytes making up the content of the file.
chunkSize
The size of each chunk, in bytes. This defaults to 256K but can be adjusted if needed.
uploadDate
A timestamp for when the file was stored in GridFS.
md5
An MD5 checksum of the file's contents, generated on the server side.
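The chunking scheme these keys describe can be sketched in pure Python. The helper names below are invented, but the field names ("files_id", "n", "data", "length", "chunkSize", "md5") follow the GridFS layout just described:

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # GridFS default chunk size (256K)

def split_into_chunks(data, files_id, chunk_size=CHUNK_SIZE):
    """Build fs.chunks-style documents for a byte string."""
    return [
        {"files_id": files_id, "n": i, "data": data[off:off + chunk_size]}
        for i, off in enumerate(range(0, len(data), chunk_size))
    ]

def file_doc(data, files_id, filename, chunk_size=CHUNK_SIZE):
    """Build the fs.files-style metadata document."""
    return {
        "_id": files_id,
        "filename": filename,
        "length": len(data),
        "chunkSize": chunk_size,
        "md5": hashlib.md5(data).hexdigest(),
    }

def reassemble(chunks):
    """Concatenate chunks in 'n' order to recover the file."""
    return b"".join(c["data"] for c in sorted(chunks, key=lambda c: c["n"]))

data = b"x" * 600_000                      # ~586K -> 3 chunks
chunks = split_into_chunks(data, files_id=1)
meta = file_doc(data, files_id=1, filename="foo.txt")
print(len(chunks), meta["length"])         # 3 600000
assert reassemble(chunks) == data
```

Storing each chunk as an ordinary document is what lets GridFS lean on normal MongoDB features; nothing here is special beyond the naming conventions.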
Once you understand the GridFS specification, it is easy to implement features that your driver might not provide. For example, you can use the distinct command to get a list of the unique filenames stored in GridFS:
> db.fs.files.distinct("filename")
[ "foo.txt" ]
"Server-side scripting"
JavaScript can be executed on the server with the db.eval function. JavaScript scripts can also be stored in the database and then called from other database commands.
"db.eval"
With db.eval, you can execute arbitrary JavaScript on the MongoDB server.
The function first sends the given JavaScript string to MongoDB (where it is executed) and then returns the result.
db.eval can be used to simulate multidocument transactions: db.eval locks the database, executes the JavaScript, and then unlocks it.
There are two options for sending code: either wrapped in a function or not. The following two lines are equivalent:
> db.eval("return 1;")
1
> db.eval("function() { return 1; }")
1
Wrapping the code in a function is required only when arguments are being passed. Arguments are passed via db.eval's second parameter, written as an array. For example, to pass a username to a function:
> db.eval("function(u) { print('Hello, ' + u + '!'); }", [username])
Multiple parameters can be passed if needed. For example, to sum three numbers:
> db.eval("function(x, y, z) { return x + y + z; }", [num1, num2, num3])
num1 maps to x, num2 to y, and num3 to z.
One way to debug db.eval is to write debug information into the database log, which can be done through the print function:
> db.eval("print('Hello, world');");
"Store JavaScript"
Every MongoDB database has a special collection called system.js for storing JavaScript variables. These variables can be used in any of MongoDB's JavaScript contexts, including "$where" clauses, db.eval calls, and MapReduce jobs. Variables are added to system.js with an insert:
> db.system.js.insert({"_id" : "x", "value" : 1})
> db.system.js.insert({"_id" : "y", "value" : 2})
> db.system.js.insert({"_id" : "z", "value" : 3})
The example above defines x, y, and z globally. To sum them:
> db.eval("return x+y+z;")
6
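The way stored variables enter the execution scope can be modeled in Python: think of system.js as a list of documents whose "_id"s become variable names. The eval_with_stored helper below is an invented stand-in for db.eval, illustrative only:

```python
# Stand-in for system.js: each stored document's "_id" becomes a
# variable name and its "value" the variable's value.
system_js = [
    {"_id": "x", "value": 1},
    {"_id": "y", "value": 2},
    {"_id": "z", "value": 3},
]

def eval_with_stored(expr):
    """Evaluate an expression with the stored variables in scope,
    roughly how stored values enter db.eval's JavaScript context."""
    scope = {doc["_id"]: doc["value"] for doc in system_js}
    return eval(expr, {"__builtins__": {}}, scope)

print(eval_with_stored("x + y + z"))   # 6
```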
In addition to simple values, system.js can also be used to store JavaScript code, which makes it easy to define custom utilities. For example, to write a logging function in JavaScript, you can store it in system.js:
> db.system.js.insert({"_id" : "log", "value" :
... function(msg, level) {
...     var levels = ["DEBUG", "WARN", "ERROR", "FATAL"];
...     level = level ? level : 0; // check if level is defined
...     var now = new Date();
...     print(now + " " + levels[level] + msg);
... }})
You can now call this function in any JavaScript program:
> db.eval("x = 1; log('x is '+x); x = 2; log('x is greater than 1', 1);");
The database log will contain something like the following:
Fri June 11:12:39 GMT-0400 (EST) DEBUG x is 1
Fri June 11:12:40 GMT-0400 (EST) WARN x is greater than 1
The downside of stored JavaScript is that the code is kept separate from regular source control, and it can obscure the JavaScript being sent from the client.
The best use of stored JavaScript is for a JavaScript function that is used in multiple places in a program (and perhaps in different programs, or in code written in different languages). Keeping such functions in a central location means they can be updated in one place rather than at every call site. Stored JavaScript can also be used when the JavaScript code is long and invoked frequently, since storing it once can save a lot of network transfer time.
"Security"
Suppose we want to print "Hello, username!" to a user whose name is stored in a variable called username. The program might be written as follows:
> func = "function() { print('Hello, ' + username + '!'); }"
If username is user-defined, it could be the string "'); db.dropDatabase(); print('", which would turn the code into this:
> func = "function() { print('Hello, '); db.dropDatabase(); print('!'); }"
The entire database would be wiped out!
To avoid this, the value must be passed in through a scope. In PHP, for example, this would be written as:
$func = new MongoCode("function() { print('Hello, ' + username + '!'); }",
    array("username" => $username));
Now the code can safely print:
Hello, '); db.dropDatabase(); print('!
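The difference between splicing user input into code and passing it as scoped data can be demonstrated without a database. The following Python sketch is illustrative only; it manipulates the code as strings to show why the scope mechanism matters:

```python
# User-controlled input crafted to break out of the string literal.
username = "'); db.dropDatabase(); print('"

# Unsafe: splicing the value into the code string changes the code.
unsafe = "function() { print('Hello, " + username + "!'); }"
assert "db.dropDatabase()" in unsafe          # injected text is now code

# Safe: the code string stays fixed; the value travels separately as
# data in a scope (the way MongoCode's scope argument works).
safe_code = "function() { print('Hello, ' + username + '!'); }"
scope = {"username": username}
assert "db.dropDatabase()" not in safe_code   # code is unchanged
print(scope["username"])                      # value stays inert data
```

The same principle underlies parameterized SQL queries: keep code and data in separate channels.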
"Database References"
Database references are also known as DBRefs. A DBRef is like a URL: it uniquely identifies a reference to a document. Just as a URL lets a web page be loaded automatically by following a link, a DBRef lets the referenced document be loaded automatically.
"What Is a DBRef?"
A DBRef is an embedded document, just like any other embedded document in MongoDB. A DBRef, however, has required keys. A simple example looks like the following:
{"$ref" : collection, "$id" : id_value}
A DBRef references a single collection, with an id_value that identifies the unique document within that collection by its "_id". These two pieces of information allow a DBRef to uniquely identify any document within a MongoDB database. To reference a document in a different database, DBRefs support an optional key, "$db", which can be used like this:
{"$ref" : collection, "$id" : id_value, "$db" : database}
Note: the key order in a DBRef cannot be changed: the first key must be "$ref", followed by "$id", and then (optionally) "$db".
"Example Schema"
Consider an example that uses DBRefs to reference documents across collections.
The example consists of two collections, users and notes. Users can create notes, and a note can reference a user or another note. Each user document has a unique username as its "_id" and a free-form "display_name":
{"_id" : "mike", "display_name" : "Mike D"}
{"_id" : "kristina", "display_name" : "Kristina C"}
The notes collection is a little more complex. Each note has a unique "_id". Normally this "_id" would probably be an ObjectId, but integers are used here to keep the example concise. Each note also has an "author", some "text", and an optional "references" list pointing to other notes or users:
{"_id" : 5, "author" : "mike", "text" : "MongoDB is fun!"}
{"_id" : 20, "author" : "kristina", "text" : "... and DBRefs are easy, too",
    "references" : [{"$ref" : "users", "$id" : "mike"}, {"$ref" : "notes", "$id" : 5}]}
The second note contains references to other documents, each stored as a DBRef. The application can use these DBRefs to fetch the documents for the user "mike" and the note "MongoDB is fun!", both of which are associated with kristina's note. Dereferencing is easy to implement: the value of "$ref" gives the collection to query, and the value of "$id" is used to look up the document by "_id":
> var note = db.notes.findOne({"_id" : 20});
> note.references.forEach(function(ref) {
...     printjson(db[ref.$ref].findOne({"_id" : ref.$id}));
... });
{ "_id" : "mike", "display_name" : "Mike D" }
{ "_id" : 5, "author" : "mike", "text" : "MongoDB is fun!" }
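The dereferencing loop above can be modeled in pure Python. The nested-dict "database" below is an invented stand-in; deref implements the same rule: query the "$ref" collection for the document whose "_id" equals "$id":

```python
# In-memory stand-in for a database: collection name -> docs by _id.
db = {
    "users": {"mike": {"_id": "mike", "display_name": "Mike D"}},
    "notes": {5: {"_id": 5, "author": "mike", "text": "MongoDB is fun!"}},
}

def deref(ref):
    """Follow a DBRef: look up $id in the $ref collection."""
    return db[ref["$ref"]].get(ref["$id"])

note_refs = [{"$ref": "users", "$id": "mike"}, {"$ref": "notes", "$id": 5}]
for ref in note_refs:
    print(deref(ref))
```

Because the DBRef carries the collection name itself, the same loop works no matter which collection each reference points into.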
"Driver Support for DBRefs"
Not all drivers treat DBRefs as plain embedded documents. Some provide a special type for DBRefs that is automatically converted to and from ordinary documents. This is mainly a convenience for developers, as it hides some of the details. For example, the example above can be represented with PyMongo's DBRef type:
>>> note = {"_id": 20, "author": "kristina",
...         "text": "... and DBRefs are easy, too",
...         "references": [DBRef("users", "mike"), DBRef("notes", 5)]}
When the note is saved, the DBRef instances are automatically converted to the equivalent embedded documents. When it is returned from a query, the reverse happens automatically and DBRef instances are returned.
Some drivers also add other helpers for working with DBRefs, such as methods for dereferencing them, and some even provide mechanisms for automatically dereferencing DBRefs contained in query results. These helpers vary by driver; consult your driver's documentation for up-to-date information.
"When Should I Use DBRefs?"
DBRefs are a good choice when you are storing references to documents in different collections. Also, if you want to take advantage of driver or tooling features that are specific to DBRefs, you have to use them.
Otherwise, it is usually better to store a plain "_id" as a reference, since doing so is more compact and easier to manipulate.
MongoDB Learning Note Six: Advanced operations