Create a connection
>>> import pymongo
>>> connection = pymongo.Connection('localhost', 27017)
Switch Database
>>> db = connection.test_database
Get collection
>>> collection = db.test_collection
Databases and collections are created lazily: they only come into existence when the first document is inserted.
Add a document; the _id field is created automatically
>>> import datetime
>>> post = {"author": "Mike",
...         "text": "My first blog post!",
...         "tags": ["mongodb", "python", "pymongo"],
...         "date": datetime.datetime.utcnow()}
>>> posts = db.posts
>>> posts.insert(post)
ObjectId('...')
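The ObjectId that insert() returns is not opaque: its first four bytes encode the document's creation time. A minimal pure-Python sketch of extracting it, using a hypothetical 24-character ObjectId hex string (real ones come back from insert()):

```python
import datetime

# Hypothetical ObjectId hex string; real ones are returned by insert().
oid = "4afc2d5e0000000000000000"

# The first 4 bytes (8 hex characters) of an ObjectId are the creation
# time in seconds since the Unix epoch.
seconds = int(oid[:8], 16)
created = datetime.datetime.fromtimestamp(seconds, tz=datetime.timezone.utc)
print(created.year, created.month)  # 2009 11
```

Because of this, sorting a collection by _id roughly sorts it by insertion time.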
Batch insert
>>> new_posts = [{"author": "Mike",
...               "text": "Another post!",
...               "tags": ["bulk", "insert"],
...               "date": datetime.datetime(2009, 11, 12, 11, 14)},
...              {"author": "Eliot",
...               "title": "MongoDB is fun",
...               "text": "and pretty easy too!",
...               "date": datetime.datetime(2009, 11, 10, 10, 45)}]
>>> posts.insert(new_posts)
[ObjectId('...'), ObjectId('...')]
Get all collections (equivalent to SQL show tables)
>>> db.collection_names()
[u'posts', u'system.indexes']
Obtain a single document
>>> Posts. find_one ()
{u'date': datetime.datetime(...), u'text': u'My first blog post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'mongodb', u'python', u'pymongo']}
Query multiple documents
>>> for post in posts.find():
...     post
...
{u'date': datetime.datetime(...), u'text': u'My first blog post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'mongodb', u'python', u'pymongo']}
{u'date': datetime.datetime(2009, 11, 12, 11, 14), u'text': u'Another post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'bulk', u'insert']}
{u'date': datetime.datetime(2009, 11, 10, 10, 45), u'text': u'and pretty easy too!', u'_id': ObjectId('...'), u'author': u'Eliot', u'title': u'MongoDB is fun'}
Conditional Query
>>> posts.find_one({"author": "Mike"})
Advanced query (a cutoff date d is needed for the range condition)
>>> d = datetime.datetime(2009, 11, 12, 12)
>>> posts.find({"date": {"$lt": d}}).sort("author")
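Since the server's output is elided above, here is a pure-Python emulation of what this query computes over the three sample posts. The cutoff `d` is the tutorial's range bound; the first post's date was utcnow() at insert time, so it is assumed to fall after 2009:

```python
import datetime

# The three sample posts from this section.
posts = [
    {"author": "Mike", "text": "My first blog post!",
     "date": datetime.datetime.utcnow()},  # "now" is well past 2009
    {"author": "Mike", "text": "Another post!",
     "date": datetime.datetime(2009, 11, 12, 11, 14)},
    {"author": "Eliot", "text": "and pretty easy too!",
     "date": datetime.datetime(2009, 11, 10, 10, 45)},
]

d = datetime.datetime(2009, 11, 12, 12)

# posts.find({"date": {"$lt": d}}).sort("author") selects documents whose
# date is strictly before d, then sorts them by author, ascending:
matches = sorted((p for p in posts if p["date"] < d),
                 key=lambda p: p["author"])
print([p["author"] for p in matches])  # ['Eliot', 'Mike']
```

Only two documents match the `$lt` condition, which is consistent with the nscanned value of 2 that explain() reports below.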
Count
>>> posts.count()
3
Add Index
>>> from pymongo import ASCENDING, DESCENDING
>>> posts.create_index([("date", DESCENDING), ("author", ASCENDING)])
u'date_-1_author_1'
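The index key pattern (date descending, then author ascending) also defines a document ordering. A small pure-Python sketch of that ordering using two stable sorts, applying the secondary key first; the sample documents are illustrative:

```python
import datetime

# Two documents share a date, so the secondary key (author) decides.
docs = [
    {"author": "Mike",  "date": datetime.datetime(2009, 11, 12, 11, 14)},
    {"author": "Eliot", "date": datetime.datetime(2009, 11, 10, 10, 45)},
    {"author": "Adam",  "date": datetime.datetime(2009, 11, 12, 11, 14)},
]

# Emulate ("date", DESCENDING), ("author", ASCENDING): because Python's
# sort is stable, sorting by the secondary key first and the primary key
# second yields the compound order.
by_author = sorted(docs, key=lambda doc: doc["author"])
ordered = sorted(by_author, key=lambda doc: doc["date"], reverse=True)
print([doc["author"] for doc in ordered])  # ['Adam', 'Mike', 'Eliot']
```

Queries that want exactly this order (newest first, authors alphabetical within a date) can be served directly from the index without an extra sort step.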
View query performance with explain()
>>> posts.find({"date": {"$lt": d}}).sort("author").explain()["cursor"]
u'BtreeCursor date_-1_author_1'
>>> posts.find({"date": {"$lt": d}}).sort("author").explain()["nscanned"]
2
Note: this is a rough summary, for reference only.
Disadvantages
- It cannot wholesale replace a traditional relational database (nosqlfan: whether it can depends on the application scenario and needs to be evaluated case by case)
- It does not support complex transactions (nosqlfan: MongoDB only supports atomic operations on a single document)
- Full-text search is not easy
- Documents are limited to 4 MB (nosqlfan: raised to 16 MB in version 1.8)
Features (nosqlfan: many of the features listed here are only surface-level characteristics):
- Document-based database
- Schema-free: structures can be nested, avoiding the overhead of null fields
- Distributed query support
- Regular-expression queries
- Dynamic scaling
- The 32-bit build can only store up to about 2 GB of data (nosqlfan: the maximum data file size is 2 GB; 64-bit builds are recommended in production)
Terminology correspondence
- A single record is called a document (nosqlfan: corresponds to a row in a MySQL table)
- A document nested inside another document (e.g. a comment inside a post) is said to be embedded
- A group of documents is called a collection (nosqlfan: corresponds to a table in MySQL)
- An association across documents, the analogue of a table join, is called a reference
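The embed-versus-reference distinction can be sketched with plain dictionaries; the field names and string ids below are illustrative, not a fixed MongoDB schema:

```python
# Embedding: the comment lives inside the post document itself.
post_embedded = {
    "author": "Mike",
    "text": "My first blog post!",
    "comments": [{"author": "Eliot", "text": "Nice post!"}],
}

# Referencing: the comment is a separate document that points at the
# post by id (hypothetical ids shown as plain strings here).
post = {"_id": "post1", "author": "Mike", "text": "My first blog post!"}
comment = {"_id": "comment1", "post_id": "post1",
           "author": "Eliot", "text": "Nice post!"}

# Resolving a reference takes a second lookup, like a join done by hand.
posts_by_id = {post["_id"]: post}
parent = posts_by_id[comment["post_id"]]
print(parent["author"])  # Mike
```

Embedding keeps related data in one read but grows the parent document; references keep documents small at the cost of extra queries.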