Tutorial
This tutorial is a brief introduction to pymongo and MongoDB. After reading it, you should understand how to perform basic operations on MongoDB with pymongo.
Prerequisites
Install pymongo and MongoDB before you start. Make sure the following import does not raise an error in the Python interactive shell:
>>> import pymongo
You also need a running MongoDB instance. If you have downloaded and installed MongoDB, you can start it like this:
$ mongod
Making a Connection with MongoClient
The first step when working with pymongo is to create a MongoClient connected to the running MongoDB instance. Doing so is easy:
>>> from pymongo import MongoClient
>>> client = MongoClient()
The code above connects to the default host and port. You can also specify them explicitly:
>>> client = MongoClient('localhost', 27017)
Or use the MongoDB URI format:
>>> client = MongoClient('mongodb://localhost:27017/')
Getting a Database
A single MongoDB instance can host multiple independent databases. With pymongo you access a database using attribute-style access on the MongoClient instance:
>>> db = client.test_database
If the database name is not a valid Python attribute name (for example test-database), you can use dictionary-style access instead:
>>> db = client['test-database']
Getting a Collection
A collection is a group of documents stored in MongoDB, roughly comparable to a table in a relational database. Getting a collection works just like getting a database:
>>> collection = db.test_collection
Or with dictionary-style access:
>>> collection = db['test-collection']
An important note: collections and databases in MongoDB are created lazily. None of the commands above actually performs any operation on the MongoDB server. Collections and databases are created only when the first document is inserted into them.
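You can verify the lazy behavior yourself: before anything has been inserted, the collection does not show up on the server. A quick sketch, using the collection_names() call that also appears later in this tutorial:
>>> "test_collection" in db.collection_names()   # nothing has been created yet
False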
Documents
Data in MongoDB is represented and stored as JSON-style documents. In pymongo, documents are represented as dictionaries. For example, the following dictionary can represent a blog post:
>>> import datetime
>>> post = {"author": "Mike",
...         "text": "My first blog post!",
...         "tags": ["mongodb", "python", "pymongo"],
...         "date": datetime.datetime.utcnow()}
Note that documents can contain native Python types (such as datetime.datetime instances); these values are converted automatically between the native type and the corresponding BSON type.
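If you are curious about this conversion, the bson package bundled with pymongo can be used directly. A rough sketch (the BSON helper class shown here exists in the pymongo 2.x/3.x line; details may differ in other versions):
>>> from bson import BSON
>>> raw = BSON.encode({"date": datetime.datetime.utcnow()})
>>> raw.decode()["date"]        # decoded back into a datetime.datetime
datetime.datetime(...)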
Inserting a Document
To insert a document into a collection, use the insert() method:
>>> posts = db.posts
>>> post_id = posts.insert(post)
>>> post_id
ObjectId('...')
When a document is inserted, a special key, "_id", is automatically added if the document does not already contain one. The value of "_id" must be unique across the collection. insert() returns the value of "_id" for the inserted document. For more information about this value, see the documentation on _id.
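As a side note, the legacy insert() call also adds the generated "_id" to the dictionary you passed in, so the original post now carries it (a small check, not shown in the original text):
>>> "_id" in post          # insert() added the generated _id to our dict
True
>>> post["_id"] == post_id
True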
After the first document is inserted, the posts collection is actually created on the server. You can verify this by listing all of the collections in the database:
>>> db.collection_names()
[u'system.indexes', u'posts']
The collection named system.indexes is a special internal collection that is created automatically.
Getting a Single Document with find_one()
The most basic type of query in MongoDB is find_one(). This method returns a single document matching the query, or None if there is no match. It is useful when you know there is only one matching document, or you are only interested in the first match. Here we use find_one() to get the first document in the posts collection:
>>> posts.find_one()
{u'date': datetime.datetime(...), u'text': u'My first blog post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'mongodb', u'python', u'pymongo']}
The result is a dictionary matching the one we inserted earlier.
Note that the returned document already contains an "_id" key, which was added automatically when the document was inserted.
find_one() also supports querying on specific elements that the resulting document must match. To limit the result to a document whose author is "Mike":
>>> posts.find_one({"author": "Mike"})
{u'date': datetime.datetime(...), u'text': u'My first blog post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'mongodb', u'python', u'pymongo']}
If we try with a different author, such as "Eliot", we get no result:
>>> posts.find_one({"author": "Eliot"})
>>>
Querying by ObjectId
We can also query by "_id", which in our example is an ObjectId:
>>> post_id
ObjectId(...)
>>> posts.find_one({"_id": post_id})
{u'date': datetime.datetime(...), u'text': u'My first blog post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'mongodb', u'python', u'pymongo']}
Note that an ObjectId is not the same as its string representation:
>>> post_id_as_str = str(post_id)
>>> posts.find_one({"_id": post_id_as_str})  # No result
>>>
A common task for web applications is to get an ObjectId from the request URL and find the matching document. In that case it is necessary to convert the string to an ObjectId before passing it to find_one():
from bson.objectid import ObjectId

# The web framework gets post_id from the URL and passes it as a string
def get(post_id):
    # Convert from string to ObjectId:
    document = client.db.collection.find_one({'_id': ObjectId(post_id)})
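One thing to watch for (an illustrative sketch, not part of the original example): if the string from the URL is not a valid ObjectId, the ObjectId constructor raises bson.errors.InvalidId, so you may want to handle that case:

from bson.objectid import ObjectId
from bson.errors import InvalidId

def get(post_id):
    try:
        # A malformed id string raises InvalidId instead of returning None
        oid = ObjectId(post_id)
    except InvalidId:
        return None   # or translate this into a 404 in your web framework
    return client.db.collection.find_one({'_id': oid})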
See also the related article: "When I query for a document by ObjectId in my web application I get no result".
A Note on Unicode Strings
You may have noticed that the regular Python strings we stored are different from the ones we get back from the server (for example, u'Mike' instead of 'Mike'). A short explanation follows.
MongoDB stores data in BSON format. BSON strings are UTF-8 encoded, so pymongo must make sure that any strings it stores contain only valid UTF-8 data. Regular strings (str) are validated and stored unaltered. Unicode strings (unicode) are encoded to UTF-8 first. The reason our example string is displayed as u'Mike' rather than 'Mike' is that pymongo decodes every BSON string to a Python unicode string, not a regular str.
For more information about Python unicode strings, see the Python documentation.
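To see the round trip in action, here is a minimal sketch under Python 2 semantics (where str and unicode are distinct types), using a throwaway collection named scratch so that the posts collection used below is left untouched:
>>> scratch = db.scratch                  # throwaway collection, not part of the tutorial data
>>> scratch.insert({"name": "Mike"})      # "Mike" is a regular str here
ObjectId('...')
>>> scratch.find_one()["name"]            # comes back as a unicode string
u'Mike'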
Bulk Inserts
To make querying a little more interesting, let's insert a few more documents. In addition to inserting a single document, insert() can take an iterable as its first argument to perform a bulk insert. This inserts every document in the iterable while sending only a single command to the server:
>>> new_posts = [{"author": "Mike",
...               "text": "Another post!",
...               "tags": ["bulk", "insert"],
...               "date": datetime.datetime(2009, 11, 12, 11, 14)},
...              {"author": "Eliot",
...               "title": "MongoDB is fun",
...               "text": "and pretty easy too!",
...               "date": datetime.datetime(2009, 11, 10, 10, 45)}]
>>> posts.insert(new_posts)
[ObjectId('...'), ObjectId('...')]
There are a couple of interesting things to note about this example:
insert() now returns two ObjectId instances, one for each inserted document.
new_posts[1] has a different shape from the other posts: it has no "tags" field, and it adds a new field, "title". This is what we mean when we say MongoDB is schema-free.
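Since documents in one collection need not share the same fields, the "$exists" operator is handy for matching on whether a field is present at all. A small sketch (not part of the original tutorial) using the find_one() method introduced above:
>>> posts.find_one({"title": {"$exists": True}})["author"]    # only the second new post has a title
u'Eliot'
>>> posts.find_one({"tags": {"$exists": False}})["author"]    # the same document has no tags
u'Eliot'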
Querying for More Than One Document
To get more than a single document as the result of a query, use the find() method. find() returns a Cursor instance, which lets us iterate over all matching documents. For example, we can iterate over every document in the posts collection:
>>> for post in posts.find():
...     post
...
{u'date': datetime.datetime(...), u'text': u'My first blog post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'mongodb', u'python', u'pymongo']}
{u'date': datetime.datetime(2009, 11, 12, 11, 14), u'text': u'Another post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'bulk', u'insert']}
{u'date': datetime.datetime(2009, 11, 10, 10, 45), u'text': u'and pretty easy too!', u'_id': ObjectId('...'), u'author': u'Eliot', u'title': u'MongoDB is fun'}
Just like with find_one(), we can pass a document to find() to limit the returned results. For example, to get only the posts written by "Mike":
>>> for post in posts.find({"author": "Mike"}):
...     post
...
{u'date': datetime.datetime(...), u'text': u'My first blog post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'mongodb', u'python', u'pymongo']}
{u'date': datetime.datetime(2009, 11, 12, 11, 14), u'text': u'Another post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'bulk', u'insert']}
Counting
If we only want to know how many documents match a query, we can use count() instead of performing a full query. To get the total number of documents in a collection:
>>> posts.count()
3
Or just the documents that match a specific query:
>>> posts.find({"author": "Mike"}).count()
2
Range Queries
MongoDB supports many types of advanced queries. As an example, let's query for posts older than a certain date, sorting the results by author:
>>> d = datetime.datetime(2009, 11, 12, 12)
>>> for post in posts.find({"date": {"$lt": d}}).sort("author"):
...     print post
...
{u'date': datetime.datetime(2009, 11, 10, 10, 45), u'text': u'and pretty easy too!', u'_id': ObjectId('...'), u'author': u'Eliot', u'title': u'MongoDB is fun'}
{u'date': datetime.datetime(2009, 11, 12, 11, 14), u'text': u'Another post!', u'_id': ObjectId('...'), u'author': u'Mike', u'tags': [u'bulk', u'insert']}
Here we use the special "$lt" operator to perform a range query, and call sort() to sort the results by author.
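Operators can also be combined on the same field. For instance, here is a sketch (not from the original text) that pairs "$gte" with "$lt" to select posts inside a window of dates:
>>> start = datetime.datetime(2009, 11, 10)
>>> end = datetime.datetime(2009, 11, 11)
>>> for post in posts.find({"date": {"$gte": start, "$lt": end}}):
...     print post["author"]
...
Eliot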
Indexing
To make the query above fast, we can add a compound index on "date" and "author". First, use explain() to see how the query is executed without an index:
>>> posts.find({"date": {"$lt": d}}).sort("author").explain()["cursor"]
u'BasicCursor'
>>> posts.find({"date": {"$lt": d}}).sort("author").explain()["nscanned"]
3
You can see that the query uses a BasicCursor and scans all three documents. Now let's add a compound index and run the same query again:
>>> from pymongo import ASCENDING, DESCENDING
>>> posts.create_index([("date", DESCENDING), ("author", ASCENDING)])
u'date_-1_author_1'
>>> posts.find({"date": {"$lt": d}}).sort("author").explain()["cursor"]
u'BtreeCursor date_-1_author_1'
>>> posts.find({"date": {"$lt": d}}).sort("author").explain()["nscanned"]
2
Now the query uses a BtreeCursor (that is, the new index), and only the two documents that satisfy the query are scanned.
For more information on indexes, see the MongoDB documentation on indexes.
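If you want to inspect the indexes that now exist on the collection, pymongo exposes index_information() (a simplified sketch; the returned dictionary also carries version and namespace details that vary between server versions):
>>> posts.index_information()
{u'_id_': {u'key': [(u'_id', 1)]},
 u'date_-1_author_1': {u'key': [(u'date', -1), (u'author', 1)]}}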
That's it. If you spot any mistakes, please point them out.