We can see that the widely criticized global lock has been removed in this version and replaced with database-level locks, and collection-level locks are not far off.
Here is a look at some of the new features in version 2.2:
1. Concurrent Performance Enhancements
As mentioned above, MongoDB 2.2 no longer holds a global lock over the entire daemon; instead, the lock granularity has been reduced to the database level. According to 10gen CEO Dwight Merriman, although this release does not go all the way to collection-level locking, the step from a global lock to per-database locks completes the hardest part of the work, so collection-level read and write locks should not be far behind.
In addition to reducing lock granularity, MongoDB has also enhanced the lock-yielding behavior introduced in version 2.0, adding a PageFaultException architecture to decide when a lock should be yielded.
Interested readers can take a look at this talk and its slides: http://www.10gen.com/presentations/concurrency-internals-mongodb-2-2
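A quick way to see the finer-grained locking in practice is to inspect the server status from the mongo shell. The sketch below assumes a running 2.2 mongod; in that version, serverStatus reports lock usage broken down per database rather than as a single global figure:

```javascript
// Mongo shell sketch (requires a running MongoDB 2.2 server).
// The "locks" section lists lock statistics per database; the
// "globalLock" section shows what global accounting remains.
var status = db.serverStatus();
printjson(status.locks);
printjson(status.globalLock);
```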
2. The New Aggregation Framework
Aggregation has never been one of MongoDB's strengths, and this release considerably improves its usability. With the new aggregation framework, users no longer need to write MapReduce jobs for common aggregation tasks; instead, they can use the friendly operators the framework provides. NoSQLFan covered this earlier; see: http://blog.nosqlfan.com/html/3648.html
Documentation: Aggregation Framework
Reference: Aggregation Framework Reference
Examples: Aggregation Framework Examples
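As a hedged illustration of what replaces a MapReduce job, here is a minimal pipeline run from the mongo shell. The collection name `orders` and its fields are assumptions for the example, not anything from the original article:

```javascript
// Mongo shell sketch (assumes a collection "orders" with documents
// like { cust_id: "A123", amount: 500, status: "A" }).
// Sum "amount" per customer without writing any map-reduce code:
db.orders.aggregate([
  { $match: { status: "A" } },               // filter relevant orders first
  { $group: { _id: "$cust_id",               // group by customer id
              total: { $sum: "$amount" } } },
  { $sort: { total: -1 } }                   // largest totals first
]);
```

Each stage feeds the next, so the same filter-group-sort logic that once needed custom map and reduce functions becomes a declarative pipeline.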
3. Tag Aware Sharding
In version 2.2, you can control data placement yourself, so that data lands on the most appropriate shard node (usually one close to the application layer that uses it). Concretely, you mark shard nodes with tags and then map shard-key ranges to those tags. For example, we can declare that data whose shard key falls in the range [a, b) must be placed on nodes tagged beijing, and that data in [b, c) goes to nodes tagged tianjin. We then attach the corresponding tags to the shard nodes, and data in a given shard-key range is stored, under our control, on the designated shards.
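The tagging described above can be set up from the mongo shell with the helpers added in 2.2. The shard names, namespace, shard-key field, and tag names below are illustrative assumptions:

```javascript
// Mongo shell sketch (run against a sharded cluster via mongos;
// shard, namespace, and tag names here are illustrative).
// Step 1: tag the shard nodes.
sh.addShardTag("shard0000", "beijing");
sh.addShardTag("shard0001", "tianjin");
// Step 2: pin shard-key ranges to tags. Chunks whose key falls in
// [a, b) migrate to a "beijing" shard, [b, c) to a "tianjin" shard.
sh.addTagRange("records.users", { zipcode: "a" }, { zipcode: "b" }, "beijing");
sh.addTagRange("records.users", { zipcode: "b" }, { zipcode: "c" }, "tianjin");
```

The balancer then moves chunks in the background until each tagged range lives only on shards carrying a matching tag.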
4. TTL Collections
We know that MongoDB's capped collections keep data within a fixed size and document count, removing the oldest data to reclaim space once the collection overflows. Capped collections are heavily used in logging and queueing systems with high performance demands, but their flexibility is limited.
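For reference, a capped collection is created with explicit bounds; the collection name and limits below are illustrative:

```javascript
// Mongo shell sketch: create a capped collection limited to 1 MB
// and at most 5000 documents. Once full, the oldest documents are
// overwritten automatically; names and sizes here are illustrative.
db.createCollection("eventlog", { capped: true, size: 1048576, max: 5000 });
```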
In version 2.2, MongoDB also introduces TTL collections (TTL = Time To Live): you index a field and specify how long after that field's value the corresponding record should be deleted. The indexed field must be of a date type. This lets us control data expiration flexibly and makes storing and managing temporary data much easier.
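A TTL collection is just an ordinary collection with an index created using the expireAfterSeconds option. The collection and field names below are assumptions for illustration:

```javascript
// Mongo shell sketch (collection and field names are illustrative).
// Index a date-typed field with expireAfterSeconds; a background
// task then removes documents once "createdAt" is over an hour old.
db.sessions.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });
// Documents inserted with a current date expire roughly an hour later.
db.sessions.insert({ sessionId: "abc123", createdAt: new Date() });
```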