MongoDB Usage Summary

Source: Internet
Author: User

The premise of the following discussion is that the data to be stored is larger than the available memory; otherwise, you can ignore the rest of this post...

The space occupied by indexes sometimes exceeds your expectations. Even with only the default _id index, a collection with 100 million records carries an index of more than 5 GB. If there is not enough memory to hold the indexes, add memory.
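You can check how large your indexes actually are from the collection statistics. A minimal sketch with pymongo, assuming a local mongod; "mydb" and "mycoll" are placeholder names, not from the original article:

```python
# Sketch: inspect collection and index sizes with pymongo.
# Assumes a local mongod; "mydb" / "mycoll" are placeholder names.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]

# The collstats command reports storage and index sizes in bytes.
stats = db.command("collstats", "mycoll")
print("documents:        ", stats["count"])
print("data size (MB):   ", stats["size"] / 1024 / 1024)
print("total index (MB): ", stats["totalIndexSize"] / 1024 / 1024)
for name, size in stats["indexSizes"].items():
    print(f"  index {name}: {size / 1024 / 1024:.1f} MB")
```

If the total index size (plus your hot data) is larger than RAM, the problems described below will show up.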

The query or update speed is sometimes not strongly correlated with the number of documents. What I see most on the internet is MongoDB's speed discussed in terms of document count; few people talk about its relationship with the space the documents occupy. I tried inserting about a million documents on a 16 GB machine. The index was small, on the order of MB; each document was about 50 KB, so the million records took roughly 50 GB of disk space. I then ran random updates that did not change the size of the original documents, along with queries such as find_one. MongoDB should have flown like a rocket, but the result was unexpectedly slow: only about 50 updates per second. Watching iostat or mongostat, you can see the disk working frantically, as if it will never stop. Even SQLite can query millions of records quickly, so why is MongoDB so slow here? Could MongoDB really be no faster than a program you wrote yourself? Really?
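The effect is easy to reproduce. Below is a rough sketch of the experiment with pymongo; the database/collection names, document size, and counts are illustrative, not the author's exact script:

```python
# Sketch: random in-place updates on a collection much larger than RAM.
# Names and sizes are illustrative. The point: each random update touches
# a cold page, so throughput is bounded by disk seeks, not by MongoDB.
import os
import random
import time

from pymongo import MongoClient

coll = MongoClient()["test"]["bigdocs"]

# Insert ~1 million documents of ~50 KB each (roughly 50 GB on disk).
payload = os.urandom(25_000).hex()  # ~50 KB of text per document
coll.insert_many(
    ({"_id": i, "flag": 0, "payload": payload} for i in range(1_000_000))
)

# Random updates that do not change document size. On a 16 GB machine
# this collapses to a few dozen updates per second once the working set
# no longer fits in memory.
n = 1_000
start = time.time()
for _ in range(n):
    coll.update_one({"_id": random.randrange(1_000_000)},
                    {"$set": {"flag": 1}})
print(f"{n / (time.time() - start):.0f} updates/sec")
```

Running iostat or mongostat alongside this makes the disk activity described above visible.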

Understanding some B-tree theory helps with MongoDB and with relational databases alike. But indexes are not a cure-all: don't assume MongoDB will fly like a rocket just because you make full use of indexes; sometimes it will crawl slower than a snail. The order in which data is laid out on disk, and how that order relates to your indexes, should be considered carefully against your business needs, and the indexes designed accordingly. When I was learning SQL Server, the books stressed that the primary key is very important because rows are stored on disk in primary-key order, much like a Xinhua Dictionary, whose entries are stored in pinyin order. You can also look a character up through the radical index, but fetching all characters whose pinyin starts with a given letter is far faster than fetching all characters that share a radical, because the former sit next to each other on the page while the latter are scattered throughout the book.
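Whether a query actually uses an index, or falls back to scanning the whole collection, is easy to verify with explain(). A small sketch with pymongo; the collection and field names are made up, and the exact shape of the explain output varies by server version:

```python
# Sketch: design an index to match the query pattern and verify it with
# explain(). Collection/field names are hypothetical.
from pymongo import ASCENDING, MongoClient

coll = MongoClient()["test"]["orders"]

# Compound index matching the query: equality field first, range field second.
coll.create_index([("user_id", ASCENDING), ("created_at", ASCENDING)])

plan = coll.find({"user_id": 42, "created_at": {"$gte": 0}}).explain()
# An "IXSCAN" stage means the index is used; "COLLSCAN" means a full scan.
print(plan["queryPlanner"]["winningPlan"])
```

An index scan still hits the disk for every document it fetches, so it only helps throughput when the touched pages are (or become) memory-resident.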

If the hot data fits in memory, queries and updates are very fast even with hundreds of millions of records: a single unsharded instance can handle thousands of queries or updates per second. Otherwise, your disk never gets a rest and everything is slow, even if the indexes are used perfectly, because the data is not in memory: the operating system has to evict some pages (when memory is short) to make room for mapping in data from disk, and during that process the disk goes crazy.
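You can get a rough sense of whether the hot data fits in memory by comparing mongod's resident memory with the data and index sizes, alongside mongostat/iostat. A sketch with pymongo; the fields reported depend on storage engine and server version:

```python
# Sketch: compare mongod's resident memory with data + index size.
# Which fields appear depends on storage engine and server version.
from pymongo import MongoClient

client = MongoClient()
db = client["test"]

mem = client.admin.command("serverStatus")["mem"]
print("resident MB:", mem.get("resident"), " virtual MB:", mem.get("virtual"))

stats = db.command("dbstats")
print("data size MB: ", stats["dataSize"] / 1024 / 1024)
print("index size MB:", stats["indexSize"] / 1024 / 1024)

# If resident memory is far smaller than data + index size, random
# queries and updates will be disk-bound rather than memory-bound.
```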

MongoDB hands memory management over to the operating system. Even after mongod is restarted, the OS does not necessarily drop its cache right away, so if the hot data has not yet been evicted, queries are still very fast. This often gives people the illusion that MongoDB is inherently fast...
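On Linux this is easy to observe: the "Cached" figure in /proc/meminfo stays high across a mongod restart, which is why the first queries after a restart can still look fast. A small illustrative sketch (Linux only):

```python
# Sketch: watch the OS page cache around a mongod restart (Linux only).
# The cache belongs to the OS, not to mongod, so restarting mongod does
# not necessarily drop it.
def cached_mb() -> int:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Cached:"):
                return int(line.split()[1]) // 1024  # kB -> MB
    return 0

print("page cache:", cached_mb(), "MB")
# Restart mongod, then run this again: if the figure barely drops, the
# previously hot data is still cached and queries will still look fast.
```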

In short, allocate as much memory to MongoDB as your indexes and hot data require. If you are mainly just storing data and rarely querying it, MongoDB can keep inserting as long as the disk is big enough.

Finally: good luck...

Original article: Mongodb usage summary. Thanks to the original author for sharing.
