MongoDB Data Storage Engines


The storage engine (Storage Engine) is the core component of MongoDB, responsible for managing how data is stored in memory and on disk. Starting with MongoDB 3.2, MongoDB supports multiple storage engines: WiredTiger, MMAPv1, and In-Memory.

Starting with MongoDB 3.2, WiredTiger is the default storage engine. It persists data to disk files and provides document-level (Document-Level) concurrency control, checkpoints (Checkpoint), data compression, and native encryption (Native Encryption).

MongoDB can not only persist data to disk files but can also keep data solely in memory: the In-Memory storage engine stores data only in memory, writing just a small amount of metadata and diagnostic (Diagnostic) logs to disk files. Because it needs no disk I/O to serve requests, the In-Memory storage engine significantly reduces query latency (Latency).

One, specifying the storage engine for a MongoDB instance

mongod parameter: --storageEngine wiredTiger | inMemory

This parameter specifies the storage engine type:

    • If the value is wiredTiger, MongoDB uses the WiredTiger storage engine and persists data to disk files;
    • If the value is inMemory, MongoDB uses the In-Memory storage engine and stores data in memory;
    • Starting with MongoDB 3.2, the default storage engine is WiredTiger.
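A minimal sketch of both modes (the --dbpath values are placeholders): start an instance with either engine, then verify the active engine from the mongo shell with the standard serverStatus command.

mongod --storageEngine wiredTiger --dbpath /data/wt
mongod --storageEngine inMemory --dbpath /data/inmem

// In the mongo shell: the storageEngine section of serverStatus reports the
// engine the instance was started with, e.g. { "name" : "wiredTiger", ... }
db.serverStatus().storageEngine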

Two, the WiredTiger storage engine stores data in disk files

Both WiredTiger and MMAPv1 persist data to disk, but WiredTiger is more capable than MMAPv1.

1, Document-level concurrency control (Document-Level Concurrency Control)

When MongoDB performs write operations, WiredTiger applies concurrency control at the document level, meaning that at the same time multiple writes can modify different documents in the same collection, while multiple writes to the same document must be executed serially: if a document is being modified, other writes to that document must wait until the in-progress write finishes; the waiting writes then compete with each other, and the winner performs its modification on the document.

For most read and write operations, WiredTiger uses optimistic concurrency control (Optimistic Concurrency Control) and takes only intent locks at the global, database, and collection levels. When the storage engine detects a conflict between two operations, one of them incurs a write conflict and MongoDB transparently retries that operation; this is handled automatically by the system.
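The lock hierarchy can be observed from the shell. A small sketch (statistic names vary somewhat by version): in the serverStatus locks section, the modes r/w are intent-shared/intent-exclusive locks and R/W are shared/exclusive locks.

// Acquire counts per resource level (Global, Database, Collection) and mode
db.serverStatus().locks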

2, Checkpoints (Checkpoint)

At the start of a checkpoint operation, WiredTiger takes a database snapshot (Snapshot) at a specific point in time (Point-in-Time) that presents a consistent view of the in-memory data. When writing to disk, WiredTiger writes all of the snapshot's data to the data files in a consistent manner. Once the checkpoint has been created successfully, WiredTiger guarantees that the data files are consistent up to that snapshot, so the checkpoint serves as a recovery point (Recovery Point); checkpoint operations shorten the time MongoDB needs to restore data from the journal files.

When WiredTiger creates a checkpoint, MongoDB flushes the data to the data files on disk. By default, WiredTiger creates a checkpoint every 60 seconds or after 2 GB of journal data has been written, whichever comes first. While a new checkpoint is being created, the previous checkpoint remains valid, so even if MongoDB encounters an error and terminates abnormally partway through creating a new checkpoint, it can recover from the last valid checkpoint after a restart.

A new checkpoint becomes effective when the WiredTiger metadata table is atomically updated to reference it; MongoDB then frees the disk space occupied by the old checkpoint. With the WiredTiger storage engine, MongoDB without journaling can only revert to the previous checkpoint; journal files are required to recover modifications made after that checkpoint.
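Checkpoint activity can be inspected from the shell. A sketch, assuming the WiredTiger statistic names used in recent versions (they can differ between releases):

// Counters such as "transaction checkpoints" and
// "transaction checkpoint currently running" live in this section
db.serverStatus().wiredTiger.transaction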

3, Write-ahead log (Write-Ahead Transaction Log)

WiredTiger uses a write-ahead log: when data is updated, the update is first recorded in the journal file, and when the next checkpoint operation starts, the operations recorded in the journal are flushed to the data files. The write-ahead log and checkpoints together persist data updates to the data files and keep the data consistent. The journal files persist a record of all data updates made since the last checkpoint, so if MongoDB exits between checkpoints, it uses the journal to replay all data modifications made since the last checkpoint.

4, Memory usage

4.1 WiredTiger uses system memory to cache two kinds of data:

    • Internal cache (Internal cache)
    • File system cache (Filesystem cache)

Starting with MongoDB 3.2, the WiredTiger internal cache defaults to the larger of 1 GB or 60% of RAM minus 1 GB. The filesystem cache size is not fixed: MongoDB automatically uses the system memory left free by the WiredTiger cache and other processes, and data in the filesystem cache is compressed.

4.2 Resizing the WiredTiger internal cache

Use the mongod parameter --wiredTigerCacheSizeGB to change the size of the WiredTiger internal cache of a MongoDB instance. The default internal cache size is determined as follows:

    • Starting in MongoDB 3.2, the WiredTiger internal cache by default uses the larger of either 60% of RAM minus 1 GB, or 1 GB.
    • For systems with up to 10 GB of RAM, the new default setting is less than or equal to the 3.0 default setting.
    • For systems with more than 10 GB of RAM, the new default setting is greater than the 3.0 setting.
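A sketch of overriding the default (the 4 GB cap and --dbpath are placeholder values), followed by shell checks of the configured limit and current usage; the cache statistic names below are WiredTiger's and may vary by version.

mongod --storageEngine wiredTiger --dbpath /data/wt --wiredTigerCacheSizeGB 4

// In the mongo shell:
db.serverStatus().wiredTiger.cache["maximum bytes configured"]
db.serverStatus().wiredTiger.cache["bytes currently in the cache"]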

5, Data compression (Compression)

WiredTiger compresses stored collections (Collection) and indexes (Index). Compression reduces disk space consumption at the cost of additional CPU for compressing and decompressing data.

By default, WiredTiger compresses collections with the snappy block compression algorithm, compresses indexes with prefix compression (Prefix Compression), and stores journal files compressed as well. For most workloads (Workload), the default compression settings balance (Balance) storage efficiency against processing cost; compression and decompression are fast.
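The block compressor can also be overridden per collection at creation time. A sketch, assuming a hypothetical collection named events and swapping the default snappy for zlib:

// zlib compresses better than snappy but costs more CPU
db.createCollection("events", {
    storageEngine: { wiredTiger: { configString: "block_compressor=zlib" } }
})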

6, Disk space recycling

When you delete documents or collections (Collections), MongoDB does not release the disk space back to the OS; instead, MongoDB maintains a list of empty records in the data files. When data is inserted again, MongoDB allocates storage for the new documents from this empty-record list, so new space does not need to be allocated. To reuse disk space effectively, the data fragments must be reorganized.

On WiredTiger, the compact command removes fragmentation from a collection's (Collection) data and indexes and releases unused space. The calling syntax is:

db.runCommand({ compact: "<collection>" })

While the compact command executes, MongoDB locks the current database and blocks other operations. After compact completes, mongod rebuilds all indexes on the collection.

On WiredTiger, compact rewrites the collection and indexes to minimize disk space, releasing unused disk space to the operating system. This is useful if you have removed a large amount of data from a collection and do not plan to replace it.
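A sketch of measuring the effect, assuming a hypothetical collection named events:

// Storage size on disk before compaction
db.events.stats().storageSize
db.runCommand({ compact: "events" })
// Storage size after compaction
db.events.stats().storageSize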

Three, the In-Memory storage engine stores data in memory

The In-Memory storage engine stores data in memory; apart from a small amount of metadata and diagnostic (Diagnostic) logs, it maintains no on-disk (On-Disk) data, avoiding disk I/O and reducing query latency.

1, Specifying the In-Memory storage engine

mongod --storageEngine inMemory --dbpath <path>

When selecting the In-Memory storage engine, two parameters must be specified:

    • Set the mongod parameter --storageEngine to the value inMemory;
    • Set the mongod parameter --dbpath to the data storage directory;
    • Disk is still used for metadata, diagnostic data, and temporary data: although the In-Memory storage engine does not write user data to the filesystem, it maintains small metadata files and diagnostic (Diagnostic) data in --dbpath, as well as temporary files for building large indexes.
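As in the sketch in section One, serverStatus identifies the running engine; for In-Memory, recent versions also report persistent: false (field availability varies by version, so treat the exact output as an assumption):

// e.g. { "name" : "inMemory", "supportsCommittedReads" : true, "persistent" : false }
db.serverStatus().storageEngine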

2, Document-level concurrency (Document-Level Concurrency)

The In-Memory storage engine uses document-level concurrency control for write operations, meaning that at the same time multiple writes can modify different documents in the same collection, while multiple writes to the same document must be executed serially: if a document is being modified, other writes to it must wait.

3, Memory usage

The In-Memory storage engine keeps data, indexes, the oplog, and so on in memory. The amount of memory it may consume is set with the mongod parameter --inMemorySizeGB; the default is 50% of RAM minus 1 GB. Specify the amount of memory, in GB, available to the In-Memory storage engine:

mongod --storageEngine inMemory --dbpath <path> --inMemorySizeGB <newSize>

4, Durability (Durable)

Because the In-Memory storage engine does not persist data, all data is kept in memory and reads and writes are served directly from memory; data is never written to disk files. There is therefore no separate journal file, no journaling, and no waiting for data to be persisted. When a MongoDB instance shuts down or the system terminates abnormally, all data held in memory is lost.

5, Recording the oplog

The In-Memory storage engine does not write data updates to disk, but it does record the oplog, which is a collection kept in memory. Through replication, MongoDB pushes the primary member's oplog to the other members of the same replica set. If a mongod instance running the In-Memory storage engine is the primary member of a replica set, it pushes the oplog to the other members, which redo the operations recorded in the oplog; members running a persistent storage engine can thereby persist the data modifications executed on the primary member.

You can deploy mongod instances that use the In-Memory storage engine as part of a replica set. For example, a three-member replica set could have:

    • two mongod instances running the In-Memory storage engine;
    • one mongod instance running the WiredTiger storage engine, configured as a hidden member (i.e. hidden: true and priority: 0).

With this deployment model, only the mongod instances running the In-Memory storage engine can become the primary. Clients connect only to the In-Memory instances. Even if both In-Memory instances crash and restart, they can sync from the member running WiredTiger. The hidden mongod instance running WiredTiger persists the data to disk, including the user data, indexes, and replication configuration information.
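A sketch of initiating such a set from the shell (the set name rs0 and the hostnames are placeholders):

rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "mem1.example.net:27017" },
        { _id: 1, host: "mem2.example.net:27017" },
        { _id: 2, host: "wt1.example.net:27017", hidden: true, priority: 0 }
    ]
})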

Four, journaling (Journal)

Data is the core of MongoDB, and MongoDB must keep it safe and never lose it. The journal is a sequentially written log file that records the data updates made since the last checkpoint; it is used to restore the database to a valid state after an abnormal termination. MongoDB persists data with a write-ahead log mechanism: when the WiredTiger storage engine performs a write operation, it first records the update in the journal. Journal files are log files stored on disk in the journal subdirectory under --dbpath, each approximately 100 MB; checkpoint operations then synchronize the data updates into the data files.

At regular intervals, the WiredTiger storage engine performs a checkpoint operation that synchronizes the cached data updates to the on-disk data files. Journaling is enabled by default; it can also be enabled explicitly by starting mongod with the --journal parameter:

mongod --journal

1, Restoring with journal files

WiredTiger checkpoints allow MongoDB to be restored to the consistent state that existed when the last checkpoint was created. If MongoDB terminates abnormally after that checkpoint, the journal files must be used to redo the data updates made since the checkpoint and restore the data to the consistent state recorded in the journal. The journal-based restore process is:

    1. Get the identifier of the last checkpoint: find in the data files the identifier (Identifier) of the last checkpoint.
    2. Match journal records against that identifier: search the journal files for the log records (record) that match the identifier of the last checkpoint.
    3. Redo the journal records: redo all operations recorded in the journal files since the last checkpoint.

2, Buffering journal records

MongoDB configures WiredTiger to buffer journal records in memory: journal records accumulate in the buffer until its size exceeds 128 KB. When a write operation executes, WiredTiger stores its journal record in the buffer. If MongoDB shuts down abnormally, the journal records still held in memory are lost, which means WiredTiger can lose at most the last 128 KB of data updates.

WiredTiger syncs the buffered journal records to disk according to the following intervals or conditions:

    • New in version 3.2: every 50 milliseconds.

    • MongoDB sets checkpoints to occur in WiredTiger on user data at an interval of 60 seconds, or when 2 GB of journal data has been written, whichever occurs first.

    • If a write operation includes a write concern of j: true, WiredTiger forces a sync of the journal files (see the example after this list).

    • Because MongoDB uses a journal file size limit of 100 MB, WiredTiger creates a new journal file approximately every 100 MB of data. When WiredTiger creates a new journal file, it syncs the previous journal file.
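A sketch of the j: true case, assuming a hypothetical collection named orders:

// Returns only after the journal record for this write is on disk
db.orders.insertOne({ item: "abc", qty: 1 }, { writeConcern: { j: true } })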

3, Journal files (Journal Files)

MongoDB creates the journal subdirectory under the --dbpath directory, and WiredTiger stores the journal files there. Each journal file is approximately 100 MB and is named WiredTigerLog.<sequence>, where <sequence> is a ten-digit number left-padded with zeros, starting at 0000000001 and incrementing.
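For example, a journal directory might look like this (the path is a placeholder; the pre-allocated files sit alongside the active log):

ls /data/wt/journal
WiredTigerLog.0000000001
WiredTigerPreplog.0000000001
WiredTigerPreplog.0000000002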

For the WiredTiger storage engine, journal files have the following characteristics:

    • Identified records: each log record (record) in a journal file represents a write operation, and each record has an ID that uniquely identifies it;
    • Compressed journal files: WiredTiger compresses the data stored in journal files;
    • Journal file size limit: each journal file is limited to about 100 MB; once a file exceeds the limit, WiredTiger creates a new journal file;
    • Automatic removal of journal files: WiredTiger automatically removes old journal files, keeping only those needed to recover from the last checkpoint;
    • Pre-allocated journal files: WiredTiger pre-allocates journal files.

4, Recovering data after an abnormal shutdown

After an unexpected shutdown, when the mongod instance restarts, MongoDB automatically redoes (redo) all journal files; the MongoDB database is inaccessible while the journal files are being replayed.

Five, mongod parameters related to the storage engine

1, WiredTiger parameter settings

mongod --storageEngine wiredTiger --dbpath <path> --journal --wiredTigerCacheSizeGB <value> --wiredTigerJournalCompressor <compressor> --wiredTigerCollectionBlockCompressor <compressor> --wiredTigerIndexPrefixCompression <boolean>
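A concrete sketch with placeholder values (snappy and zlib are the supported compressor choices besides none):

mongod --storageEngine wiredTiger --dbpath /data/wt --journal --wiredTigerCacheSizeGB 4 --wiredTigerJournalCompressor snappy --wiredTigerCollectionBlockCompressor zlib --wiredTigerIndexPrefixCompression true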

2, In-Memory parameter settings

mongod --storageEngine inMemory --dbpath <path> --inMemorySizeGB <newSize> --replSet <setName> --oplogSize <value>
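A concrete sketch with placeholder values (rs0 is a hypothetical set name; --oplogSize is in megabytes):

mongod --storageEngine inMemory --dbpath /data/inmem --inMemorySizeGB 8 --replSet rs0 --oplogSize 1024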

