MongoDB Note 08


Understanding what your application is doing

1. Seeing operations in progress: db.currentOp() shows the operations currently in progress; you can filter its output to show only operations that meet given criteria.

1). Finding problematic operations: the most common use of db.currentOp() is looking for slow operations.

2). Killing operations: pass an operation's opid to db.killOp() to terminate it. Not all operations can be killed; in general, only operations that yield the lock can be killed (see the sketch at the end of this section).

3). False positives: when looking for slow operations you may come across some long-running internal operations. Long-running requests on local.oplog.rs, as well as any writebacklistener commands, can be ignored.

4). Preventing phantom operations: the best way to prevent phantom writes is to use acknowledged writes, so that each write waits until the previous write has completed rather than piling up unacknowledged writes on the server.
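
A minimal mongo-shell sketch of the workflow above: scan db.currentOp() for slow operations while skipping the internal ones, then kill one by its opid (the 5-second threshold and the opid passed to db.killOp() are placeholders):

    // list operations running longer than 5 seconds, ignoring local.oplog.rs
    db.currentOp().inprog.forEach(function (op) {
        if (op.secs_running > 5 && op.ns && op.ns.indexOf("local.oplog") !== 0) {
            print("opid " + op.opid + "  " + op.secs_running + "s  " + op.ns);
        }
    });
    db.killOp(123456)    // 123456 is a hypothetical opid copied from the output above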

2. Using the system profiler: the system profiler can find slow operations. It records operations in the special collection system.profile, but causes an overall performance penalty, so it should be turned on only when needed.

You can turn on the profiler by running db.setProfilingLevel() in the shell: db.setProfilingLevel(2) sets level 2, which means "profile everything"; the performance penalty is large.

The profiling level can instead be set to 1, which records only slow operations; by default, those taking longer than 100 ms. You can also customize what counts as "slow" by passing the threshold in milliseconds as the second argument: db.setProfilingLevel(1, 500).

Setting the profiling level to 0 turns the profiler off. In general, do not set slowms too low: even when the profiler is off, slowms has an effect on mongod, because it determines which operations are recorded in the log as slow.
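
A minimal sketch of a profiling session (the query against foo stands in for real workload):

    db.setProfilingLevel(1, 500)                      // record operations slower than 500 ms
    db.foo.find({x: 1}).itcount()                     // run some representative work
    db.system.profile.find().sort({ts: -1}).limit(5)  // inspect the most recent slow entries
    db.setProfilingLevel(0)                           // turn the profiler back off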

3. Calculating space consumption:

1). Documents: use the Object.bsonsize() function, e.g. Object.bsonsize({_id: ObjectId()}) or Object.bsonsize(db.users.findOne()).

2). Collections: the stats function displays information about a collection: db.boards.stats().

3). Databases: db.stats(). Listing database information is very slow on a busy system and can block other operations, so avoid running it against a live workload.
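
A minimal sketch of all three levels; the scale arguments shown are optional (bytes by default), and the collection names are placeholders:

    Object.bsonsize({_id: ObjectId()})     // size of a minimal document, in bytes
    Object.bsonsize(db.users.findOne())    // size of a real document
    db.boards.stats(1024)                  // collection stats, reported in KB
    db.stats(1024 * 1024)                  // database stats, reported in MB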

4. Using mongotop and mongostat

1). mongotop is similar to the Unix top utility: it summarizes which collections are busiest. Running mongotop --locks reports the lock status of each database.

2). mongostat provides server-wide information. By default it prints a line of current statistics once per second; the interval can be changed with a command-line argument. Each field shows how many times the corresponding activity has occurred since the previous line was printed.

3). mongostat is a good choice for taking a snapshot of what the database is doing, but for monitoring a database over long periods, a tool like MMS may be more appropriate.
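
A minimal command-line sketch; host, port, and interval are examples:

    mongotop --host localhost --port 27017    # busiest collections, refreshed every second
    mongotop --locks                          # per-database lock statistics
    mongostat 5                               # one line of server-wide counters every 5 seconds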

Data management

1. Configure authentication

1). Authentication basics: admin and local are two special databases: their users can operate on any database, and can be viewed as superusers. After authenticating, an admin user can read and write any database and run certain commands that only administrators may execute, such as listDatabases and shutdown.

When creating a user with addUser in the shell, setting the third parameter, readOnly, to true creates a read-only user (see the sketch at the end of this section).

2). Configuring authentication: once authentication is enabled, clients must log in to read or write. One MongoDB quirk worth noting: until a user has been created in the admin database, "local" clients connecting from the server itself can still read and write the database.

Normally, creating an admin user right away avoids this problem, but sharding is an exception, because with sharding the admin database is kept on the config servers. You can either configure the network so that clients can reach only the mongos processes, or add admin users to each shard.

3). How authentication works: a database's users are stored as documents in its system.users collection. The document holding a user's information has the structure {user: username, readOnly: true, pwd: password hash}, where the password hash is generated from the username and the password.
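
A minimal shell-session sketch using the legacy addUser helper described here; database names, usernames, and passwords are placeholders:

    use admin
    db.addUser("root", "secret")           // superuser, stored in admin.system.users
    use blog
    db.addUser("alice", "pass", true)      // third argument true: read-only user
    db.auth("alice", "pass")               // log in as the new user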

2. Creating and deleting indexes: building and dropping indexes are among the most resource-intensive operations a database performs, so index builds should be scheduled carefully.

1). Indexing on a standalone server: db.foo.ensureIndex({"someField": 1}, {"background": true}) builds the index in the background.

2). Indexing on a replica set: take each secondary out of the set in turn, restart it as a standalone server, build the index on it, and return it to the set; finally, step down the primary and repeat the same procedure on it.

3). Indexing on a sharded cluster: the steps are the same as for a replica set, but they must be carried out once per shard. First turn off the balancer, then follow the replica-set procedure on each shard in turn, treating each shard as a separate replica set. Finally, run ensureIndex through mongos and turn the balancer back on.

This is only necessary when adding an index to a collection that already exists on the shards: a new shard picks up a collection's indexes when it begins receiving that collection's chunks.

4). Deleting indexes: use the dropIndexes command with the index name: db.runCommand({"dropIndexes": "foo", "index": "alphabet"}). Passing "*" as the index name deletes all of a collection's indexes, except the "_id" index, which this method cannot remove; the only way to delete it is to drop the entire collection. Deleting all of a collection's documents does not affect its indexes; they grow normally as new documents are inserted. (See the sketch after this list.)

5). Beware the out-of-memory killer (OOM killer): Linux's OOM killer terminates processes that use too much memory. Given the way MongoDB uses memory, it usually does not run into this problem, except while building indexes. If mongod suddenly disappears during an index build, check /var/log/messages, where the OOM killer logs its output. Building the index in the background, or adding swap space, avoids this situation. If you have administrative permissions on the machine, you can also make MongoDB exempt from the OOM killer.
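
Putting the index commands from this list together, a minimal mongo-shell sketch; collection, field, and index names are placeholders:

    db.foo.ensureIndex({someField: 1}, {background: true})    // build without blocking other work
    db.foo.getIndexes()                                        // list indexes and their names
    db.runCommand({dropIndexes: "foo", index: "someField_1"})  // drop one index by name
    db.runCommand({dropIndexes: "foo", index: "*"})            // drop all indexes except _id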

3. Warming up data:

1). Moving a database into memory: to pull a database into memory, you can use the Unix dd utility to read its data files before starting mongod. Replacing /data/db/brains.* with /data/db/* loads the whole data directory. If the database (or group of databases) you load needs more space than you actually have in RAM, some of its data will be evicted from memory immediately.

When mongod starts, it requests the data files from the operating system. If the operating system finds that the files are already in memory, mongod can access them immediately.
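
A minimal shell sketch of the dd warm-up, assuming the database is named brains and its files live under /data/db:

    for file in /data/db/brains.*
    do
        dd if="$file" of=/dev/null bs=8192    # read each file to pull it into the OS page cache
    done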

2). Moving a collection into memory: MongoDB provides the touch command for warming up data. Start mongod (perhaps on another port, or with the firewall blocking client access), then run touch on a collection to load it into memory: db.runCommand({"touch": "logs", "data": true, "index": true}) loads all the documents and indexes of the logs collection into memory. You can set either value individually to load only the documents or only the indexes. Keep an eye on memory usage while doing this.

3). Custom warm-ups:

A. Loading a specific index

B. Loading a recently updated document

C. Loading recently created documents: query for documents using the timestamp of the most recently created ones; a creation timestamp is embedded in each ObjectId.

D. Replaying the application's usage log: MongoDB provides a feature called the diagnostic log for recording and replaying a stream of operations. Enabling the diagnostic log incurs a performance penalty, so it is best used temporarily, to capture a "representative" stream of operations.
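
As a concrete sketch of item C, warm the documents created in the last hour by constructing an ObjectId whose embedded timestamp is an hour old (logs is a placeholder collection):

    var secs = Math.floor(Date.now() / 1000) - 3600;               // epoch seconds, one hour ago
    var minId = ObjectId(secs.toString(16) + "0000000000000000");  // 4-byte timestamp + zero padding
    db.logs.find({_id: {$gt: minId}}).itcount()                    // iterating pages the documents in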

4. Compacting data: to eliminate empty space and reorganize a collection efficiently, use the compact command: db.runCommand({"compact": "collName"}).

Compaction is very resource-intensive, and mongod should not be serving clients while a compaction is scheduled. The recommended procedure is similar to the one for building indexes: compact the data on each secondary in turn, then step down the primary and run the final compaction on it.

Compaction packs documents together as tightly as possible; the padding factor between documents defaults to 1. If you need a larger padding factor, specify it with an extra parameter: db.runCommand({"compact": "collName", "paddingFactor": 1.5}). The maximum value is 4.

You can reclaim disk space with the repair command. Repair copies all of the data, so you must have free disk space equal to the size of your current data files. Repair by starting mongod with the --repair option (and, if needed, the --repairpath option), or call db.repairDatabase() from the shell.
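
A minimal mongo-shell sketch of both operations; collName is a placeholder:

    db.runCommand({compact: "collName"})                        // defragment with default padding
    db.runCommand({compact: "collName", paddingFactor: 1.5})    // leave 50% padding after each document
    db.repairDatabase()                                         // rewrite files, reclaiming disk space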

5. Moving collections:

1). You can rename a collection with the renameCollection command: db.sourceColl.renameCollection("newName"). A second parameter can be passed to this command, determining what happens when a collection named newName already exists: true drops the existing newName collection, while false (the default) raises an error.

2). To move a collection between databases, you must dump and restore it, or copy its documents manually.

The cloneCollection command moves a collection between different mongods: run db.runCommand({"cloneCollection": collName, "from": "hostname:27017"}) on the destination server to pull the collection from the host named in "from".
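
A minimal sketch of both moves; collection names and the source host are placeholders:

    db.sourceColl.renameCollection("newName", true)   // true: drop any existing "newName" first
    // run on the destination, pulling test.sourceColl from the source host:
    db.runCommand({cloneCollection: "test.sourceColl", from: "source-host:27017"})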

6. Preallocating data files:

Durability

1. If the disks and software are functioning properly, MongoDB ensures the integrity of the data after a system crash or forced shutdown.

2. The purpose of journaling: MongoDB creates a journal as writes happen; the journal records the disk address and the bytes of each change. Thus, if the server suddenly goes down, the journal can be replayed at startup to re-apply any writes that could not be flushed to disk before the outage. The data files are flushed every 60 seconds by default, so the journal only needs to hold roughly 60 seconds' worth of writes. The journaling system preallocates several empty files for this purpose in the /data/db/journal directory, named _j.0, _j.1, and so on. On a clean shutdown, the journal files are removed. After a crash, mongod replays the journal files at startup, printing a large amount of checksum information.

1). Batch-committing writes: by default MongoDB writes to the journal every 100 milliseconds, or as soon as several megabytes of writes have accumulated. This means changes are committed in batches; an individual write is not flushed to disk immediately, so with the default settings, up to 100 milliseconds of writes can be lost when the system crashes. You can guarantee the success of a write by passing the j option to getLastError; with this option set, a write waits roughly 30 milliseconds for the journal commit. Use it for important data (see the sketch below).

2). Setting the commit interval: run the setParameter command to set journalCommitInterval (minimum 2 milliseconds, maximum 500 milliseconds). Whatever the interval is set to, a getLastError with "j": true cuts it to one third of its value.
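
A minimal sketch of both durability knobs, in the legacy getLastError style this note uses; the collection and document are placeholders:

    db.important.insert({event: "payment", amount: 100})
    db.runCommand({getLastError: 1, j: true})                     // wait until the write is journaled
    db.adminCommand({setParameter: 1, journalCommitInterval: 30}) // commit to the journal every 30 ms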

3. Turning journaling off: if the data being written is worth less than the write speed the journal costs, you may want to disable journaling. Without a journal, if the system crashes and you want to get back to work, there are several options:

1). Replace the data files.

2). Repair the data files: mongod ships with two repair tools, one built into mongod itself and one built into mongodump. mongodump's repair works at a lower level. Both are invoked with --repair (see the sketch after this list).

3). The mongod.lock file: after an unclean exit, the mongod.lock file is left behind; it prevents mongod from restarting and flags that the data must be repaired first. You can also delete the file to let mongod restart anyway (not recommended in production).

4). Hidden unclean shutdowns:
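
A minimal command-line sketch of the recovery options above, for the tool versions of this note's era; paths are examples:

    mongod --repair --dbpath /data/db                             # rebuild the data files in place
    mongod --repair --repairpath /data/repair --dbpath /data/db   # write repaired files elsewhere
    mongodump --repair                                            # lower-level salvage into dump files
    rm /data/db/mongod.lock                                       # skip repair: risky, avoid in production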

4. What MongoDB does not guarantee: MongoDB cannot guarantee data integrity in the face of hardware failures or filesystem bugs.

1). Checking for corruption: the validate command checks a collection for corruption: db.foo.validate().
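
A minimal sketch; foo is a placeholder, and passing true asks for the fuller (slower) validation:

    var result = db.foo.validate(true)   // full validation scans all of the data
    result.valid                         // false indicates corruption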

5. Durability with replica sets:
