OpenStack Ceilometer with MongoDB: solving excessive disk space consumption

Background: Ceilometer uses MongoDB as its database and samples continuously, so the data volume keeps growing and disk space consumption becomes excessive.

Background knowledge

1. Database file types

1.1. Journal log file

Unlike some traditional databases, MongoDB's journal files are used only to recover in-memory data that had not yet been synchronized to disk when the system went down. The journal files are kept in a separate directory. At startup, MongoDB pre-creates three 1 GB journal files (initially empty).

1.2. Namespace file dbname.ns

This file stores the names of all collections and indexes in the database. It is not large: the default size is 16 MB, which can hold roughly 24,000 collection or index names together with the specific locations of those collections and indexes in the data files. Through this file, MongoDB knows where to start looking when reading from or inserting into a collection or index.

1.3. Data files dbname.0, dbname.1, ..., dbname.N

MongoDB data and indexes are stored in one or more data files. The first data file is named "<database name>.0", for example my-db.0. Its default size is 64 MB, and before it is nearly full MongoDB allocates the next data file, such as my-db.1. Data file sizes double each time: the second file is 128 MB, the third 256 MB, and so on up to 2 GB, after which every new file is 2 GB.
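
As a rough illustration of the file layout described above (the paths, file names, and sizes are hypothetical, assuming a dbpath of /var/lib/mongodb and a database named ceilometer):

    /var/lib/mongodb/
        journal/          # pre-allocated journal files, about 1 GB each
        ceilometer.ns     # 16 MB namespace file (collection and index names)
        ceilometer.0      # 64 MB  first data file
        ceilometer.1      # 128 MB second data file
        ceilometer.2      # 256 MB third data file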

2. Database size parameters

2.1. dataSize

dataSize is the parameter closest to the real data size; you can use it to check how much data you have. It is the sum of the sizes of every record in the database (or collection). Note that each record carries the additional overhead of a header and padding on top of the BSON document, so this value will be slightly larger than the real data footprint.

2.2. storageSize

This parameter equals the sum of all the data extents used by the database or collection. Note that it will be larger than dataSize because deleted documents leave fragments behind in the extents. If a newly inserted document is smaller than or equal to a fragment, MongoDB reuses that fragment to store the new document; until then, the fragment continues to occupy space. For this reason, storageSize does not shrink when you delete documents.

2.3. fileSize

This parameter is valid only at the database level and refers to the size of the files actually used on the file system. It includes the sum of all data extents, the sum of all index extents, and some unallocated space. As mentioned earlier, MongoDB pre-allocates files when a database is created, with a minimum of 64 MB even if you only have a few hundred KB of data, so this parameter can be much larger than the actual data size. The extra unused space ensures that MongoDB can quickly allocate new extents when new data is written, avoiding delays caused by disk space allocation.
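
All three values can be inspected from the mongo shell with db.stats() for a database (or db.<collection>.stats() for a single collection); the figures below are illustrative only:

    > use ceilometer
    > db.stats()
    {
        "db" : "ceilometer",
        "dataSize" : 5913247744,       // records plus header/padding overhead
        "storageSize" : 7516192768,    // all data extents, including fragments
        "fileSize" : 10666115072,      // size of the files on the file system
        ...
    }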

Solutions

1. Reduce the pre-allocation size (or disable pre-allocation)

Given how MongoDB allocates files, you can reduce the pre-allocation size or disable pre-allocation entirely. However, this affects database behavior, so it should only be done if the database does not receive frequent large writes.
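
A minimal sketch of the relevant mongod settings, assuming the ini-style /etc/mongod.conf used by the MMAPv1-era releases this article describes (check the option names against your MongoDB version before relying on them):

    # /etc/mongod.conf
    smallfiles = true     # start data files at 16 MB and cap them at 512 MB; journal files shrink too
    noprealloc = true     # do not pre-allocate the next data file ahead of time

Both options trade allocation latency for disk space, so they are only appropriate when the database is not taking frequent large writes, as noted above; mongod must be restarted for the change to take effect.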

2. Data compression

The compact command can compact a collection in place to reduce the space it occupies (an example follows the notes below).

db.runCommand({ compact: 'collectionName' })

Here's what you need to be aware of:

1) While the operation runs, the collection being compacted is locked;
2) The compact command does not return disk space to the operating system; the defragmented space is reused for new writes.
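
For example, to compact Ceilometer's sample collection (the collection name 'meter' and the force option are assumptions; on older replica-set deployments, force: true is required to run compact on a primary):

    > use ceilometer
    > db.runCommand({ compact: 'meter', force: true })
    { "ok" : 1 }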

3. Export the data and then import it

mongodump exports the database and mongorestore imports the exported dump; the import rebuilds collections and indexes from scratch. So if documents were deleted earlier but the space was never released, that space will actually be reclaimed after the re-import.
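
A minimal sketch, assuming the database is named ceilometer and that /backup has enough free space; stop the services writing to the database before running it:

    $ mongodump    --db ceilometer --out /backup
    $ mongorestore --db ceilometer --drop /backup/ceilometer

The --drop flag makes mongorestore drop each collection before re-creating it, so the rebuilt files contain no leftover fragments.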

4. Regularly delete expired data

In practice, samples collected by Ceilometer can be discarded once they pass their expiration time.

Steps:

1) Set the validity period of samples by changing the time_to_live parameter in /etc/ceilometer/ceilometer.conf;
2) Run ceilometer-expirer to delete the expired samples;
3) Restart the openstack-ceilometer-collector service;
4) After the deletion, run repairDatabase to repair (and shrink) the database.

Note that the repairDatabase operation requires free space equal to the current data size plus 2 GB. If the current disk partition does not have enough space, you can point the --repairpath parameter at a partition that does. A sketch of the whole procedure follows.
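
A minimal sketch of the four steps, assuming a 30-day retention period (2,592,000 seconds) and Red Hat-style service names; adjust values, paths, and service names for your deployment:

    # /etc/ceilometer/ceilometer.conf
    [database]
    time_to_live = 2592000    # keep samples for 30 days; -1 means keep them forever

    $ ceilometer-expirer                                  # delete samples past their time_to_live
    $ service openstack-ceilometer-collector restart      # pick up the new setting
    $ mongo ceilometer --eval 'db.repairDatabase()'       # rebuild the data files and reclaim space

If the partition holding the data files is too small for the repair, it can instead be run offline with mongod --repair and --repairpath pointed at a larger partition.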

Thank you for reading. I hope this helps, and thanks for your support of this site!
