Mnesia Overload: Analysis and Solution


When Mnesia handles frequent data operations, it may emit the warning: **WARNING** Mnesia is overloaded: {dump_log, write_threshold}. This warning is raised during Mnesia's dump operation and can occur with both disc_only_copies and disc_copies tables.

To reproduce the problem: have multiple processes continuously call mnesia:dirty_write/2.

Mnesia Overload Analysis

1. The warning is thrown when Mnesia adds a dump worker

mnesia_controller.erl:
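Below is a simplified paraphrase of the add_worker/2 check (the exact code varies by OTP release):

```erlang
%% Paraphrased from OTP's mnesia_controller.erl (add_worker/2);
%% exact code varies by release.
add_worker(Worker = #dump_log{}, State) ->
    InitBy = Worker#dump_log.initiated_by,
    Queue  = State#state.dumper_queue,
    case lists:keymember(InitBy, #dump_log.initiated_by, Queue) of
        true when Worker#dump_log.opt_reply_to == undefined ->
            %% The same threshold was exceeded again before the previous
            %% dump finished: report overload.
            mnesia_lib:report_system_event(
                {mnesia_overload, {dump_log, InitBy}});
        _ ->
            ignore
    end,
    opt_start_worker(State#state{dumper_queue = Queue ++ [Worker]}).
```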


The warning is thrown when the new worker's #dump_log.opt_reply_to is undefined. Looking closely at the code, this branch is reached only after finding a worker with the same initiator already in dumper_queue.
Mnesia therefore throws the overload warning only when two conditions hold:
1) the new worker's #dump_log.opt_reply_to is undefined, and
2) a worker with the same initiating operation (InitBy) is already in dumper_queue.

2. Which workers have #dump_log.opt_reply_to set to undefined?


The code is also in mnesia_controller.erl: the worker added for an {async_dump_log, InitBy} message has opt_reply_to left as undefined. The {async_dump_log, write_threshold} message is generated when a process calling mnesia:dirty_write/2 ends up calling mnesia_controller:async_dump_log(write_threshold).
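A condensed paraphrase of the relevant clauses (exact code varies by OTP release):

```erlang
%% Paraphrased from mnesia_controller.erl: an asynchronous dump request
%% is a plain message, and the worker it creates relies on the record
%% default opt_reply_to = undefined.
async_dump_log(InitBy) ->
    ?SERVER_NAME ! {async_dump_log, InitBy}.

%% ...and in the gen_server callback (other clauses elided):
handle_info({async_dump_log, InitBy}, State) ->
    Worker = #dump_log{initiated_by = InitBy},  %% opt_reply_to stays undefined
    noreply(add_worker(Worker, State)).
```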

In other words, mnesia:dirty_write/2 triggers an asynchronous dump operation, and only asynchronous dumps cause Mnesia to throw the overload warning.

3. When does Mnesia complete the worker?


The code is again in mnesia_controller.erl: when the dump completes, Mnesia handles the worker's #dump_log.opt_reply_to (replying if a caller is waiting) and removes the worker from dumper_queue.
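A condensed paraphrase of the completion path:

```erlang
%% Paraphrased: the dumper process sends #dumper_done{} when it finishes;
%% the controller answers opt_reply_to (a no-op for asynchronous dumps,
%% where it is undefined) and pops the worker off dumper_queue.
handle_info(#dumper_done{worker_res = Res}, State) ->
    [Worker | Rest] = State#state.dumper_queue,
    reply(Worker#dump_log.opt_reply_to, Res),  %% local helper; ignores undefined
    State2 = State#state{dumper_pid = undefined, dumper_queue = Rest},
    noreply(opt_start_worker(State2)).
```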

From the above we can conclude: mnesia:dirty_write/2 triggers asynchronous dump operations; each dump operation adds a worker to dumper_queue; and Mnesia detects overload by checking whether dumper_queue already contains a worker for the same operation.

Mnesia Dump Analysis

The Mnesia data store actually uses ETS and Dets. ram_copies tables use ETS; disc_copies tables also use ETS, with the data persisted via dump to a *.DCD (disc copy data) file, possibly passing through a *.DCL (disc copy log) file along the way; disc_only_copies tables use Dets, saving to a *.DAT file.

The way Mnesia records data differs by table type. Let's first look at how Mnesia records disc_copies data.

1. Recording disc_copies data involves two steps:

1) The operation is logged to the log file LATEST.LOG, later dumped to the *.DCD file, and LATEST.LOG is cleared

2) Synchronize the changes to the ETS table

2. The dump process for disc_copies tables

1) Rename the log file LATEST.LOG to PREVIOUS.LOG, and then create a new empty log file LATEST.LOG
2) Analyze the contents of PREVIOUS.LOG and write the actual modifications of disc_copies tables to the *.DCL file
3) Compare the sizes of *.DCL and *.DCD: when filesize(*.DCL) > filesize(*.DCD) / DcDumpLimit, the records in *.DCL are folded into the *.DCD file (see the sketch below). dc_dump_limit defaults to 4 and can be set with -mnesia dc_dump_limit Number
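As a reading aid, the comparison in step 3 boils down to this predicate (a hypothetical helper, not Mnesia's actual code):

```erlang
%% Hypothetical helper illustrating step 3 (not Mnesia's actual code):
%% with the default DcDumpLimit of 4, the *.DCL is folded into the *.DCD
%% once it grows past a quarter of the *.DCD's size.
should_fold_dcl_into_dcd(DclBytes, DcdBytes, DcDumpLimit) ->
    DclBytes > DcdBytes / DcDumpLimit.
```

For example, with the default limit of 4 and a 100 MB *.DCD, the fold happens once *.DCL exceeds 25 MB.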

3. When does Mnesia dump?

1) Timed trigger

At Mnesia startup, the mnesia_controller process sets a timer that triggers dumps periodically.

mnesia_controller.erl:
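A simplified paraphrase of the timer setup inside init/1 (the message name and details vary across OTP releases):

```erlang
%% Paraphrased fragment of mnesia_controller.erl's init/1: a periodic
%% timer message drives time-based dumps.
Interval = mnesia_monitor:get_env(dump_log_time_threshold),
{ok, _Ref} = timer:send_interval(Interval, {async_dump_log, time_threshold})
```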


The default value is 180000 ms (3 minutes), and it can be changed with, for example, -mnesia dump_log_time_threshold 300000.

2) Trigger after a certain number of operations

On each data operation, Mnesia calls mnesia_log:log/1 or mnesia_log:slog/1 to write the log; each log write decrements trans_log_writes_left by 1, and when the value reaches 0, a dump is triggered.

mnesia_log.erl:
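Paraphrased, the logging path looks roughly like this:

```erlang
%% Paraphrased from mnesia_log.erl: every record appended to LATEST.LOG
%% also bumps the dump countdown via mnesia_dumper:incr_log_writes/0.
log(C) ->
    case need_log(C) andalso mnesia_monitor:use_dir() of
        true ->
            append(latest_log, C),
            mnesia_dumper:incr_log_writes();
        false ->
            ignore
    end.
```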


mnesia_dumper.erl:
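Paraphrased (the global lock that serializes the counter reset is omitted here):

```erlang
%% Paraphrased from mnesia_dumper.erl: when the countdown hits zero,
%% request an asynchronous dump and reset the counter to
%% dump_log_write_threshold.
incr_log_writes() ->
    Left = mnesia_lib:incr_counter(trans_log_writes_left, -1),
    if
        Left > 0 -> ignore;
        true     -> adjust_log_writes(true)
    end.

adjust_log_writes(DoCast) ->
    case DoCast of
        true  -> mnesia_controller:async_dump_log(write_threshold);
        false -> ignore
    end,
    Max = mnesia_monitor:get_env(dump_log_write_threshold),
    mnesia_lib:set_counter(trans_log_writes_left, Max).
```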


The default value is 1000 log writes, and it can be changed with, for example, -mnesia dump_log_write_threshold 50000.

3) Manual dump

Calling mnesia:dump_log/0 manually forces Mnesia to perform a dump; this call is synchronous.

mnesia.erl:
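The public entry point essentially delegates to a synchronous controller call:

```erlang
%% Paraphrased from mnesia.erl: the public API delegates to a
%% synchronous controller call.
dump_log() ->
    mnesia_controller:sync_dump_log(user).
```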


mnesia_controller.erl:
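And on the controller side, a synchronous dump is an ordinary gen_server call, so the caller blocks until the dump worker has run:

```erlang
%% Paraphrased from mnesia_controller.erl: unlike the asynchronous path,
%% this worker carries a reply-to, so the caller is answered on completion.
sync_dump_log(InitBy) ->
    call({sync_dump_log, InitBy}).
```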

Solving Mnesia Overload

Combining the above analysis, let's return to the overload problem. When writing to a disc_copies table, Mnesia writes the record to the ETS table and to the log file LATEST.LOG, then persists it via a timed or count-based dump. The persistence frequency can be controlled with dump_log_write_threshold and dump_log_time_threshold. When Mnesia starts a dump, the overload warning is thrown if the previous dump worker has not yet completed. Here, dump_log_write_threshold is the number of data operations after which Mnesia persists, and dump_log_time_threshold is the interval at which Mnesia persists.
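For reference, here is a minimal sketch of raising both thresholds in a release's sys.config (the values are illustrative, not recommendations):

```erlang
%% sys.config fragment: dump after 50000 logged operations instead of
%% 1000, and at most every 300 s instead of every 180 s.
[
 {mnesia, [
     {dump_log_write_threshold, 50000},
     {dump_log_time_threshold,  300000}
 ]}
].
```

The command-line equivalent is: erl -mnesia dump_log_write_threshold 50000 -mnesia dump_log_time_threshold 300000.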

One more question worth discussing here: why is there only one dumper at a time?

A dump renames the log file to PREVIOUS.LOG and then analyzes PREVIOUS.LOG to persist its data. If a second dump ran in parallel, its rename would replace the first dump's PREVIOUS.LOG and break the first dump's persistence. You might ask: why not rename to a unique file, say XXX.LOG, each time? In fact, if there were two dumpers, Mnesia could only guarantee that the second dump succeeds and would discard the first dump's data. This is why data may be lost when Mnesia reports an overload warning.

I ran a test in which I modified the Mnesia code to remove all asynchronous dumps and rely on a timed manual dump instead. With the original example workload, the first dump had not yet finished analyzing and persisting the log file while the new log file had already grown past 2 GB.

At the file I/O level, a dump means uncontrolled appends to one file on one side while another file is read, analyzed, and sequentially written on the other; this pushes against the read/write limits of disk I/O. So even with multiple dumpers, the result would only be to drive the CPU and disk even harder.

In addition, do not rely too heavily on dump_log_write_threshold and dump_log_time_threshold. Does changing them actually help?

Raising these parameters only lowers the dump frequency: more data accumulates between dumps, each dump takes longer, and the underlying problem remains. These parameters are meaningful when writes are merely bursty, where they help avoid data loss from occasional large batches of writes. But if writes are high-density at every moment, the disk simply cannot keep up; in that situation the problem should be solved in the write-buffering and persistence design, not by switching to another database.

Here's a little bit of experience to share:

1. If Mnesia has not reported overload warnings, do not change these parameters; adjusting them affects persistence.

2. Reading Mnesia data from many processes is fine, but hand writes off to a small number of dedicated processes (see the sketch below).
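A minimal sketch of that idea, assuming a hypothetical table my_table whose records look like {my_table, Key, Value}: a single gen_server owns all writes, so the write pressure reaching the dumper is bounded by one process.

```erlang
%% Minimal sketch of a dedicated-writer process (hypothetical module and
%% table names). All writes funnel through one gen_server; reads can
%% still use mnesia:dirty_read/2 from any process.
-module(my_table_writer).
-behaviour(gen_server).

-export([start_link/0, write/1]).
-export([init/1, handle_call/3, handle_cast/2]).

%% Start the single writer process, registered locally.
start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Synchronous write: callers block here, which naturally throttles the
%% overall write rate. Usage: my_table_writer:write({my_table, K, V}).
write(Record) ->
    gen_server:call(?MODULE, {write, Record}).

init([]) ->
    {ok, no_state}.

handle_call({write, Record}, _From, State) ->
    ok = mnesia:dirty_write(Record),
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

Making write/1 a call rather than a cast is deliberate: blocking callers applies back-pressure instead of letting the writer's mailbox grow without bound.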

