Reprinted: http://www.cnblogs.com/gordonchao/archive/2010/12/13/1904606.html
Symptom: checking the page showed abnormal data; the volume of data generated today was far below the normal level.
Finding the cause: checking the log file turned up several warnings like this: ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
Searching around, I found that many people abroad have run into this problem. I have to say they are very good at describing a problem, so I will quote one description here, since the situation is similar to mine (of course, he did not get the answer he wanted in the end, but that is a side note and not important):
we encountered the following mnesia warning report in our system log:

    Mnesia is overloaded: {dump_log, write_threshold}

The log contains several such reports within one second and then ...

* The core is one mnesia table of type disc_copies that contains persistent state of all entities (concurrent processes) in our system (one table row for one entity).
* The system consists of 20 such entities.
* Each entity is responsible for updating its state in the table ...
* We use mnesia:dirty_write/2, because we have no dependency among tables and each entity updates its state only.

In the worst case, there are 20 processes that want to write to the table but each to a different row.

* What precisely does the report mean?
* Can we do something about it?
* We plan to scale from units to thousands of entities. Will this be a problem? If so, how can we overcome it? If not, why not?
Reference address: [Q] mnesia is overloaded
(The description above is very detailed; it is worth learning from!) Now let me describe our own system structure (I cannot describe it as well as the post above, but briefly):
We have a module that waits to receive data. Each time it receives a piece of data, it spawns a process to handle it, and that process then writes to the data table. This is the source of the problem: the warning is triggered by these frequent asynchronous writes.
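Assuming a setup like ours, the problematic pattern can be sketched roughly as follows (the module, table, and record names are all invented for illustration; this is not our actual code):

```erlang
%% Sketch of the problematic pattern: one short-lived process per
%% incoming piece of data, each firing an asynchronous dirty write.
%% The table (stats) and record shape are hypothetical.
-module(receiver).
-export([loop/0]).

-record(stats, {key, value}).

loop() ->
    receive
        {data, Key, Value} ->
            %% One process per message...
            spawn(fun() ->
                %% ...and dirty_write/1 returns immediately, before the
                %% record reaches the transaction log. A burst of messages
                %% can therefore outrun the log dumper and trigger
                %% "Mnesia is overloaded: {dump_log, write_threshold}".
                mnesia:dirty_write(#stats{key = Key, value = Value})
            end),
            loop()
    end.
```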
Solution: now that the cause is found, how can the problem be solved? This question has in fact come up many times on foreign mailing lists; someone even suggested adding it to the FAQ, but who knows when that will happen, so a workaround is needed in the meantime. I found the following write-up.
If you're using mnesia disc_copies tables and doing a lot of writes all at once, you've probably run into the following message:

    =ERROR REPORT==== 10-Dec-2008::18:07:19 ===
    Mnesia(node@host): ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}

This warning event can get really annoying, especially when they start happening every second. But you can eliminate them, or at least drastically reduce their occurrence.
Synchronous writes
The first thing to do is make sure to use sync_transaction or sync_dirty. Doing synchronous writes will slow down your writes in a good way, since the functions won't return until your record(s) have been written to the transaction log. The alternative, which is the default, is to do asynchronous writes, which can fill the transaction log far faster than it gets dumped, causing the above error report.
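As a sketch, the two synchronous variants look like this (the table and record are invented for illustration; sync_dirty/1 and sync_transaction/1 are part of the standard mnesia API):

```erlang
%% Hypothetical record/table used only for illustration.
-record(stats, {key, value}).

%% Synchronous dirty write: like mnesia:dirty_write/1, but the call
%% does not return until the write has been performed, so a flood of
%% writers is naturally throttled.
write_sync_dirty(Key, Value) ->
    mnesia:sync_dirty(fun() ->
                          mnesia:write(#stats{key = Key, value = Value})
                      end).

%% Synchronous transaction: full transactional semantics, and the
%% caller waits until the commit has been logged before continuing.
write_sync_tx(Key, Value) ->
    mnesia:sync_transaction(fun() ->
                                mnesia:write(#stats{key = Key, value = Value})
                            end).
```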
Mnesia application configuration
If synchronous writes aren't enough, the next trick is to modify 2 obscure configuration parameters. The mnesia_overload event generally occurs when the transaction log needs to be dumped, but the previous transaction log dump hasn't finished yet. Tweaking these parameters will make the transaction log dump less often, and the disc_copies tables dump to disk more often. NOTE: these parameters must be set before mnesia is started; changing them at runtime has no effect. You can set them through the command line or in a config file.
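Concretely, the two ways of setting them look like this (the values are the ones recommended below; "-AppName Key Value" flags and "-config" are Erlang's standard application-parameter mechanisms). This is a config fragment, not something runnable on its own:

```shell
# On the erl command line, before mnesia is started:
erl -mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40

# Or in a config file, e.g. mnesia.config, loaded with: erl -config mnesia
#   [{mnesia, [{dump_log_write_threshold, 50000},
#              {dc_dump_limit, 40}]}].
```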
dc_dump_limit
This variable controls how often disc_copies tables are dumped from memory. The default value is 4, which means if the size of the log is greater than the size of the table / 4, then a dump occurs. To make table dumps happen more often, increase the value. I've found setting this to 40 works well for ...
dump_log_write_threshold
This variable defines the maximum number of writes to the transaction log before a new dump is performed. The default value is 100, so a new transaction log dump is performed after every 100 writes. If you're doing hundreds or thousands of writes in a short period of time, then there's no way mnesia can keep up. I set this value to 50000, which is a huge increase, but I have enough RAM to handle it. If you're worried that this high value means the transaction log will rarely get dumped when there are very few writes occurring, there's also a dump_log_time_threshold configuration variable, which by default dumps the log every 3 minutes.
How it works
I might be wrong on the theory since I didn't actually write or design mnesia, but here's my understanding of what's happening. Each mnesia activity is recorded to a single transaction log. This transaction log then gets dumped to table logs, which in turn are dumped to the table file on disk. By increasing the dump_log_write_threshold, transaction log dumps happen much less often, giving each dump more time to complete before the next dump is triggered. And increasing dc_dump_limit helps ensure that the table log is also dumped to disk before the next transaction dump occurs.
Reference address: how to eliminate mnesia overload events
Two solutions are offered here: one is to avoid frequent asynchronous writes, and the other is to loosen the corresponding mnesia configuration parameters.
1. Use sync_transaction or sync_dirty to write data; the warning is caused by asynchronous writes.
2. Adjust the configuration when starting Erlang. The author above recommends raising dc_dump_limit from its default of 4 to 40, and dump_log_write_threshold from its default of 100 to 50000:
erl -mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40
OK. The meanings of these two parameters are as follows:
dc_dump_limit: controls how often disc_copies tables are dumped from memory; a dump is triggered when the transaction log grows larger than the table size divided by this value.
dump_log_write_threshold: the maximum number of writes to the transaction log before a new log dump is performed.