Repost: http://www.cnblogs.com/gordonchao/archive/2010/12/13/1904606.html
Symptom: while checking the page and exploring the data, something looked wrong: the volume of data produced today was far below the usual level, which was clearly not normal.
Tracking down the cause: the logs contained several warnings like this: ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
While searching I found that plenty of people abroad had hit this problem. One thing worth saying here: they are very good at describing a problem. I am copying one such description below, since the situation matches mine closely (in the end he never got the answer he wanted either, but that is another story and not the point):
we encountered the following mnesia warning report in our system log:

    Mnesia is overloaded: {dump_log, write_threshold}

The log contains several such reports within one second and then …

* The core is one mnesia table of type disc_copies that contains persistent state of all entities (concurrent processes) in our system (one table row for one entity).
* The system consists of 20 such entities.
* Each entity is responsible for updating its state in the table …
* We use mnesia:dirty_write/2, because we have no dependency among tables and each entity updates its state only.

In the worst case, there is 20 processes that want to write to the table but each to a different row.

* What precisely does the report mean?
* Can we do something about it?
* We plan to scale from units to thousands of entities. Will this be a problem? If so, how can we overcome it? If not, why not?
Source: [Q] Mnesia is overloaded
(A very detailed write-up, and something most of us could stand to learn from!) I should still describe our own system structure here (the quote above does it better than I could, but briefly):
We have a module that sits waiting for incoming data. For each piece of data it receives, it spawns a process to handle it, and that process then writes into the database table. This leads straight to the problem above, and is the cause of the warning: frequent asynchronous writes.
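A minimal sketch of that pattern (module, table, and record names here are hypothetical, not from our actual system):

```erlang
%% Hypothetical sketch: one receiver spawns a writer per message.
-module(receiver).
-export([loop/0]).

-record(entity, {id, state}).

%% Assumes mnesia is already started and an 'entity' disc_copies
%% table exists. Every incoming item gets its own process doing an
%% asynchronous dirty write, so under load the transaction log
%% fills faster than it can be dumped -- which is exactly what the
%% {dump_log, write_threshold} warning reports.
loop() ->
    receive
        {data, Id, State} ->
            spawn(fun() ->
                mnesia:dirty_write(#entity{id = Id, state = State})
            end),
            loop()
    end.
```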
Fixing it: the cause has been found, so how do we solve it? This problem has actually come up N times abroad; someone proposed adding it to the FAQ, but who knows when that will happen, so until then a workaround is needed. I found the following:
If you're using mnesia disc_copies tables and doing a lot of writes all at once, you've probably run into the following message

    =ERROR REPORT==== 10-Dec-2008::18:07:19 ===
    Mnesia(node@host): ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}

This warning event can get really annoying, especially when they start happening every second. But you can eliminate them, or at least drastically reduce their occurrence.
Synchronous Writes
The first thing to do is make sure to use sync_transaction or sync_dirty. Doing synchronous writes will slow down your writes in a good way, since the functions won't return until your record(s) have been written to the transaction log. The alternative, which is the default, is to do asynchronous writes, which can fill the transaction log far faster than it gets dumped, causing the above error report.
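Applied to a write like the one in our system, this is a small change; both calls below are standard mnesia API (the record is hypothetical):

```erlang
%% Asynchronous dirty write (the default style that overloads the log):
%%   mnesia:dirty_write(#entity{id = Id, state = State}).

%% Synchronous dirty write: returns only after all active replicas
%% have performed the update.
write_dirty_sync(Rec) ->
    mnesia:sync_dirty(fun() -> mnesia:write(Rec) end).

%% Synchronous transaction: waits until the commit has been logged
%% on all nodes before returning.
write_tx_sync(Rec) ->
    mnesia:sync_transaction(fun() -> mnesia:write(Rec) end).
```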
Mnesia Application Configuration
If synchronous writes aren't enough, the next trick is to modify 2 obscure configuration parameters. The mnesia_overload event generally occurs when the transaction log needs to be dumped, but the previous transaction log dump hasn't finished yet. Tweaking these parameters will make the transaction log dump less often, and the disc_copies tables dump to disk more often. NOTE: these parameters must be set before mnesia is started; changing them at runtime has no effect. You can set them thru the command line or in a config file.
dc_dump_limit
This variable controls how often disc_copies tables are dumped from memory. The default value is 4, which means if the size of the log is greater than the size of table / 4, then a dump occurs. To make table dumps happen more often, increase the value. I've found setting this to 40 works well for …
dump_log_write_threshold
This variable defines the maximum number of writes to the transaction log before a new dump is performed. The default value is 100, so a new transaction log dump is performed after every 100 writes. If you're doing hundreds or thousands of writes in a short period of time, then there's no way mnesia can keep up. I set this value to 50000, which is a huge increase, but I have enough RAM to handle it. If you're worried that this high value means the transaction log will rarely get dumped when there's very few writes occurring, there's also a dump_log_time_threshold configuration variable, which by default dumps the log every 3 minutes.
How it Works
I might be wrong on the theory since I didn't actually write or design mnesia, but here's my understanding of what's happening. Each mnesia activity is recorded to a single transaction log. This transaction log then gets dumped to table logs, which in turn are dumped to the table file on disk. By increasing the dump_log_write_threshold, transaction log dumps happen much less often, giving each dump more time to complete before the next dump is triggered. And increasing dc_dump_limit helps ensure that the table log is also dumped to disk before the next transaction dump occurs.
Source: How to Eliminate Mnesia Overload Events
Two fixes are described here: one is to avoid frequent asynchronous writes, the other is to loosen the relevant thresholds in mnesia's configuration.
1. The author recommends doing writes with sync_transaction or sync_dirty, on the grounds that asynchronous writes are what trigger this warning.
2. The configuration changes must be made when starting Erlang. The author recommends changing dc_dump_limit from 4 to 40 and dump_log_write_threshold from 100 to 50000. To apply this when starting erl:
erl -mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40
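The same settings can also go in an Erlang config file loaded with `erl -config <file>`; a sketch (the file name sys.config is just the usual convention):

```erlang
%% sys.config -- the settings only take effect if they are
%% in place before mnesia starts.
[{mnesia, [{dump_log_write_threshold, 50000},
           {dc_dump_limit, 40}]}].
```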
OK, here is what the two parameters mean:
dc_dump_limit: controls how often disc_copies tables are dumped from memory; a dump is triggered when the transaction log grows larger than (table size / dc_dump_limit).
dump_log_write_threshold: the maximum number of writes to the transaction log before a new log dump is performed.
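As a side note, the same overload condition behind the log warning can be caught programmatically: mnesia delivers an {mnesia_overload, Details} system event to any process that has subscribed with mnesia:subscribe(system). A sketch:

```erlang
-module(overload_watch).
-export([start/0]).

%% Assumes mnesia is already running on this node.
start() ->
    %% Subscribe the calling process to mnesia system events.
    {ok, _Node} = mnesia:subscribe(system),
    loop().

loop() ->
    receive
        {mnesia_system_event, {mnesia_overload, Details}} ->
            io:format("mnesia overloaded: ~p~n", [Details]),
            loop();
        {mnesia_system_event, _Other} ->
            loop()
    end.
```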