** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}


Reposted from: http://www.cnblogs.com/gordonchao/archive/2010/12/13/1904606.html

Symptom: while checking the page, I noticed the data looked abnormal. Today's figures were much lower than usual, which was clearly not right.

Tracking down the cause: checking the log files, I found several warnings like this: ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}

Searching around, I found that quite a few people abroad have run into this problem. I have to say, they are very good at describing a problem; I am copying one description here verbatim, since the situation is much like mine (of course, in the end he never got the answer he wanted, but that is another story and not the point here):

we encountered the following mnesia warning report in our system log:

Mnesia is overloaded: {dump_log, write_threshold}

The log contains several such reports within one second and then
nothing for a while.

Our setup:
  * The core is one mnesia table of type disc_copies that contains
    persistent state of all entities (concurrent processes) in our
    system (one table row for one entity).
  * The system consists of 20 such entities.
  * Each entity is responsible for updating its state in the table
    whenever it changes.
  * We use mnesia:dirty_write/2, because we have no dependency
    among tables and each entity updates its state only.

In the worst case, there are 20 processes that want to write to the
table but each to a different row.

Our questions:
  * What precisely does the report mean?
  * Can we do something about it?
  * We plan to scale from units to thousands of entities. Will this
    be a problem? If so, how can we overcome it? If not, why not?

Source: [Q] Mnesia is overloaded

(His description is very thorough; on this point most of us could stand to learn from him!) I still ought to outline our own system structure here. The quote above already puts it better than I could, but briefly:

We have a module that waits to receive data. For each item it receives, it spawns a process to handle it, and that process then writes a record into the table. This leads to exactly the problem above, and it is the cause of the warning: frequent asynchronous writes.
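A minimal sketch of this pattern (the module, table, and record names here are hypothetical, not from our actual code): one short-lived process per incoming message, each doing an asynchronous dirty write, so the transaction log can fill faster than Mnesia dumps it.

```erlang
%% Hypothetical sketch of the problematic pattern described above.
-module(receiver_sketch).
-export([loop/0]).

-record(state, {id, data}).

loop() ->
    receive
        {data, Id, Data} ->
            %% One new process per message; mnesia:dirty_write/2 returns
            %% immediately, so writes pile up in the transaction log.
            spawn(fun() ->
                      mnesia:dirty_write(state_tab,
                                         #state{id = Id, data = Data})
                  end),
            loop()
    end.
```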

Fixing it: the cause has been found, so how do we fix it? This problem has actually come up N times on the mailing lists; someone proposed adding it to the FAQ, but who knows when that will happen. Until then a workaround is needed, and I found the following:

If you're using mnesia disc_copies tables and doing a lot of writes all at
once, you've probably run into the following message

=ERROR REPORT==== 10-Dec-2008::18:07:19 ===
Mnesia(node@host): ** WARNING ** Mnesia is overloaded: {dump_log,
write_threshold}

This warning event can get really annoying, especially when they start
happening every second. But you can eliminate them, or at least drastically
reduce their occurrence.

Synchronous Writes

The first thing to do is make sure to use sync_transaction or sync_dirty.
Doing synchronous writes will slow down your writes in a good way, since
the functions won't return until your record(s) have been written to the
transaction log. The alternative, which is the default, is to do asynchronous
writes, which can fill the transaction log far faster than it gets dumped,
causing the above error report.
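As a sketch (the record and function names are hypothetical), the two synchronous variants can be expressed via mnesia:activity/2; both block until the record has reached the transaction log:

```erlang
%% Hypothetical sketch: synchronous variants of the same write.
-record(state, {id, data}).

write_sync_tx(Id, Data) ->
    %% Full transaction semantics, waits for the log write.
    mnesia:activity(sync_transaction,
                    fun() -> mnesia:write(#state{id = Id, data = Data}) end).

write_sync_dirty(Id, Data) ->
    %% No transaction overhead, but still waits for the log write.
    mnesia:activity(sync_dirty,
                    fun() -> mnesia:write(#state{id = Id, data = Data}) end).
```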

Mnesia Application Configuration

If synchronous writes aren't enough, the next trick is to modify 2 obscure
configuration parameters. The mnesia_overload event generally occurs
when the transaction log needs to be dumped, but the previous transaction
log dump hasn't finished yet. Tweaking these parameters will make the
transaction log dump less often, and the disc_copies tables dump to disk
more often. NOTE: these parameters must be set before mnesia is started;
changing them at runtime has no effect. You can set them through the
command line or in a config file.

dc_dump_limit

This variable controls how often disc_copies tables are dumped from
memory. The default value is 4, which means if the size of the log is greater
than the size of the table / 4, then a dump occurs. To make table dumps happen
more often, increase the value. I've found setting this to 40 works well for
my purposes.

dump_log_write_threshold

This variable defines the maximum number of writes to the transaction log
before a new dump is performed. The default value is 100, so a new
transaction log dump is performed after every 100 writes. If you're doing
hundreds or thousands of writes in a short period of time, then there's no
way mnesia can keep up. I set this value to 50000, which is a huge
increase, but I have enough RAM to handle it. If you're worried that this high
value means the transaction log will rarely get dumped when there are very
few writes occurring, there's also a dump_log_time_threshold configuration
variable, which by default dumps the log every 3 minutes.
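Assuming a standard OTP setup, the same settings could also go in a sys.config file instead of the command line (the values here are the ones suggested above; dump_log_time_threshold is in milliseconds, and 180000 is its 3-minute default):

```erlang
%% sys.config sketch: equivalent to passing -mnesia flags to erl.
[
 {mnesia, [
     {dump_log_write_threshold, 50000},
     {dc_dump_limit, 40},
     {dump_log_time_threshold, 180000}
 ]}
].
```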

How it Works

I might be wrong on the theory since I didn't actually write or design
mnesia, but here's my understanding of what's happening. Each mnesia
activity is recorded to a single transaction log. This transaction log then
gets dumped to table logs, which in turn are dumped to the table file on
disk. By increasing the dump_log_write_threshold, transaction log dumps
happen much less often, giving each dump more time to complete before the
next dump is triggered. And increasing dc_dump_limit helps ensure that the
table log is also dumped to disk before the next transaction dump occurs.

Source: How to Eliminate Mnesia Overload Events

So two approaches are described: one is to avoid frequent asynchronous writes; the other is to loosen mnesia's configuration thresholds.

1. The author recommends using sync_transaction or sync_dirty for write operations, since he considers asynchronous writes the cause of this warning.

2. The configuration changes take effect when Erlang is started: the author recommends changing dc_dump_limit from 4 to 40, and dump_log_write_threshold from 100 to 50000. To apply this when starting erl:

erl -mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40

OK, here is what these two parameters mean:

dc_dump_limit: controls how often disc_copies tables are dumped from memory to disk.

dump_log_write_threshold: the maximum number of writes to the transaction log before a new dump is performed (my translation may not be spot-on, but you get the idea ~_~).
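Assuming the -mnesia flags were passed as above, a quick sanity check from the node's shell can confirm the values were actually picked up as application environment settings (a sketch; the function name is hypothetical):

```erlang
%% Sketch: verify the -mnesia command-line flags took effect
%% (run inside the started node).
check_mnesia_settings() ->
    {ok, WriteThreshold} = application:get_env(mnesia, dump_log_write_threshold),
    {ok, DcDumpLimit}    = application:get_env(mnesia, dc_dump_limit),
    io:format("dump_log_write_threshold = ~p, dc_dump_limit = ~p~n",
              [WriteThreshold, DcDumpLimit]).
```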
