Oracle Data Recovery: A 50 TB ASM Crash Case


A customer's 50 TB ASM diskgroup crashed. After a concerted rescue effort it was brought back online, and this post is a brief record of the case; in the end it turned out to be far simpler than I had expected. Below is a detailed walkthrough of the failure.
– Database alert log:


Mon Jul  6 15:17:43 2015
Errors in file /oraclehome/admin/xxxx/udump/xxxx_ora_262606.trc:
ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 6: No such device or address
Additional information: -1
Additional information: 1048576
ORA-19502: write error on file "+DG_xxxx/xxxx/datafile/xxxx_load_1.679.847025081", blockno 3136352 (blocksize=32768)
ORA-27063: number of bytes read/written is incorrect
......
......
Errors in file /oraclehome/admin/xxxx/bdump/xxxx_lgwr_185246.trc:
ORA-00340: IO error processing online log 9 of thread 1
ORA-00345: redo log write error block 394389 count 2
ORA-00312: online log 9 thread 1: '+DG_xxxx/xxxx/onlinelog/group_9.433.789664647'
ORA-15078: ASM diskgroup was forcibly dismounted
Mon Jul  6 15:18:46 2015
LGWR: terminating instance due to error 340
Instance terminated by LGWR, pid = 185246
The database alert log shows an I/O error that left lgwr unable to write to the online redo log, after which lgwr forcibly terminated the instance (error 340). The obvious question is why lgwr could not write.
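
Since the visible symptom is an OS-level I/O error ("No such device or address"), a reasonable first check outside Oracle is the AIX error log; this is only a hedged illustration, and the device name below is made up for the example:

$ errpt | head -20            # recent AIX error-log entries; disk and path failures surface here
$ errpt -a -N hdiskpower15    # detailed report for a single device (illustrative name)

In this case, though, the next clue came from the ASM alert log: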


Thu Jul  3 08:24:26 2014
NOTE: ASMB process exiting due to lack of ASM file activity
Mon Jul  6 15:18:44 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/udump/+asm_ora_353008.trc:
ORA-27091: unable to queue I/O
ORA-27072: File I/O error
IBM AIX RISC System/6000 Error: 6: No such device or address
Additional information: 7
Additional information: -939091968
Additional information: -1
Mon Jul  6 15:18:46 2015
NOTE: cache initiating offline of disk 32  group 1
WARNING: offlining disk 32.1115675731 (DG_xxxx_0032) with mask 0x3
NOTE: PST update: grp = 1, dsk = 32, mode = 0x6
Mon Jul  6 15:18:46 2015
ERROR: too many offline disks in PST (grp 1)
Mon Jul  6 15:18:46 2015
ERROR: PST-initiated MANDATORY DISMOUNT of group DG_xxxx

From these messages it is clear that disk 32 in the ASM diskgroup hit an I/O problem, the diskgroup was forcibly dismounted, and the database instance then crashed. When the customer later tried to mount the diskgroup manually, errors like the following appeared:


SQL> alter diskgroup datadg mount
Mon Jul  6 15:33:50 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/bdump/+asm_dbw0_275092.trc:
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
......
......
ORA-15066: offlining disk "DG_xxxx_0032" may result in a data loss
OR
Mon Jul  6 15:33:51 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/bdump/+asm_b000_360654.trc:
ORA-00600: internal error code, arguments: [kfcDismount02], [], [], [], [], [], [], []
Mon Jul  6 15:33:52 2015
NOTE: cache dismounting group 1/0xDDDF2CC7 (DG_xxxx)
NOTE: dbwr not being msg'd to dismount
Mon Jul  6 15:33:52 2015

This error looked very much like a known bug, but even after the corresponding patch was installed, the next mount attempt still failed, though with a different error:

SQL> alter diskgroup dg_xxxx mount
Tue Jul  7 05:49:29 2015
NOTE: cache registered group DG_xxxx number=1 incarn=0x72661a1f
Tue Jul  7 05:49:31 2015
NOTE: Hbeat: instance first (grp 1)
Tue Jul  7 05:49:36 2015
NOTE: start heartbeating (grp 1)
NOTE: cache opening disk 0 of grp 1: DG_xxxx_0000 path:/dev/rhdiskpower41
NOTE: cache opening disk 1 of grp 1: DG_xxxx_0001 path:/dev/rhdiskpower42
NOTE: cache opening disk 2 of grp 1: DG_xxxx_0002 path:/dev/rhdiskpower43
NOTE: cache opening disk 3 of grp 1: DG_xxxx_0003 path:/dev/rhdiskpower44
NOTE: cache opening disk 4 of grp 1: DG_xxxx_0004 path:/dev/rhdiskpower45
......
......
NOTE: cache opening disk 33 of grp 1: DG_xxxx_0033 path:/dev/rhdiskpower15
NOTE: cache opening disk 34 of grp 1: DG_xxxx_0034 path:/dev/rhdiskpower14
NOTE: cache opening disk 35 of grp 1: DG_xxxx_0035 path:/dev/rhdiskpower13
NOTE: cache mounting (first) group 1/0x72661A1F (DG_xxxx)
NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
NOTE: crash recovery signalled OER-15131
ERROR: ORA-15131 signalled during mount of diskgroup DG_xxxx
NOTE: cache dismounting group 1/0x72661A1F (DG_xxxx)
ERROR: diskgroup DG_xxxx was not mounted
Tue Jul  7 05:50:10 2015
Shutting down instance: further logons disabled

As the log shows, ASM must perform crash recovery while mounting the diskgroup, and the checkpoint position is 6295.7329. The trace file reveals where that checkpoint read actually lands:


NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
CE: (0x7000000107c9640)  group=1 (DG_xxxx) obj=625  blk=256 (indirect)
    hashFlags=0x0100  lid=0x0002  lruFlags=0x0000  bastCount=1
    redundancy=0x11  fileExtent=0  AUindex=0 blockIndex=0
    copy #0:  disk=32  au=1638611
BH: (0x70000001079c360)  bnum=143 type=rcv reading state=rcvRead chgSt=not modifying
    flags=0x00000000  pinmode=excl  lockmode=null  bf=0x70000001048e000
    kfbh_kfcbh.fcn_kfbh = 0.0  lowAba=0.0  highAba=0.0
    last kfcbInitSlot return code=null cpkt lnk is null
*** 2015-07-07 05:26:12.382

So Oracle needs to read AU number 1638611 on disk 32. The default AU size in 10g is 1 MB, which puts this AU roughly 1.6 TB into the disk.
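
A quick sanity check on that offset, using nothing but shell arithmetic:

$ echo $((1638611 * 1024 * 1024))   # AU number x 1 MiB per AU = starting byte offset
1718208167936                       # about 1.56 TiB, i.e. roughly 1.6 TB into the disk

The checkpoint position itself is in fact easy to find: kfed can read it straight off disk, as follows: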


[xxxx:/oraclehome]$ kfed read /xxx/rhdiskpower13 aun=3 blkn=0|more
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            7 ; 0x002: KFBTYP_ACDC
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: T=0 NUMB=0x0
kfbh.block.obj:                       3 ; 0x008: TYPE=0x0 NUMB=0x3
kfbh.check:                  1350450563 ; 0x00c: 0x507e3d83
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfracdc.eyec[0]:                     65 ; 0x000: 0x41
kfracdc.eyec[1]:                     67 ; 0x001: 0x43
kfracdc.eyec[2]:                     68 ; 0x002: 0x44
kfracdc.eyec[3]:                     67 ; 0x003: 0x43
kfracdc.thread:                       1 ; 0x004: 0x00000001
kfracdc.lastAba.seq:         4294967295 ; 0x008: 0xffffffff
kfracdc.lastAba.blk:         4294967295 ; 0x00c: 0xffffffff
kfracdc.blk0:                         1 ; 0x010: 0x00000001
kfracdc.blks:                     10751 ; 0x014: 0x000029ff
kfracdc.ckpt.seq:                  6295 ; 0x018: 0x00001897    -- the ckpt value (seq.blk = 6295.7329)
kfracdc.ckpt.blk:                  7329 ; 0x01c: 0x00001ca1
kfracdc.fcn.base:             297751371 ; 0x020: 0x11bf534b
kfracdc.fcn.wrap:                     0 ; 0x024: 0x00000000
kfracdc.bufBlks:                     64 ; 0x028: 0x00000040

After various further attempts, the customer was still getting the following error when mounting the diskgroup:


Tue Jul  7 18:03:03 2015
Errors in file /oraclehome/product/10.2.0/admin/+ASM/udump/+asm_ora_438636.trc:
ORA-00600: internal error code, arguments: [kfcChkAio01], [], [], [], [], [], [], []
ORA-15196: invalid ASM block header [kfc.c:5553] [blk_kfbl] [625] [2147483904] [2147483905 != 2147483904]
NOTE: crash recovery signalled OER-600
ERROR: ORA-600 signalled during mount of diskgroup DG_xxxx
NOTE: cache dismounting group 1/0xE70AB6D0 (DG_xxxx)
ERROR: diskgroup DG_xxxx was not mounted
Tue Jul  7 18:05:38 2015

This matches the Oracle MOS note "Understanding and fixing errors ORA-600 [kfcChkAio01] and ORA-15196" (Doc ID 757529.1) exactly, so we advised the customer simply to follow that note. The fix is in fact easy: the note supplies a shell script in which only the block number needs to be changed; a hedged sketch of the pattern it automates is shown below.
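
The sketch below is not the MOS script itself, only an illustration of the generic kfed read/edit/merge cycle such a repair performs; the device path, AU, and block numbers are placeholders, and the real values must come from the note and from the ORA-15196 trace:

# Hypothetical repair pattern only -- follow MOS Doc ID 757529.1 for the real script.
DISK=/dev/rhdiskpower15                       # placeholder: device backing the damaged disk
kfed read  $DISK aun=2 blkn=1 text=blk.txt    # dump the inconsistent block as editable text
# edit blk.txt: set kfbh.block.blk to the expected value from the ORA-15196 message
kfed merge $DISK aun=2 blkn=1 text=blk.txt    # write the corrected block back to disk

Once the affected block had been patched, the diskgroup mounted successfully: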


Tue Jul  7 18:05:38 2015
SQL> alter diskgroup dg_xxxx mount
Tue Jul  7 18:05:38 2015
NOTE: cache registered group DG_xxxx number=1 incarn=0xce0ab6d3
Tue Jul  7 18:05:38 2015
NOTE: Hbeat: instance first (grp 1)
Tue Jul  7 18:05:43 2015
NOTE: start heartbeating (grp 1)
......
......
NOTE: cache mounting (first) group 1/0xCE0AB6D3 (DG_xxxx)
NOTE: starting recovery of thread=1 ckpt=6295.7329 group=1
NOTE: advancing ckpt for thread=1 ckpt=6295.8649
NOTE: cache recovered group 1 to fcn 0.297779775
Tue Jul  7 18:05:43 2015
NOTE: opening chunk 1 at fcn 0.297779775 ABA
NOTE: seq=6296 blk=8650
Tue Jul  7 18:05:43 2015
NOTE: cache mounting group 1/0xCE0AB6D3 (DG_xxxx) succeeded
SUCCESS: diskgroup DG_xxxx was mounted
Tue Jul  7 18:05:45 2015
NOTE: recovering COD for group 1/0xce0ab6d3 (DG_xxxx)
SUCCESS: completed COD recovery for group 1/0xce0ab6d3 (DG_xxxx)
Finally, let us step back and ask why this happened at all. The root cause was a sloppy disk-add operation performed more than a year earlier:


Tue May 20 15:43:26 2014
SQL> alter diskgroup dg_xxxx add disk '/xxx/rhdiskpower24' size 1677721M, ......
     '/xxx/rhdiskpower16' size 1677721M, '/xxx/rhdiskpower15' size 1677721M,
     '/xxx/rhdiskpower14' size 1677721M, '/xxx/rhdiskpower13' size 1677721M rebalance power 8
Wed May 21 08:45:13 2014
Starting background process ASMB
ASMB started with pid=13, OS id=267028
Wed May 21 08:45:14 2014
NOTE: ASMB process exiting due to lack of ASM file activity
Wed May 21 12:24:34 2014
NOTE: stopping process ARB5
NOTE: stopping process ARB2
NOTE: stopping process ARB6
NOTE: stopping process ARB1
NOTE: stopping process ARB3
NOTE: stopping process ARB7
NOTE: stopping process ARB4
NOTE: stopping process ARB0
Wed May 21 12:24:38 2014
SUCCESS: rebalance completed for group 1/0x595ad46e (DG_xxxx)
Wed May 21 12:24:38 2014
SUCCESS: rebalance completed for group 1/0x595ad46e (DG_xxxx)
The disk that later failed is disk 32, added with a declared size of 1677721 MB; on inspection, its actual physical size is 1638400 MB. In other words, the ADD DISK command specified an inflated size and Oracle took it at face value, even though the disk was never that large. This also says something about how loosely ASM validated disk sizes in Oracle 10g. The problem is now clear-cut: with 1 MB AUs, a 1638400 MB disk contains only AUs 0 through 1638399, so the failing AU number 1638611 is greater than 1638400 and points past the physical end of the disk. ASM tried to use space that did not exist, and crashed.
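
The obvious prevention is to verify a disk's real size before handing it to ASM. A hedged example on AIX, where bootinfo -s reports a disk's size in MB (the device name is illustrative):

$ bootinfo -s hdiskpower15     # physical size in MB, straight from the OS
1638400

Comparing this number with the TOTAL_MB that v$asm_disk records after the ADD DISK would have exposed the 39321 MB (about 38 GB) of phantom space immediately.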
 
Note: the customer's ASM diskgroup used 36 disks of 1.6 TB each, about 53 TB in total, and nearly all of it was in use. Fortunately the damage could be repaired this simply; otherwise the difficulty and workload of the recovery would have been enormous. Undeniably, 雲和恩墨 (Enmotech) remains the strongest data-recovery company in China, bar none!
