Contact: Phone/WeChat (+86 13429648788), QQ (107644445)
Author: Xifenfei (惜分飞) © All rights reserved. [Reproduction in any form without the author's consent is prohibited; the author reserves the right to pursue legal liability.]
A friend asked for help with a database failure case. The sequence of events was roughly this: a 39T LUN had been split into 15 partitions with parted and handed to Oracle ASM to create disk group data4. Four more partitions were then carved out and made into a data5 disk group, but because the au_size had been set incorrectly, that disk group and those four partitions were dropped. Six new partitions were allocated next, and the last five of them were used to create the data5 disk group. After the system had been running for a while, Oracle ran short of space; during the check someone assumed that only the first 15 partitions of this LUN were in use, manually deleted the trailing 6 partitions, and created 4 new ones. The database then crashed, and it became clear that in-use partitions had been deleted, so the 4 newly created partitions were deleted as well. When I took over the case, the partition table of the 39T LUN looked like this:
[root@node1 linux64]# parted /dev/mapper/36000d31003d39e000000000000000004
GNU Parted 2.1
Using /dev/mapper/36000d31003d39e000000000000000004
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/36000d31003d39e000000000000000004: 39.6TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name         Flags
 1      2097kB  2000GB  2000GB               asmdata4-1
 2      2000GB  4000GB  2000GB               asmdata4-2
 3      4000GB  6000GB  2000GB               asmdata4-3
 4      6000GB  8000GB  2000GB               asmdata4-4
 5      8000GB  10.0TB  2000GB               asmdata4-5
 6      10.0TB  12.0TB  2000GB               asmdata4-6
 7      12.0TB  14.0TB  2000GB               asmdata4-7
 8      14.0TB  16.0TB  2000GB               asmdata4-8
 9      16.0TB  18.0TB  2000GB               asmdata4-9
10      18.0TB  20.0TB  2000GB               asmdata4-10
11      20.0TB  22.0TB  2000GB               asmdata4-11
12      22.0TB  24.0TB  2000GB               asmdata4-12
13      24.0TB  26.0TB  2000GB               asmdata4-13
14      26.0TB  28.0TB  2000GB               asmdata4-14
15      28.0TB  30.0TB  2000GB               asmdata4-15
Under normal operation, the ASM disks created on this LUN's partitions were defined as follows:
SQL> CREATE DISKGROUP DATA4 EXTERNAL REDUNDANCY DISK
'/dev/mapper/36000d31003d39e000000000000000004p1' SIZE 1907346M ,
'/dev/mapper/36000d31003d39e000000000000000004p2' SIZE 1907350M ,
'/dev/mapper/36000d31003d39e000000000000000004p3' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p4' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p5' SIZE 1907350M
ATTRIBUTE 'compatible.asm' = '11.2.0.0.0' , 'au_size' = '8M' /* ASMCA */

SQL> ALTER DISKGROUP DATA4 ADD DISK
'/dev/mapper/36000d31003d39e000000000000000004p10' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p6' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p7' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p8' SIZE 1907350M ,
'/dev/mapper/36000d31003d39e000000000000000004p9' SIZE 1907348M /* ASMCA */

SQL> ALTER DISKGROUP DATA4 ADD DISK
'/dev/mapper/36000d31003d39e000000000000000004p11' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p12' SIZE 1907350M ,
'/dev/mapper/36000d31003d39e000000000000000004p13' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p14' SIZE 1907348M ,
'/dev/mapper/36000d31003d39e000000000000000004p15' SIZE 1907350M /* ASMCA */

SQL> CREATE DISKGROUP DATA5 EXTERNAL REDUNDANCY DISK
'/dev/mapper/36000d31003d39e000000000000000004p17' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p18' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p19' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p20' SIZE 1716614M ,
'/dev/mapper/36000d31003d39e000000000000000004p21' SIZE 1621246M
ATTRIBUTE 'compatible.asm' = '11.2.0.0.0' , 'au_size' = '4M' /* ASMCA */
Given the current state, all the data4 partitions are intact; the task is to locate the data of the 5 partitions that made up data5. Because the customer is not sure how large the deleted p16 partition was, the starting positions of the following 5 partitions are hard to pin down, which blocks the recovery. Since p15 ends at 30.0TB, the DATA5 disk headers must sit somewhere beyond that point, so a shell script combined with kfed was used to try to locate the ASM disk header information:
#!/bin/bash
j=xxxxxxxxxxx
for ((i=xxxxxx; i<=j; i++))
do
  echo "-----$i--------" >> /home/get_au1.txt
  kfed read /dev/mapper/36000d31003d39e000000000000000004 aun=$i | grep "kfdhdb.dskname: DATA" >> /home/get_au.txt
done
The script returned nothing. Analysis showed that because the LUN is so large, the aun values become too big and overflow inside kfed, so it cannot read the blocks correctly. Drawing on how parted lays out partitions, part of the blocks were dd'd out manually for analysis.
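One way around the overflow, sketched below with placeholder values (START_4M, COUNT_4M and SCAN_MAX are assumptions, not the values used in this case), is to dd a window of the LUN starting somewhere past the 30.0TB end of p15 into a small local file and scan that file with kfed, where the aun values stay small:

# Sketch with placeholder values: carve a window of the LUN (starting somewhere
# past the 30.0TB end of p15, where the DATA5 headers must lie) into a local
# file, then scan that file with kfed using small aun values.
START_4M=xxxxxx            # assumed: search start on the LUN, in 4MiB units
COUNT_4M=xxxxxx            # assumed: window size, in 4MiB units
dd if=/dev/mapper/36000d31003d39e000000000000000004 bs=4M \
   skip=$START_4M count=$COUNT_4M of=xifenfei.dd

SCAN_MAX=xxxxxx            # assumed: enough aun values to cover the whole window
for ((i=0; i<SCAN_MAX; i++))
do
  kfed read xifenfei.dd aun=$i 2>/dev/null | \
    grep "kfdhdb.dskname: DATA" && echo "header found at aun=$i"
done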
[root@node1 bak]# kfed read xifenfei.dd aun=134|more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3357988283 ; 0x00c: 0xc826d5bb
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:              DATA5_0000 ; 0x028: length=10
kfdhdb.grpname:                   DATA5 ; 0x048: length=5
kfdhdb.fgname:               DATA5_0000 ; 0x068: length=10
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33116450 ; 0x0a8: HOUR=0x2 DAYS=0x9 MNTH=0x4 YEAR=0x7e5
kfdhdb.crestmp.lo:            477378560 ; 0x0ac: USEC=0x0 MSEC=0x10e SECS=0x7 MINS=0x7
kfdhdb.mntstmp.hi:             33116450 ; 0x0b0: HOUR=0x2 DAYS=0x9 MNTH=0x4 YEAR=0x7e5
kfdhdb.mntstmp.lo:            486256640 ; 0x0b4: USEC=0x0 MSEC=0x2ec SECS=0xf MINS=0x7
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000
kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80
kfdhdb.dsksize:                  429153 ; 0x0c4: 0x00068c61
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
This located the first disk of data5 and pinned down its starting position. A dd command was then constructed to copy that partition's data onto a new disk:
dd if=/dev/mapper/36000d31003d39e000000000000000004 bs=4M skip=xxxxx count=429153 of=/dev/sdu
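The count of 429153 is taken directly from kfdhdb.dsksize in the header (the disk size in 4M allocation units), while the skip value, elided above, corresponds to the header's offset on the LUN expressed in 4M units. A hedged sketch of that arithmetic, assuming the analysis window was extracted with skip=START_4M and that kfed addressed the dump with its default AU size (typically 1MiB unless aus= is passed):

# Hypothetical arithmetic: map the aun at which kfed reported the DATA5_0000
# header inside the dump back to a skip value for the full LUN.
START_4M=xxxxxx                 # assumed: skip used when extracting the window (4MiB units)
HEADER_AUN=134                  # aun at which kfed found the header inside the dump
KFED_AU=$((1024*1024))          # assumed: AU size kfed used to address the dump
OFFSET_BYTES=$((START_4M*4*1024*1024 + HEADER_AUN*KFED_AU))
echo "dd skip (bs=4M): $((OFFSET_BYTES / (4*1024*1024)))"
# count=429153 comes straight from the header: kfdhdb.dsksize, in 4M AUs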
The remaining disks were handled the same way; in the end all 5 ASM disks were located and dd'd onto new disks. CRS was then started and data5 mounted.
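Before handing the copies back to ASM it is worth confirming that each new disk really carries the expected DATA5 member header. A quick check along these lines (only /dev/sdu is named in this case; the other device names are placeholders):

# Verify each copied disk shows a DATA5 member header before starting CRS.
# All device names except /dev/sdu are placeholders.
for d in /dev/sdu /dev/sdv /dev/sdw /dev/sdx /dev/sdy
do
  echo "== $d =="
  kfed read $d | grep -E "kfdhdb.dskname|kfdhdb.grpname|kfdhdb.hdrsts"
done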
Once the data5 disk group mounted successfully, the database started up normally: the ASM disk group data was fully recovered after its partitions had been deleted from the LUN.
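For reference, the mount and the follow-up checks would typically look something like this (a sketch, not necessarily the exact commands used in this case; the database name orcl is a placeholder):

# Sketch of the mount and verification steps; 'orcl' is a placeholder DB name.
su - grid   -c "asmcmd mount DATA5"          # mount the recovered disk group
su - grid   -c "asmcmd lsdg DATA5"           # check state and free space of DATA5
su - oracle -c "srvctl start database -d orcl"   # bring the database back up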
We were lucky this time: the LUN's partitions had only been deleted and recreated with parted; no file system had been created on them and they had not been turned into new ASM disks, otherwise part of the data would have been lost. For partitions that have been partially overwritten, data has to be salvaged as far as possible by carving it out of low-level fragments. See similar cases:
Recovery after an ASM disk was added to a VG
Another case of recovering an ASM disk formatted as a file system
Recovering data files after file system corruption
A clean recovery of an ASM disk formatted as NTFS
Recovering Oracle data files that are 0 KB in size or missing
Another ASM disk formatted as an ext3 file system: failure recovery
Oracle ASM disk format recovery: formatted as an ext4 file system
Oracle ASM disk format recovery: formatted as an NTFS file system
A case of zero-data-loss recovery after oracleasm createdisk recreated an ASM disk