PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 16191.
Fri Jul 05 14:18:27 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:22:09 2013
While troubleshooting this problem I tried many approaches — what the instructor taught, what I found online — and none of them worked. Eventually I worked out a solution on my own. The core of the error is a mismatch between the password files on the primary and the standby. What made it stranger was that of the two RAC nodes, one could ship redo to the standby normally while the other simply could not, no matter what; even after resetting the password it still failed. Extremely frustrating.

Then it hit me: if the files have to be identical, why not take the password file from the node that was archiving successfully, copy it over to the other node, and copy it to the Data Guard standby as well? I did exactly that, and the problem was finally solved.
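The fix described above can be sketched as a few shell commands. This is only a sketch: the hostnames (`node2`, `phydb-host`) and the SIDs (`devdb1`, `devdb2`, `phydb`) are assumptions inferred from the instance and standby names visible in the log, and the same `$ORACLE_HOME` path is assumed on every host — adjust them to your environment:

```shell
# Run as the oracle user on the RAC node whose ARCn process archives
# to the standby successfully. The password file lives in
# $ORACLE_HOME/dbs and is named orapw<SID>.

# Copy the known-good password file to the second RAC node,
# renaming it for that node's SID (devdb2 is an assumed instance name).
scp "$ORACLE_HOME/dbs/orapwdevdb1" \
    "oracle@node2:$ORACLE_HOME/dbs/orapwdevdb2"

# Copy the same file to the Data Guard standby host as well,
# renaming it to match the standby SID (phydb, per the alert log).
scp "$ORACLE_HOME/dbs/orapwdevdb1" \
    "oracle@phydb-host:$ORACLE_HOME/dbs/orapwphydb"

# Bounce the archive destination so redo transport reconnects
# with the now-consistent password file.
sqlplus -S / as sysdba <<'EOF'
ALTER SYSTEM SET log_archive_dest_state_2=DEFER;
ALTER SYSTEM SET log_archive_dest_state_2=ENABLE;
EOF
```

ORA-01031 is "insufficient privileges": redo transport authenticates as SYS via the password file, so any byte-level drift between the copies on the primary nodes and the standby breaks the connection. Keeping a single copy replicated to every node is the simplest way to prevent that drift.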
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:23:30 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:24:30 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:25:30 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:26:31 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:27:32 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:28:32 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:29:32 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:30:33 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:31:33 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:32:35 2013
Error 1031 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'phydb'. Error is 1031.
Fri Jul 05 14:34:50 2013
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
LGWR: Standby redo logfile selected for thread 1 sequence 83 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 83 (LGWR switch)
  Current log# 1 seq# 83 mem# 0: +DATA/devdb/onlinelog/group_1.261.819703681
  Current log# 1 seq# 83 mem# 1: +FLASH/devdb/onlinelog/group_1.257.819703683
Fri Jul 05 14:35:01 2013
Archived Log entry 191 added for thread 1 sequence 82 ID 0x2b2dcaf4 dest 1:
Post SMON to start 1st pass IR
Fix write in gcs resources
Reconfiguration complete
Beginning instance recovery of 1 threads
 parallel recovery started with 2 processes
Started redo scan
Fri Jul 05 14:40:09 2013
Completed redo scan
 read 367 KB redo, 86 data blocks need recovery
Started redo application at
 Thread 1: logseq 82, block 7170
Recovery of Online Redo Log: Thread 1 Group 2 Seq 82 Reading mem 0
  Mem# 0: +DATA/devdb/onlinelog/group_2.262.819703685
  Mem# 1: +FLASH/devdb/onlinelog/group_2.258.819703687
Recovery of Online Redo Log: Thread 1 Group 1 Seq 83 Reading mem 0
  Mem# 0: +DATA/devdb/onlinelog/group_1.261.819703681
  Mem# 1: +FLASH/devdb/onlinelog/group_1.257.819703683
Completed redo application of 0.07MB
Completed instance recovery at
 Thread 1: logseq 83, block 378, scn 1811630
 80 data blocks read, 88 data blocks written, 367 redo k-bytes read
Fri Jul 05 14:40:17 2013
Setting recovery pair for thread 1: nab 378 seq 83
Fri Jul 05 14:40:19 2013
Thread 1 advanced to log sequence 84 (thread recovery)
Redo thread 1 internally disabled at seq 84 (SMON)
Fri Jul 05 14:40:24 2013
minact-scn: master continuing after IR
Fri Jul 05 14:40:27 2013
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
Thread 2 advanced to log sequence 82 (LGWR switch)
  Current log# 4 seq# 82 mem# 0: +DATA/devdb/onlinelog/group_4.267.819704185
  Current log# 4 seq# 82 mem# 1: +FLASH/devdb/onlinelog/group_4.260.819704189
Fri Jul 05 14:40:29 2013
Archived Log entry 193 added for thread 1 sequence 83 ID 0x2b2dcaf4 dest 1:
Fri Jul 05 14:40:30 2013
Archived Log entry 194 added for thread 2 sequence 81 ID 0x2b2dcaf4 dest 1:
Fri Jul 05 14:40:30 2013
ARC3: Archiving disabled thread 1 sequence 84
Archived Log entry 195 added for thread 1 sequence 84 ID 0x2b2dcaf4 dest 1:
Fri Jul 05 14:42:00 2013
ARC0: Archive log rejected (thread 1 sequence 83) at host 'phydb'
FAL[server, ARC0]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance devdb2 - Archival Error. Archiver continuing.
Fri Jul 05 14:45:03 2013
IPC Send timeout detected. Sender: ospid 9998 [oracle@node2.localdomain (PING)]
Receiver: inst 1 binc 429840056 ospid 4733
Fri Jul 05 14:47:27 2013
db_recovery_file_dest_size of 4347 MB is 8.86% used. This is a user-specified limit on the amount of space that will be used by this database for recovery-related files, and does not reflect the amount of space available in the underlying filesystem or ASM diskgroup.
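The final message is purely informational: the fast recovery area limit (db_recovery_file_dest_size) is only 8.86% used, so space is not the problem here. Current usage can be checked at any time with a query along these lines (a sketch; both views exist in 11g, run as SYSDBA):

```shell
sqlplus -S / as sysdba <<'EOF'
-- Overall FRA limit, space used, and reclaimable space (bytes)
SELECT name, space_limit, space_used, space_reclaimable
FROM   v$recovery_file_dest;

-- Per-file-type breakdown as a percentage of the limit
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$flash_recovery_area_usage;
EOF
```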