    Completely Removing Oracle 11gR2 GI

    Environment: RHEL 6.5 + Oracle 11.2.0.4 GI
    Requirement: while building a standby RAC, the GI software installation ran into trouble: the root script hung with no error at all (tracing /opt/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_jystdrac1.log showed no errors either, and the log stopped refreshing for hours). Since the root script can be re-run from 11.2 onward, I tried executing it repeatedly, but it never succeeded.
    Because this virtual environment was copied straight from an experiment I had built long ago, the environment itself is the prime suspect. I now want to reinstall GI from scratch, and that first requires completely removing the 11g GI. The procedure is based on the MOS note:
    How to completely remove 11.2 and 12.1 Grid Infrastructure, CRS and/or Oracle Restart - IBM: Linux on System z (Doc ID 1413787.1)
    Note: because GI was never fully installed in my test environment, some of the command output below may differ from the standard output.
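
    For reference, the hang was diagnosed simply by watching the root-script trace log stop updating. Something like the following (the log path is the one quoted above for node jystdrac1 in this environment) is enough to monitor it:

    tail -f /opt/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_jystdrac1.log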

    The main steps are as follows:

    1. Remove the CRS configuration

    To remove the CRS configuration, run the following as the root user: /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
    [root@jystdrac1 install]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
    Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
    PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
    PRCR-1068 : Failed to query resources
    Cannot communicate with crsd
    PRCR-1070 : Failed to check if resource ora.gsd is registered
    Cannot communicate with crsd
    PRCR-1070 : Failed to check if resource ora.ons is registered
    Cannot communicate with crsd
    CRS-4535: Cannot communicate with Cluster Ready Services
    CRS-4000: Command Stop failed, or completed with errors.
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'jystdrac1'
    CRS-2673: Attempting to stop 'ora.crf' on 'jystdrac1'
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'jystdrac1'
    CRS-2677: Stop of 'ora.mdnsd' on 'jystdrac1' succeeded
    CRS-2677: Stop of 'ora.crf' on 'jystdrac1' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'jystdrac1'
    CRS-2677: Stop of 'ora.gipcd' on 'jystdrac1' succeeded
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'jystdrac1'
    CRS-2677: Stop of 'ora.gpnpd' on 'jystdrac1' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'jystdrac1' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    Removing Trace File Analyzer
    error: package cvuqdisk is not installed
    Successfully deconfigured Oracle clusterware stack on this node
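
    Before moving on, it is worth confirming that no clusterware daemons survived the deconfig on this node. A minimal check, using the grid home path from this environment:

    # no ohasd/crsd/ocssd/evmd/gpnpd/gipcd processes should remain
    ps -ef | egrep 'ohasd|crsd|ocssd|evmd|gpnpd|gipcd' | grep -v grep
    # crsctl should report that it cannot contact CRS (the stack is down)
    /opt/app/11.2.0/grid/bin/crsctl check crs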
    

    On the last node of the GI cluster, add the -lastnode option to the rootcrs.pl command:
    /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -lastnode -verbose -force

    [root@jystdrac2 ~]# /opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -lastnode -verbose -force
    Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
    Adding Clusterware entries to upstart
    crsexcl failed to start
    Failed to start the Clusterware. Last 20 lines of the alert log follow: 
    
    ****Unable to retrieve Oracle Clusterware home.
    Start Oracle Clusterware stack and try again.
    ****Unable to retrieve Oracle Clusterware home.
    Start Oracle Clusterware stack and try again.
    ****Unable to retrieve Oracle Clusterware home.
    Start Oracle Clusterware stack and try again.
    ****Unable to retrieve Oracle Clusterware home.
    Start Oracle Clusterware stack and try again.
    ****Unable to retrieve Oracle Clusterware home.
    Start Oracle Clusterware stack and try again.
    Failure in execution (rc=-1, 0, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop res ora.registry.acfs -n jystdrac2 -f
    Failure in execution (rc=-1, 0, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl delete res ora.registry.acfs -f
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    sh: /opt/app/11.2.0/grid/bin/crsctl: No such file or directory
    Failure in execution (rc=-1, 32512, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl delete res ora.drivers.acfs -init -f
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Failure in execution (rc=-1, 32512, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop crs -f
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    ################################################################
    # You must kill processes or reboot the system to properly #
    # cleanup the processes started by Oracle clusterware          #
    ################################################################
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Can't exec "/opt/app/11.2.0/grid/bin/clsecho": No such file or directory at /opt/app/11.2.0/grid/lib/acfslib.pm line 1464.
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
    error: package cvuqdisk is not installed
    Successfully deconfigured Oracle clusterware stack on this node
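
    As the output warns, any processes started by Oracle Clusterware that survive the deconfig must be killed manually or cleared by a reboot. A sketch for spotting (and, only if strictly necessary, killing) the leftovers; the process name patterns are typical for 11.2 and may need adjusting:

    # list surviving clusterware/ASM background processes, if any
    ps -ef | egrep 'ohasd|oraagent|orarootagent|ocssd|cssdagent|cssdmonitor|evmd|asm_' | grep -v grep
    # only if something is still running and a reboot is not planned yet:
    # kill -9 <pid>   # <pid> taken from the ps output above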
    

    2. Remove the Oracle Restart configuration

    As the root user, run: /opt/app/11.2.0/grid/crs/install/roothas.pl -deconfig -verbose -force
    [root@jystdrac1 app]# /opt/app/11.2.0/grid/crs/install/roothas.pl -deconfig -verbose -force
    Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Stop failed, or completed with errors.
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Delete failed, or completed with errors.
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Stop failed, or completed with errors.
    You must kill ohasd processes or reboot the system to properly 
    cleanup the processes started by Oracle clusterware
    ACFS-9313: No ADVM/ACFS installation detected.
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
    Successfully deconfigured Oracle Restart stack
    

    On the second node:

    [root@jystdrac2 ~]# /opt/app/11.2.0/grid/crs/install/roothas.pl -deconfig -verbose -force
    Using configuration parameter file: /opt/app/11.2.0/grid/crs/install/crsconfig_params
    Failure in execution (rc=-1, 256, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop resource ora.cssd -f
    Failure in execution (rc=-1, 256, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl delete resource ora.cssd -f
    Failure in execution (rc=-1, 256, No such file or directory) for command /opt/app/11.2.0/grid/bin/crsctl stop has -f
    You must kill ohasd processes or reboot the system to properly 
    cleanup the processes started by Oracle clusterware
    Can't exec "/opt/app/11.2.0/grid/bin/clsecho": No such file or directory at /opt/app/11.2.0/grid/lib/acfslib.pm line 1464.
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
    Successfully deconfigured Oracle Restart stack
    

    3. Modify /etc/inittab

    Edit /etc/inittab and remove the clusterware entry if it is still present (a removal sketch follows the commands below):
    tail /etc/inittab
    #h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
    init q
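
    If the entry is still present (not commented out), it can be deleted directly and init told to reread its configuration. A sketch, assuming the stock 11.2 entry shown above:

    # remove the respawn entry added by the GI root script, keeping a backup copy
    sed -i.bak '/init.ohasd/d' /etc/inittab
    # ask init to re-read /etc/inittab
    init q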
    

    In my test environment this file did not contain that entry yet, so nothing needed to be done here; continue to the next step.

    4. Clean up files

    Following the MOS note, clean up all related files that still exist:
    If the Oracle Grid root.sh script has been run on any of the nodes previously, then the
    Linux inittab file should be modified to remove the lines that were added.
    Deconfig should remove this line but it is best to verify.
    
    tail /etc/inittab
    #h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
    init q
    
    Clean up files
    
    The following commands are used to remove all Oracle Grid and database
    software. You can also use the Oracle de-installer to remove the necessary software
    components.
    
    #
    #WARNING - You should review this script before running it, as it will remove
    #everything for all Oracle systems on the Linux system where it is run.
    #
    rm -f /etc/init.d/init.ohasd
    #
    rm -f /etc/inittab.crs
    rm -rf /etc/oracle
    #
    # Oracle Bug Note:429214.1
    #
    rm -f /usr/tmp/.oracle/*
    rm -f /tmp/.oracle/*
    rm -f /var/tmp/.oracle/*
    ###
    
    WARNING: BE VERY CAREFUL - THIS WILL REMOVE THE ORATAB ENTRIES FOR ALL DATABASES RUNNING ON THIS SERVER AND ALSO THE CENTRAL INVENTORY FOR ANY ORACLE HOMES/GRID HOMES WHICH ARE CURRENTLY INSTALLED ON THIS SERVER.
    
    rm -f /etc/oratab
    rm -rf /var/opt/oracle
    
    
    #
    # Remove Oracle software directories *these may change based on your install en
    #  You need to modify the following to map to your install environment.
    
    rm -rf </u01/base/*>        # this is $ORACLE_BASE
    rm -rf </u01/oraInventory>  # this is the central inventory location pointed to by oraInst.loc
    rm -rf </u01/grid/*>        # this is the Grid Home
    rm -rf </u01/oracle>        # this is the DB Home
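
    Adapted to the paths used in this environment, the cleanup might look like the sketch below. Only the grid home (/opt/app/11.2.0/grid) is taken from this installation; the oraInventory and $ORACLE_BASE/DB home locations are assumptions and must be verified (for example against /etc/oraInst.loc and /etc/oratab) before anything is deleted.

    # A sketch adapted to this environment -- run as root on every node,
    # and only after the deconfig steps above have completed.
    rm -f  /etc/init.d/init.ohasd
    rm -f  /etc/inittab.crs
    rm -rf /etc/oracle
    rm -rf /usr/tmp/.oracle /tmp/.oracle /var/tmp/.oracle
    # grid home used in this environment:
    rm -rf /opt/app/11.2.0/grid
    # central inventory: the path below is an assumption -- confirm it against
    # /etc/oraInst.loc before deleting
    rm -rf /opt/app/oraInventory
    # only if the database software and oratab should also be wiped:
    # rm -f  /etc/oratab
    # rm -rf /opt/app/oracle    # assumed $ORACLE_BASE / DB home location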
    

    5. Prepare for reinstallation

    Make sure the directory ownership and permissions are correct:
    mkdir -p /opt/app/ && chown -R oracle:oinstall /opt/app/ && chmod 775 /opt/app && ls -lh /opt
    

    Install the cvuqdisk rpm package:

    rpm -ivh /opt/app/media/grid/rpm/cvuqdisk-1.0.9-1.rpm
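
    A quick check that the package really is in place on each node:

    # confirm that cvuqdisk is installed and at the expected version
    rpm -q cvuqdisk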
    

    Wipe the OCR/voting disk information from the ASM candidate disks (generic form, where <asm_disk> is a placeholder):
    dd if=/dev/zero of=/dev/<asm_disk> bs=1M count=100

    dd if=/dev/zero of=/dev/asm-diskb bs=1M count=100
    dd if=/dev/zero of=/dev/asm-diskc bs=1M count=100
    dd if=/dev/zero of=/dev/asm-diskd bs=1M count=100
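
    The same zeroing can be written as a short loop over the candidate disks. The device names are the ones used in this environment and must be adjusted to match yours; this destroys any ASM header on the disks:

    for d in /dev/asm-diskb /dev/asm-diskc /dev/asm-diskd; do
        dd if=/dev/zero of=$d bs=1M count=100
    done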
    

    Finally, reboot all nodes; with every trace of the old configuration cleaned out, the environment is ready for a fresh GI installation.
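
    A few sanity checks after the reboot, before starting the fresh GI install (all of these should come back empty or report that the files are gone):

    # no clusterware daemons should be running
    ps -ef | egrep 'ohasd|crsd|ocssd' | grep -v grep
    # the GI-related files removed in step 4 should no longer exist
    ls -ld /etc/init.d/init.ohasd /etc/oracle 2>/dev/null
    # the socket directories should be empty or absent
    ls -l /tmp/.oracle /var/tmp/.oracle 2>/dev/null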
