Oracle 11.2 RAC: Deleting a Node


    Hardware and software environment: the same as in the previous article.

    Before any significant operation on CRS-level data structures, always back up the OCR first:

    [root@vastdata4 ~]# ocrconfig -manualbackup
    vastdata4     2019/02/25 00:04:20     /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
    vastdata4     2019/02/25 00:00:08     /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000008.ocr
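    The manual backups are listed newest first, so when scripting this pre-change check, the most recent backup path can be pulled out with standard text tools. A minimal sketch, with a here-doc standing in for the live `ocrconfig -manualbackup` output (which requires root on a running cluster):

```shell
# Sketch: extract the newest manual OCR backup path from ocrconfig-style
# output (assumed newest-first, as in the log above). The here-doc
# substitutes for the real command's output.
backup_listing=$(cat <<'EOF'
vastdata4     2019/02/25 00:04:20     /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
vastdata4     2019/02/25 00:00:08     /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000008.ocr
EOF
)

# First line is the newest backup; its last field is the file path
latest_backup=$(printf '%s\n' "$backup_listing" | head -n 1 | awk '{print $NF}')
echo "latest OCR backup: $latest_backup"
```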

    Removing a node from a RAC cluster breaks down into three steps:

    1. Delete the instance;

    2. Remove the DB software;

    3. Remove the GI software.


    1. Deleting the Instance
    1.1 Shut down the target instance planned for removal

    [root@vastdata4 ~]# srvctl status database -d PROD -help
    Displays the current state of the database.
    Usage: srvctl status database -d <db_unique_name> [-f] [-v]
        -d <db_unique_name>      Unique name for the database
        -f                       Include disabled applications
        -v                       Verbose output
        -h                       Print usage
      
    [root@vastdata4 ~]# srvctl status database -d PROD -f
    Instance PROD1 is running on node vastdata3
    Instance PROD2 is running on node vastdata4
      
    [root@vastdata4 ~]# srvctl stop instance -d PROD -n vastdata3
      
    [root@vastdata4 ~]# crsctl stat res -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
    --------------------------------------------------------------------------------
    Cluster Resources
    -------------------------------------------------------------------------------- 
    ora.prod.db
          1        OFFLINE OFFLINE                               Instance Shutdown   
          2        ONLINE  ONLINE       vastdata4                Open    
                 
    [root@vastdata4 ~]# srvctl status database -d PROD -f
    Instance PROD1 is not running on node vastdata3
    Instance PROD2 is running on node vastdata4
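    When scripting this step, the `srvctl status database -d PROD -f` output can be parsed to confirm the target instance really stopped before moving on. A hedged sketch; the here-doc mirrors the output above, and `PROD1`/`vastdata3` are this walkthrough's names:

```shell
# Sketch: confirm an instance is down by parsing srvctl status output.
# The here-doc substitutes for the live command.
status_output=$(cat <<'EOF'
Instance PROD1 is not running on node vastdata3
Instance PROD2 is running on node vastdata4
EOF
)

instance_running() {
    # Matches only an exact "is running" line for the instance;
    # an "is not running" line does not match this pattern
    printf '%s\n' "$status_output" | grep -q "Instance $1 is running"
}

if instance_running PROD1; then
    echo "PROD1 still running - do not proceed"
else
    echo "PROD1 is down - safe to delete the instance"
fi
```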

    1.2 Delete the instance

    [oracle@vastdata4 ~]$ dbca -silent -deleteInstance -nodeList vastdata3.us.oracle.com -gdbName PROD -instanceName PROD1 -sysDBAUserName sys -sysDBAPassword oracle
    Deleting instance
    1% complete
    2% complete
    6% complete
    13% complete
    20% complete
    26% complete
    33% complete
    40% complete
    46% complete
    53% complete
    60% complete
    66% complete
    Completing instance management.
    100% complete
    Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/PROD.log" for further details.

    1.3 Check again

    [root@vastdata4 ~]# srvctl status database -d PROD -f
    Instance PROD2 is running on node vastdata4

    2. Removing the DB Software
    2.1 Update the inventory

    [oracle@vastdata3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata3" -local
    Starting Oracle Universal Installer...
      
    Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.

    2.2 Deinstall the DB software

    [oracle@vastdata3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
    Checking for required files and bootstrapping ...
    Please wait ...
    Location of logs /u01/app/oraInventory/logs/
      
    ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
    ######################### CHECK OPERATION START #########################
    ## [START] Install check configuration ##
      
    Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/db_1
    Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
    Oracle Base selected for deinstall is: /u01/app/oracle
    Checking for existence of central inventory location /u01/app/oraInventory
    Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
    The following nodes are part of this cluster: vastdata3
    Checking for sufficient temp space availability on node(s) : 'vastdata3'
    ## [END] Install check configuration ##
      
    Network Configuration check config START
      
    Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-02-25_12-27-57-AM.log
      
    Network Configuration check config END
      
    Database Check Configuration START
      
    Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2019-02-25_12-27-59-AM.log
      
    Database Check Configuration END
      
    Enterprise Manager Configuration Assistant START
      
    EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2019-02-25_12-28-02-AM.log 
      
    Enterprise Manager Configuration Assistant END
    Oracle Configuration Manager check START
    OCM check log file location : /u01/app/oraInventory/logs//ocm_check2332.log
    Oracle Configuration Manager check END
      
    ######################### CHECK OPERATION END #########################
    ####################### CHECK OPERATION SUMMARY #######################
    Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
    The cluster node(s) on which the Oracle home deinstallation will be performed are:vastdata3
    Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'vastdata3', and the global configuration will be removed.
    Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/db_1
    Inventory Location where the Oracle home registered is: /u01/app/oraInventory
    The option -local will not modify any database configuration for this Oracle home.
      
    No Enterprise Manager configuration to be updated for any database(s)
    No Enterprise Manager ASM targets to update
    No Enterprise Manager listener targets to migrate
    Checking the config status for CCR
    Oracle Home exists with CCR directory, but CCR is not configured
    CCR check is finished
    Do you want to continue (y - yes, n - no)? [n]: y
    A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.out'
    Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-02-25_12-27-50-AM.err'
    ######################## CLEAN OPERATION START ########################
      
    Enterprise Manager Configuration Assistant START
      
    EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2019-02-25_12-28-02-AM.log 
      
    Updating Enterprise Manager ASM targets (if any)
    Updating Enterprise Manager listener targets (if any)
    Enterprise Manager Configuration Assistant END
    Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2019-02-25_12-29-25-AM.log
      
    Network Configuration clean config START
      
    Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-02-25_12-29-25-AM.log
      
    De-configuring Local Net Service Names configuration file...
    Local Net Service Names configuration file de-configured successfully.
      
    De-configuring backup files...
    Backup files de-configured successfully.
      
    The network configuration has been cleaned up successfully.
      
    Network Configuration clean config END
      
    Oracle Configuration Manager clean START
    OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean2332.log
    Oracle Configuration Manager clean END
    Setting the force flag to false
    Setting the force flag to cleanup the Oracle Base
    Oracle Universal Installer clean START
      
    Detach Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node : Done
      
    Delete directory '/u01/app/oracle/product/11.2.0/db_1' on the local node : Done
      
    Failed to delete the directory '/u01/app/oracle'. The directory is in use.
    Delete directory '/u01/app/oracle' on the local node : Failed <<<<
      
    Oracle Universal Installer cleanup completed with errors.
      
    Oracle Universal Installer clean END
      
    ## [START] Oracle install clean ##
      
    Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-27-29AM' on node 'vastdata3'
      
    ## [END] Oracle install clean ##
    ######################### CLEAN OPERATION END #########################
    ####################### CLEAN OPERATION SUMMARY #######################
    Cleaning the config for CCR
    As CCR is not configured, so skipping the cleaning of CCR configuration
    CCR clean is finished
    Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/db_1' from the central inventory on the local node.
    Successfully deleted directory '/u01/app/oracle/product/11.2.0/db_1' on the local node.
    Failed to delete directory '/u01/app/oracle' on the local node.
    Oracle Universal Installer cleanup completed with errors.
      
    Oracle deinstall tool successfully cleaned up temporary directories.
    #######################################################################
    ############# ORACLE DEINSTALL & DECONFIG TOOL END #############

    2.3 Update the inventory on the remaining node

    [oracle@vastdata4 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata4" -local
    Starting Oracle Universal Installer...
      
    Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.

    2.4 If the deinstall leaves files behind, remove them manually

    [oracle@vastdata3 ~]$ rm -rf $ORACLE_HOME/*

    3. Removing the GI Software
    3.1 Check the status of the node to be removed

    [grid@vastdata4 ~]$ olsnodes -s -n -t
    vastdata3      1      Active      Unpinned
    vastdata4      2      Active      Unpinned
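    The fourth column of `olsnodes -s -n -t` carries the pin state, so the unpin decision in the next step can be automated. A minimal sketch against the sample output above (the here-doc replaces the live command):

```shell
# Sketch: list any pinned nodes from `olsnodes -s -n -t` style output
# (columns: name, number, state, pin state).
olsnodes_output=$(cat <<'EOF'
vastdata3      1      Active      Unpinned
vastdata4      2      Active      Unpinned
EOF
)

pinned_nodes=$(printf '%s\n' "$olsnodes_output" | awk '$4 == "Pinned" {print $1}')

if [ -z "$pinned_nodes" ]; then
    echo "no pinned nodes - crsctl unpin css is not required"
else
    # Each pinned node would need: crsctl unpin css -n <node>
    echo "pinned nodes needing unpin: $pinned_nodes"
fi
```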

    3.2 If the node is pinned, unpin it

    [root@vastdata4 ~]# crsctl unpin css -n vastdata3
    CRS-4667: Node vastdata3 successfully unpinned.

    3.3 Stop the HAS stack on the node being removed

    [root@vastdata3 ~]# export ORACLE_HOME=/u01/app/11.2.0/grid
    [root@vastdata3 ~]# cd $ORACLE_HOME/crs/install
    [root@vastdata3 install]# perl rootcrs.pl -deconfig -force
    Using configuration parameter file: ./crsconfig_params
    Network exists: 1/192.168.0.0/255.255.255.0/eth0, type static
    VIP exists: /vastdata3-vip/192.168.0.22/192.168.0.0/255.255.255.0/eth0, hosting node vastdata3
    VIP exists: /vastdata4-vip/192.168.0.23/192.168.0.0/255.255.255.0/eth0, hosting node vastdata4
    GSD exists
    ONS exists: Local port 6100, remote port 6200, EM port 2016
    CRS-2673: Attempting to stop 'ora.registry.acfs' on 'vastdata3'
    CRS-2677: Stop of 'ora.registry.acfs' on 'vastdata3' succeeded
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.crsd' on 'vastdata3'
    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.DATA.dg' on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.FRA.dg' on 'vastdata3'
    CRS-2677: Stop of 'ora.FRA.dg' on 'vastdata3' succeeded
    CRS-2677: Stop of 'ora.DATA.dg' on 'vastdata3' succeeded
    CRS-2673: Attempting to stop 'ora.asm' on 'vastdata3'
    CRS-2677: Stop of 'ora.asm' on 'vastdata3' succeeded
    CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'vastdata3' has completed
    CRS-2677: Stop of 'ora.crsd' on 'vastdata3' succeeded
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.crf' on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.ctssd' on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.evmd' on 'vastdata3'
    CRS-2673: Attempting to stop 'ora.asm' on 'vastdata3'
    CRS-2677: Stop of 'ora.mdnsd' on 'vastdata3' succeeded
    CRS-2677: Stop of 'ora.crf' on 'vastdata3' succeeded
    CRS-2677: Stop of 'ora.evmd' on 'vastdata3' succeeded
    CRS-2677: Stop of 'ora.asm' on 'vastdata3' succeeded
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'vastdata3'
    CRS-2677: Stop of 'ora.ctssd' on 'vastdata3' succeeded
    CRS-2677: Stop of 'ora.drivers.acfs' on 'vastdata3' succeeded
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'vastdata3' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'vastdata3'
    CRS-2677: Stop of 'ora.cssd' on 'vastdata3' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'vastdata3'
    CRS-2677: Stop of 'ora.gipcd' on 'vastdata3' succeeded
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'vastdata3'
    CRS-2677: Stop of 'ora.gpnpd' on 'vastdata3' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'vastdata3' has completed
    CRS-4133: Oracle High Availability Services has been stopped.
    Removing Trace File Analyzer
    Successfully deconfigured Oracle clusterware stack on this node

    3.4 Check cluster resource status

    [root@vastdata4 ~]# crsctl stat res -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.DATA.dg
                   ONLINE  ONLINE       vastdata4                                    
    ora.FRA.dg
                   ONLINE  ONLINE       vastdata4                                    
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       vastdata4                                    
    ora.asm
                   ONLINE  ONLINE       vastdata4                Started             
    ora.gsd
                   OFFLINE OFFLINE      vastdata4                                    
    ora.net1.network
                   ONLINE  ONLINE       vastdata4                                    
    ora.ons
                   ONLINE  ONLINE       vastdata4                                    
    ora.registry.acfs
                   ONLINE  ONLINE       vastdata4                                    
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       vastdata4                                    
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       vastdata4                                    
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       vastdata4                                    
    ora.cvu
          1        ONLINE  ONLINE       vastdata4                                    
    ora.oc4j
          1        ONLINE  ONLINE       vastdata4                                    
    ora.prod.db
          2        ONLINE  ONLINE       vastdata4                Open                
    ora.scan1.vip
          1        ONLINE  ONLINE       vastdata4                                    
    ora.scan2.vip
          1        ONLINE  ONLINE       vastdata4                                    
    ora.scan3.vip
          1        ONLINE  ONLINE       vastdata4                                    
    ora.vastdata4.vip
          1        ONLINE  ONLINE       vastdata4

    3.5 Check the status of all cluster nodes

    [root@vastdata4 ~]# olsnodes -s -n -t
    vastdata3     1      Inactive    Unpinned
    vastdata4     2      Active      Unpinned
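    Before updating the inventory on the removed node, it is worth gating on the node actually showing `Inactive` in `olsnodes`. A sketch of that check (node name is this walkthrough's; the here-doc replaces the live command):

```shell
# Sketch: verify a node reports Inactive in `olsnodes -s -n -t` output
# before continuing with the GI deinstall.
target=vastdata3
olsnodes_output=$(cat <<'EOF'
vastdata3     1      Inactive    Unpinned
vastdata4     2      Active      Unpinned
EOF
)

state=$(printf '%s\n' "$olsnodes_output" | awk -v n="$target" '$1 == n {print $3}')

if [ "$state" = "Inactive" ]; then
    echo "$target is Inactive - proceed with the GI deinstall"
else
    echo "$target state is '$state' - stop and investigate" >&2
fi
```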

    3.6 Update the inventory

    [grid@vastdata3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata3" CRS=TRUE -silent -local
    Starting Oracle Universal Installer...
      
    Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.

    3.7 Deinstall the GI software

    [grid@vastdata3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
    Checking for required files and bootstrapping ...
    Please wait ...
    Location of logs /tmp/deinstall2019-02-25_00-43-06AM/logs/
      
    ############ ORACLE DEINSTALL & DECONFIG TOOL START ############
      
    ######################### CHECK OPERATION START #########################
    ## [START] Install check configuration ##
      
    Checking for existence of the Oracle home location /u01/app/11.2.0/grid
    Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
    Oracle Base selected for deinstall is: /u01/app/grid
    Checking for existence of central inventory location /u01/app/oraInventory
    Checking for existence of the Oracle Grid Infrastructure home 
    The following nodes are part of this cluster: vastdata3
    Checking for sufficient temp space availability on node(s) : 'vastdata3'
      
    ## [END] Install check configuration ##
      
    Traces log file: /tmp/deinstall2019-02-25_00-43-06AM/logs//crsdc.log
    Enter an address or the name of the virtual IP used on node "vastdata3"[vastdata3-vip]
     > 
      
    The following information can be collected by running "/sbin/ifconfig -a" on node "vastdata3"
    Enter the IP netmask of Virtual IP "192.168.0.22" on node "vastdata3"[255.255.255.0]
     > 
      
    Enter the network interface name on which the virtual IP address "192.168.0.22" is active
     > 
      
    Enter an address or the name of the virtual IP[]
     > 
      
    Network Configuration check config START
      
    Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_check2019-02-25_12-44-14-AM.log
      
    Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
      
    Network Configuration check config END
      
    Asm Check Configuration START
      
    ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_check2019-02-25_12-44-19-AM.log
      
    ######################### CHECK OPERATION END #########################
      
    ####################### CHECK OPERATION SUMMARY #######################
    Oracle Grid Infrastructure Home is: 
    The cluster node(s) on which the Oracle home deinstallation will be performed are:vastdata3
    Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'vastdata3', and the global configuration will be removed.
    Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
    Inventory Location where the Oracle home registered is: /u01/app/oraInventory
    Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
    Option -local will not modify any ASM configuration.
    Do you want to continue (y - yes, n - no)? [n]: y
    A log of this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.out'
    Any error messages from this session will be written to: '/tmp/deinstall2019-02-25_00-43-06AM/logs/deinstall_deconfig2019-02-25_12-43-22-AM.err'
      
    ######################## CLEAN OPERATION START ########################
    ASM de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/asmcadc_clean2019-02-25_12-44-27-AM.log
    ASM Clean Configuration END
      
    Network Configuration clean config START
      
    Network de-configuration trace file location: /tmp/deinstall2019-02-25_00-43-06AM/logs/netdc_clean2019-02-25_12-44-27-AM.log
      
    De-configuring RAC listener(s): LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
      
    De-configuring listener: LISTENER
        Stopping listener on node "vastdata3": LISTENER
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
      
    De-configuring listener: LISTENER_SCAN3
        Stopping listener on node "vastdata3": LISTENER_SCAN3
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
      
    De-configuring listener: LISTENER_SCAN2
        Stopping listener on node "vastdata3": LISTENER_SCAN2
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
      
    De-configuring listener: LISTENER_SCAN1
        Stopping listener on node "vastdata3": LISTENER_SCAN1
        Warning: Failed to stop listener. Listener may not be running.
    Listener de-configured successfully.
      
    De-configuring Naming Methods configuration file...
    Naming Methods configuration file de-configured successfully.
      
    De-configuring backup files...
    Backup files de-configured successfully.
      
    The network configuration has been cleaned up successfully.
      
    Network Configuration clean config END
    ---------------------------------------->
      
    The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.
      
    Run the following command as the root user or the administrator on node "vastdata3".
      
    /tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
      
    Press Enter after you finish running the above commands
    <---------------------------------------- 
    [root@vastdata3 ~]# /tmp/deinstall2019-02-25_00-43-06AM/perl/bin/perl -I/tmp/deinstall2019-02-25_00-43-06AM/perl/lib -I/tmp/deinstall2019-02-25_00-43-06AM/crs/install /tmp/deinstall2019-02-25_00-43-06AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
    Using configuration parameter file: /tmp/deinstall2019-02-25_00-43-06AM/response/deinstall_Ora11g_gridinfrahome1.rsp
    ****Unable to retrieve Oracle Clusterware home.
    Start Oracle Clusterware stack and try again.
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Stop failed, or completed with errors.
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/ocr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Modify failed, or completed with errors.
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Delete failed, or completed with errors.
    CRS-4047: No Oracle Clusterware components configured.
    CRS-4000: Command Stop failed, or completed with errors.
    #################################################################
    # You must kill processes or reboot the system to properly      #
    # cleanup the processes started by Oracle clusterware           #
    #################################################################
    ACFS-9313: No ADVM/ACFS installation detected.
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Either /etc/oracle/olr.loc does not exist or is not readable
    Make sure the file exists and it has read and execute access
    Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
    error: package cvuqdisk is not installed
    Successfully deconfigured Oracle clusterware stack on this node
    Remove the directory: /tmp/deinstall2019-02-25_00-43-06AM on node: 
    Setting the force flag to false
    Setting the force flag to cleanup the Oracle Base
    Oracle Universal Installer clean START
      
    Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done
      
    Delete directory '/u01/app/11.2.0/grid' on the local node : Done
      
    Delete directory '/u01/app/oraInventory' on the local node : Done
      
    Delete directory '/u01/app/grid' on the local node : Done
      
    Oracle Universal Installer cleanup was successful.
      
    Oracle Universal Installer clean END
      
    ## [START] Oracle install clean ##
      
    Clean install operation removing temporary directory '/tmp/deinstall2019-02-25_00-43-06AM' on node 'vastdata3'
      
    ## [END] Oracle install clean ##
      
    ######################### CLEAN OPERATION END #########################
      
    ####################### CLEAN OPERATION SUMMARY #######################
    Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
    Oracle Clusterware is stopped and successfully de-configured on node "vastdata3"
    Oracle Clusterware is stopped and de-configured successfully.
    Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
    Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
    Successfully deleted directory '/u01/app/oraInventory' on the local node.
    Successfully deleted directory '/u01/app/grid' on the local node.
    Oracle Universal Installer cleanup was successful.
      
    Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'vastdata3' at the end of the session.
      
    Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'vastdata3' at the end of the session.
    Run 'rm -rf /etc/oratab' as root on node(s) 'vastdata3' at the end of the session.
    Oracle deinstall tool successfully cleaned up temporary directories.
    #######################################################################
      
    ############# ORACLE DEINSTALL & DECONFIG TOOL END #############

    3.8 Update the inventory

    [grid@vastdata4 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=vastdata4" CRS=TRUE -silent
    Starting Oracle Universal Installer...
      
    Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    The inventory is located at /u01/app/oraInventory
    'UpdateNodeList' was successful.

    3.9 Check the status of all cluster nodes

    [grid@vastdata4 ~]$ olsnodes -s
    vastdata3 Inactive
    vastdata4 Active

    3.10 If the deinstall leaves residue, clean it up manually

    Run these only on the node being removed. The bracketed grep patterns (e.g. '[o]ra') keep grep from matching its own process, and xargs -r skips kill when nothing matched:

    ps -ef | grep '[o]ra' | awk '{print $2}' | xargs -r kill -9
    ps -ef | grep '[g]rid' | awk '{print $2}' | xargs -r kill -9
    ps -ef | grep '[a]sm' | awk '{print $2}' | xargs -r kill -9
    ps -ef | grep '[s]torage' | awk '{print $2}' | xargs -r kill -9
    ps -ef | grep '[o]hasd' | awk '{print $2}' | xargs -r kill -9
    ps -ef | grep '[g]rid'
    ps -ef | grep '[o]ra'
    ps -ef | grep '[a]sm'
      
    export ORACLE_BASE=/u01/app/grid
    export ORACLE_HOME=/u01/app/11.2.0/grid
    cd $ORACLE_HOME
    rm -rf *
    cd $ORACLE_BASE
    rm -rf *
      
    rm -rf /etc/rc5.d/S96ohasd
    rm -rf /etc/rc3.d/S96ohasd
    rm -rf /etc/rc.d/init.d/ohasd
    rm -rf /etc/oracle
    rm -rf /etc/ora*
    rm -rf /etc/oratab
    rm -rf /etc/oraInst.loc
    rm -rf /opt/ORCLfmap/
    rm -rf /taryartar/12c/oraInventory
    rm -rf /usr/local/bin/dbhome
    rm -rf /usr/local/bin/oraenv
    rm -rf /usr/local/bin/coraenv
    rm -rf /tmp/*
    rm -rf /var/tmp/.oracle
    rm -rf /var/tmp
    rm -rf /home/grid/*
    rm -rf /home/oracle/*
    rm -rf /etc/init/oracle*
    rm -rf /etc/init.d/ora
    rm -rf /tmp/.[!.]*
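    Because the list above is a series of unconditional `rm -rf` calls, it can help to wrap it in a dry-run-capable loop and review what would be removed before running for real. A sketch, with a few paths taken from the list above (`DRY_RUN` is an assumed convention for this sketch, not part of any Oracle tool):

```shell
# Sketch: dry-run wrapper around the manual cleanup. With DRY_RUN=1 (the
# default here) it only prints the paths; set DRY_RUN=0 to actually delete.
DRY_RUN=${DRY_RUN:-1}

cleanup_paths="/etc/oracle /etc/oraInst.loc /opt/ORCLfmap /var/tmp/.oracle"

for p in $cleanup_paths; do
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would remove: $p"
    else
        rm -rf "$p"
    fi
done
```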

    3.11 Delete the node from the cluster

    [root@vastdata4 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n vastdata3
    CRS-4661: Node vastdata3 successfully deleted.

    3.12 Check the status of all cluster nodes

    [root@vastdata4 ~]# olsnodes -s
    vastdata4 Active
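    The same check can be scripted: after `crsctl delete node`, the removed node should no longer appear in `olsnodes -s` at all. A sketch (the here-doc stands in for the live command on the surviving node):

```shell
# Sketch: assert a node is gone from `olsnodes -s` output after deletion.
removed=vastdata3
olsnodes_output=$(cat <<'EOF'
vastdata4 Active
EOF
)

if printf '%s\n' "$olsnodes_output" | awk '{print $1}' | grep -qx "$removed"; then
    echo "$removed is still registered - deletion failed" >&2
else
    echo "$removed no longer appears in olsnodes - deletion succeeded"
fi
```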

    3.13 Verify that the node removal succeeded

    This step matters a great deal: it determines whether a node can later be added back into the cluster smoothly.

    [grid@vastdata4 ~]$ cluvfy stage -post nodedel -n vastdata3 -verbose
      
    Performing post-checks for node removal 
      
    Checking CRS integrity...
      
    Clusterware version consistency passed
    The Oracle Clusterware is healthy on node "vastdata4"
      
    CRS integrity check passed
    Result: 
    Node removal check passed
      
    Post-check for node removal was successful.

    3.14 Back up the OCR

    [root@vastdata4 ~]# ocrconfig -manualbackup
    vastdata4     2019/02/25 00:53:53     /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_005353.ocr
    vastdata4     2019/02/25 00:04:20     /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000420.ocr
    vastdata4     2019/02/25 00:00:08     /u01/app/11.2.0/grid/cdata/vastdat-cluster/backup_20190225_000008.ocr

    At this point, removal of the node from the Oracle RAC cluster is complete.

    If you repost this article, please credit the source.

    Original article: https://www.cnblogs.com/klyyk0950/p/12503577.html