• 11gR2 Clusterware and Grid Home


    Copyright (c) 2020, Oracle. All rights reserved. Oracle Confidential.
     
     

    In this Document

      Purpose
      Scope
      Details
      Key Features of 11gR2 Clusterware
      Clusterware Startup Sequence
      Important Log Locations
      Checking Clusterware Resource Status
      Managing Clusterware Resources
      OCRCONFIG Options
      OLSNODES Options
      Cluster Verification Options
      References

    Applies to:

    Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.1 [Release 11.2]
    Oracle Database Exadata Cloud Machine - Version N/A and later
    Oracle Cloud Infrastructure - Database Service - Version N/A and later
    Oracle Database Cloud Exadata Service - Version N/A and later
    Oracle Database Exadata Express Cloud Service - Version N/A and later
    Information in this document applies to any platform.
    Currency checked on Oct 30 2014

    Purpose

    Compared with earlier releases, 11gR2 Clusterware has changed considerably. For more information on earlier releases, refer to Note: 259301.1 "CRS and 10g Real Application Clusters". This document gives an overview of 11.2 Clusterware and how it differs from earlier releases.

    Scope


    This document is intended for RAC database administrators and Oracle Support engineers.

    Details

    Key Features of 11gR2 Clusterware

    • 11gR2 Clusterware must be installed before installing a Real Application Clusters database running 11gR2.
    • The GRID home contains both Oracle Clusterware and ASM; ASM can no longer be placed in a separate home.
    • 11gR2 Clusterware can be installed in "Standalone" mode (to support ASM) or in "Oracle Restart" mode (single node). In those cases the "clusterware" is a subset of the full clusterware.
    • 11gR2 Clusterware can run on its own or on top of third-party clusterware. For the support matrix, refer to Note: 184875.1 "How To Check The Certification Matrix for Real Application Clusters".
    • The GRID home and the RAC/DB home must be installed in different locations.
    • The 11gR2 Clusterware OCR and voting files must be shared; they can be placed either in ASM or on a cluster file system.
    • The OCR is backed up automatically every 4 hours to the <GRID_HOME>/cdata/<clustername>/ directory and can be restored with the ocrconfig tool.
    • The voting file is backed up into the OCR whenever the configuration changes, and can be restored with the crsctl tool.
    • 11gR2 Clusterware requires at least one private network (for inter-node communication) and at least one public network (for communication outside the cluster). Several virtual IPs must be registered in DNS, including the node VIPs (one per node) and the SCAN VIPs (three of them). This can be done manually by the network administrator, or by using "GNS" (Grid Naming Service). (Note that GNS requires a VIP of its own.)
    • Clients access the database through the SCAN (Single Client Access Name). For more information on SCAN, refer to Note: 887522.1
    • At the end of the cluster installation, root.sh starts the clusterware. For help with root.sh issues, refer to Note: 1053970.1
    • Only one set of clusterware daemons can run on each node.
    • On Unix, the clusterware is started by the init.ohasd script, which is defined with the "respawn" option in /etc/inittab.
    • If a node is deemed unhealthy, it is evicted from the cluster (or rebooted) in order to keep the rest of the cluster healthy. For more information, refer to Note: 1050693.1 "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)"
    • Node time can be kept in sync either by third-party time-synchronization software (such as NTP) or, in its absence, by CTSS. For more information, refer to Note: 1054006.1
    • To install an earlier database version, the nodes must first be pinned in the cluster; otherwise you will run into error ORA-29702. For more information, refer to Note 946332.1 and Note: 948456.1
    • The cluster can be started by booting the node or by running "crsctl start crs". "crsctl start cluster" starts the clusterware on all nodes. Note that crsctl is in the <GRID_HOME> directory, and that "crsctl start cluster" only works when ohasd is already running.
    • The clusterware can be stopped by shutting down the node or by running "crsctl stop crs". "crsctl stop cluster" stops the clusterware on all nodes. Again, crsctl is in the <GRID_HOME> directory.
    • Killing clusterware processes by hand is not supported.
    • Instances now show up as part of the .db resource in "crsctl stat res -t" output; there is no separate .inst resource in 11gR2.
    Note: it is recommended to follow the best practices described in Note: 810394.1.
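
The start/stop commands from the bullets above can be sketched as a small script. This is a minimal illustration, not an official procedure: the GRID_HOME path is an assumed example, and DRY_RUN=1 makes the script print the crsctl commands instead of executing them (a real run requires root on a cluster node).

```shell
#!/bin/sh
# Sketch of the cluster start/stop commands described above.
# GRID_HOME below is an assumed example path; adjust for your system.
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "$@"            # dry run: print the command instead of executing it
  else
    "$@"
  fi
}

# Start the full stack on the local node (ohasd and everything above it):
run "$GRID_HOME/bin/crsctl" start crs

# Start the clusterware on all nodes -- works only once ohasd is running:
run "$GRID_HOME/bin/crsctl" start cluster -all

# Stop the stack on the local node:
run "$GRID_HOME/bin/crsctl" stop crs
```

With DRY_RUN unset to 0 on a real node, the same helper executes the commands as root.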

    Clusterware Startup Sequence

    The Clusterware startup sequence is shown below (image from the "Oracle Clusterware Administration and Deployment Guide"):

    Don't let the diagram scare you: you do not need to manage all of these processes yourself; that is Clusterware's job!

    In short: INIT spawns init.ohasd (with the respawn option), which starts the OHASD process (Oracle High Availability Services Daemon). OHASD in turn spawns four other processes.

    Level 1: OHASD spawns:

    • cssdagent - Agent responsible for spawning CSSD.
    • orarootagent - Agent responsible for managing all root-owned ohasd resources.
    • oraagent - Agent responsible for managing all oracle-owned ohasd resources.
    • cssdmonitor - Monitors CSSD and node health (along with cssdagent).

    Level 2: OHASD rootagent spawns:

    • CRSD - Primary daemon responsible for managing cluster resources.
    • CTSSD - Cluster Time Synchronization Services Daemon
    • Diskmon
    • ACFS (ASM Cluster File System) drivers

    Level 2: OHASD oraagent spawns:

    • MDNSD - Used for DNS lookups
    • GIPCD - Used for inter-node communication
    • GPNPD - Grid Plug & Play Profile Daemon
    • EVMD - Event Monitor Daemon
    • ASM - ASM resource

    Level 3: CRSD spawns:

    • orarootagent - Agent responsible for managing all root-owned crsd resources.
    • oraagent - Agent responsible for managing all oracle-owned crsd resources.

    Level 4: CRSD rootagent spawns:

    • Network resource - Monitors the public network
    • SCAN VIP(s) - Single Client Access Name Virtual IPs
    • Node VIPs - One per node
    • ACFS Registry - Mounts the ASM Cluster File System
    • GNS VIP (optional) - VIP for GNS

    Level 4: CRSD oraagent spawns:

    • ASM Resource - ASM instance resource
    • Diskgroup - Used for managing/monitoring ASM diskgroups
    • DB Resource - Used for managing/monitoring the database and instances
    • SCAN Listener - Listener for the SCAN, listening on the SCAN VIP
    • Listener - Node listener, listening on the Node VIP
    • Services - Used for managing/monitoring services
    • ONS - Oracle Notification Service
    • eONS - Enhanced Oracle Notification Service
    • GSD - For 9i backward compatibility
    • GNS (optional) - Grid Naming Service - performs name resolution

    The following picture shows the different levels more clearly:

    Important Log Locations

    The Clusterware daemon logs are all under <GRID_HOME>/log/<nodename>. The structure under <GRID_HOME>/log/<nodename> is:

    alert<NODENAME>.log - check this file first for any clusterware issue
    ./admin:
    ./agent:
    ./agent/crsd:
    ./agent/crsd/oraagent_oracle:
    ./agent/crsd/ora_oc4j_type_oracle:
    ./agent/crsd/orarootagent_root:
    ./agent/ohasd:
    ./agent/ohasd/oraagent_oracle:
    ./agent/ohasd/oracssdagent_root:
    ./agent/ohasd/oracssdmonitor_root:
    ./agent/ohasd/orarootagent_root:
    ./client:
    ./crsd:
    ./cssd:
    ./ctssd:
    ./diskmon:
    ./evmd:
    ./gipcd:
    ./gnsd:
    ./gpnpd:
    ./mdnsd:
    ./ohasd:
    ./racg:
    ./racg/racgeut:
    ./racg/racgevtf:
    ./racg/racgmain:
    ./srvm:

    The cfgtoollogs directories under <GRID_HOME> and $ORACLE_BASE contain other important log files, for example from rootcrs.pl and from configuration assistants such as ASMCA.

    ASM logs are located under $ORACLE_BASE/diag/asm/+asm/<ASM Instance Name>/trace.

    The diagcollection.pl script in the <GRID_HOME>/bin directory can collect the important log files automatically. Run it as root.
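
As a quick illustration of how one might sweep the log tree above for errors, the sketch below recreates a tiny mock of the directory structure and greps it for CRS- messages. The node name, file contents, and error text are made up; on a real system you would point LOGROOT at <GRID_HOME>/log/<nodename> instead.

```shell
# Sketch: search the clusterware log tree for CRS- error messages.
# A mock tree is built here so the sketch runs anywhere; the node name
# (racbde1) and the log lines are invented for illustration only.
LOGROOT=$(mktemp -d)/log/racbde1
mkdir -p "$LOGROOT/crsd" "$LOGROOT/cssd" "$LOGROOT/agent/crsd/orarootagent_root"
echo "2014-10-30 12:00:00.000: [crsd] CRS-1006: mock error for illustration" \
  > "$LOGROOT/crsd/crsd.log"
echo "clean startup message" > "$LOGROOT/cssd/ocssd.log"

# List every log file in the tree that contains a CRS- message:
grep -r -l "CRS-" "$LOGROOT"
```

On a live system the same grep, pointed at the real directory, narrows down which daemon's log to read after checking alert<NODENAME>.log.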

    Checking Clusterware Resource Status

    The following command shows the status of all cluster resources:


    $ ./crsctl status resource -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.DATADG.dg
                   ONLINE  ONLINE       racbde1
                   ONLINE  ONLINE       racbde2
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       racbde1
                   ONLINE  ONLINE       racbde2
    ora.SYSTEMDG.dg
                   ONLINE  ONLINE       racbde1
                   ONLINE  ONLINE       racbde2
    ora.asm
                   ONLINE  ONLINE       racbde1                  Started
                   ONLINE  ONLINE       racbde2                  Started
    ora.eons
                   ONLINE  ONLINE       racbde1
                   ONLINE  ONLINE       racbde2
    ora.gsd
                   OFFLINE OFFLINE      racbde1
                   OFFLINE OFFLINE      racbde2
    ora.net1.network
                   ONLINE  ONLINE       racbde1
                   ONLINE  ONLINE       racbde2
    ora.ons
                   ONLINE  ONLINE       racbde1
                   ONLINE  ONLINE       racbde2
    ora.registry.acfs
                   ONLINE  ONLINE       racbde1
                   ONLINE  ONLINE       racbde2
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       racbde1
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       racbde2
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       racbde2
    ora.oc4j
          1        OFFLINE OFFLINE
    ora.rac.db
          1        ONLINE  ONLINE       racbde1                  Open
          2        ONLINE  ONLINE       racbde2                  Open
    ora.racbde1.vip
          1        ONLINE  ONLINE       racbde1
    ora.racbde2.vip
          1        ONLINE  ONLINE       racbde2
    ora.scan1.vip
          1        ONLINE  ONLINE       racbde1
    ora.scan2.vip
          1        ONLINE  ONLINE       racbde2
    ora.scan3.vip
          1        ONLINE  ONLINE       racbde2
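
When the output above gets long, it can help to filter it down to the resources that are not running. The following is a hedged sketch: it parses a made-up excerpt of the output shown above with awk; on a live cluster you would pipe "crsctl status resource -t" into the same filter instead of using the saved file.

```shell
# Sketch: extract OFFLINE entries from saved "crsctl stat res -t" output.
# The sample below is an excerpt in the format shown above.
cat > /tmp/crs_stat.txt <<'EOF'
ora.gsd
               OFFLINE OFFLINE      racbde1
               OFFLINE OFFLINE      racbde2
ora.ons
               ONLINE  ONLINE       racbde1
ora.oc4j
      1        OFFLINE OFFLINE
EOF

# Remember the current resource name (lines starting at column 1),
# then print it in front of every OFFLINE state line that follows.
awk '/^ora\./ {res=$1} /OFFLINE/ {print res, $0}' /tmp/crs_stat.txt
```

Note that some resources, such as ora.gsd (kept only for 9i compatibility, as listed above), are OFFLINE by default in 11.2, so an OFFLINE state is not necessarily a problem.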

    Managing Clusterware Resources

    Both srvctl and crsctl can be used to manage clusterware resources. A simple rule of thumb is to manage resources with srvctl whenever possible, and to use crsctl only for what srvctl cannot do (such as starting the cluster itself). Both tools have a help option that lists the full syntax.

    Note: only the available srvctl syntax is listed below. For more explanation of these commands, refer to the Oracle Documentation.


    Srvctl syntax:

    $ srvctl -h
    Usage: srvctl [-V]
    Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"]
    Usage: srvctl config database [-d <db_unique_name> [-a] ]
    Usage: srvctl start database -d <db_unique_name> [-o <start_options>]
    Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
    Usage: srvctl status database -d <db_unique_name> [-f] [-v]
    Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
    Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
    Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z]
    Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
    Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
    Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
    Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"

    Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
    Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
    Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>}  [-o <stop_options>] [-f]
    Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
    Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
    Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
    Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
    Usage: srvctl remove instance -d <db_unique_name> [-i <inst_name>] [-f] [-y]

    Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <server_pool> [-c {UNIFORM | SINGLETON}] } [-k   <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
    Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
    Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-a]
    Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
    Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
    Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
    Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
    Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
    Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
    Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>]
    Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
           Specify instances for an administrator-managed database, or nodes for a policy managed database
    Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
    Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
    Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]

    Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>]  [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
    Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
    Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-m <multicast-ip-address>] [-p <multicast-portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
    Usage: srvctl start nodeapps [-n <node_name>] [-v]
    Usage: srvctl stop nodeapps [-n <node_name>] [-f] [-r] [-v]
    Usage: srvctl status nodeapps
    Usage: srvctl enable nodeapps [-v]
    Usage: srvctl disable nodeapps [-v]
    Usage: srvctl remove nodeapps [-f] [-y] [-v]
    Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "<name_list>"]
    Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
    Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]

    Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
    Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
    Usage: srvctl disable vip -i <vip_name> [-v]
    Usage: srvctl enable vip -i <vip_name> [-v]
    Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
    Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
    Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
    Usage: srvctl stop vip { -n <node_name>  | -i <vip_name> } [-f] [-r] [-v]
    Usage: srvctl status vip { -n <node_name> | -i <vip_name> }
    Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
    Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]

    Usage: srvctl add asm [-l <lsnr_name>]
    Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
    Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
    Usage: srvctl config asm [-a]
    Usage: srvctl status asm [-n <node_name>] [-a]
    Usage: srvctl enable asm [-n <node_name>]
    Usage: srvctl disable asm [-n <node_name>]
    Usage: srvctl modify asm [-l <lsnr_name>]
    Usage: srvctl remove asm [-f]
    Usage: srvctl getenv asm [-t <name>[, ...]]
    Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
    Usage: srvctl unsetenv asm -t "<name>[, ...]"

    Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
    Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
    Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a]
    Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
    Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
    Usage: srvctl remove diskgroup -g <dg_name> [-f]

    Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
    Usage: srvctl config listener [-l <lsnr_name>] [-a]
    Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
    Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
    Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>]
    Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
    Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
    Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
    Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
    Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
    Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
    Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"

    Usage: srvctl add scan -n <scan_name> [-k <network_number> [-S <subnet>/<netmask>[/if1[|if2|...]]]]
    Usage: srvctl config scan [-i <ordinal_number>]
    Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
    Usage: srvctl stop scan [-i <ordinal_number>] [-f]
    Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
    Usage: srvctl status scan [-i <ordinal_number>]
    Usage: srvctl enable scan [-i <ordinal_number>]
    Usage: srvctl disable scan [-i <ordinal_number>]
    Usage: srvctl modify scan -n <scan_name>
    Usage: srvctl remove scan [-f] [-y]
    Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
    Usage: srvctl config scan_listener [-i <ordinal_number>]
    Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
    Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
    Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
    Usage: srvctl status scan_listener [-i <ordinal_number>]
    Usage: srvctl enable scan_listener [-i <ordinal_number>]
    Usage: srvctl disable scan_listener [-i <ordinal_number>]
    Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
    Usage: srvctl remove scan_listener [-f] [-y]

    Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
    Usage: srvctl config srvpool [-g <pool_name>]
    Usage: srvctl status srvpool [-g <pool_name>] [-a]
    Usage: srvctl status server -n "<server_list>" [-a]
    Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
    Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
    Usage: srvctl remove srvpool -g <pool_name>

    Usage: srvctl add oc4j [-v]
    Usage: srvctl config oc4j
    Usage: srvctl start oc4j [-v]
    Usage: srvctl stop oc4j [-f] [-v]
    Usage: srvctl relocate oc4j [-n <node_name>] [-v]
    Usage: srvctl status oc4j [-n <node_name>]
    Usage: srvctl enable oc4j [-n <node_name>] [-v]
    Usage: srvctl disable oc4j [-n <node_name>] [-v]
    Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v]
    Usage: srvctl remove oc4j [-f] [-v]

    Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
    Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
    Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>

    Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
    Usage: srvctl config filesystem -d <volume_device>
    Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
    Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
    Usage: srvctl status filesystem -d <volume_device>
    Usage: srvctl enable filesystem -d <volume_device>
    Usage: srvctl disable filesystem -d <volume_device>
    Usage: srvctl modify filesystem -d <volume_device> -u <user>
    Usage: srvctl remove filesystem -d <volume_device> [-f]

    Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
    Usage: srvctl stop gns [-v] [-n <node_name>] [-f]
    Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V]
    Usage: srvctl status gns -n <node_name>
    Usage: srvctl enable gns [-v] [-n <node_name>]
    Usage: srvctl disable gns [-v] [-n <node_name>]
    Usage: srvctl relocate gns [-v] [-n <node_name>] [-f]
    Usage: srvctl add gns [-v] -d <domain> -i <vip_name|ip> [-k <network_number> [-S <subnet>/<netmask>[/<interface>]]]
    srvctl modify gns [-v] [-f] [-l <log_level>] [-d <domain>] [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>]
    Usage: srvctl remove gns [-f] [-d <domain_name>]


    Crsctl syntax (for more explanation of these commands, refer to the Oracle Documentation):

    $ ./crsctl -h
    Usage: crsctl add       - add a resource, type or other entity
           crsctl check     - check a service, resource or other entity
           crsctl config    - output autostart configuration
           crsctl debug     - obtain or modify debug state
           crsctl delete    - delete a resource, type or other entity
           crsctl disable   - disable autostart
           crsctl enable    - enable autostart
           crsctl get       - get an entity value
           crsctl getperm   - get entity permissions
           crsctl lsmodules - list debug modules
           crsctl modify    - modify a resource, type or other entity
           crsctl query     - query service state
           crsctl pin       - Pin the nodes in the nodelist
           crsctl relocate  - relocate a resource, server or other entity
           crsctl replace   - replaces the location of voting files
           crsctl setperm   - set entity permissions
           crsctl set       - set an entity value
           crsctl start     - start a resource, server or other entity
           crsctl status    - get status of a resource or other entity
           crsctl stop      - stop a resource, server or other entity
           crsctl unpin     - unpin the nodes in the nodelist
           crsctl unset     - unset a entity value, restoring its default


    For the full syntax of each command, run "crsctl <command> -h".

    OCRCONFIG Options

    Note: only the available ocrconfig syntax is listed below. For more explanation of these commands, refer to the Oracle Documentation.
    $ ./ocrconfig -help
    Name:
            ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

    Synopsis:
            ocrconfig [option]
            option:
                    [-local] -export <filename>
                                                        - Export OCR/OLR contents to a file
                    [-local] -import <filename>         - Import OCR/OLR contents from a file
                    [-local] -upgrade [<user> [<group>]]
                                                        - Upgrade OCR from previous version
                    -downgrade [-version <version string>]
                                                        - Downgrade OCR to the specified version
                    [-local] -backuploc <dirname>       - Configure OCR/OLR backup location
                    [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                    [-local] -manualbackup              - Perform OCR/OLR backup
                    [-local] -restore <filename>        - Restore OCR/OLR from physical backup
                    -replace <current filename> -replacement <new filename>
                                                        - Replace a OCR device/file <filename1> with <filename2>
                    -add <filename>                     - Add a new OCR device/file
                    -delete <filename>                  - Remove a OCR device/file
                    -overwrite                          - Overwrite OCR configuration on disk
                    -repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
                                                        - Repair OCR configuration on the local node
                    -help                               - Print out this help information

    Note:
            * A log file will be created in
            $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
            you have file creation privileges in the above directory before
            running this tool.
            * Only -local -showbackup [manual] is supported.
            * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry
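
As an example of working with the backups that ocrconfig manages, the sketch below picks the newest automatic backup out of "ocrconfig -showbackup"-style output. The sample lines are made up in the documented node/timestamp/path format (backups under <GRID_HOME>/cdata/<clustername>/, as noted in the features section); on a real cluster you would feed the tool's actual output in instead.

```shell
# Sketch: find the most recent OCR backup from showbackup-style output.
# Sample lines are invented; columns are node, date, time, backup path.
cat > /tmp/ocr_backups.txt <<'EOF'
racbde1     2014/10/30 02:54:11     /u01/app/11.2.0/grid/cdata/racbde/backup00.ocr
racbde1     2014/10/29 22:54:10     /u01/app/11.2.0/grid/cdata/racbde/backup01.ocr
racbde1     2014/10/29 18:54:09     /u01/app/11.2.0/grid/cdata/racbde/backup02.ocr
EOF

# Timestamps are zero-padded, so a plain text sort on date+time works;
# reverse order puts the newest backup first, and $NF is the path column.
latest=$(sort -k2,3 -r /tmp/ocr_backups.txt | head -1 | awk '{print $NF}')
echo "$latest"
```

The resulting path is what you would pass to "ocrconfig -restore" when recovering the OCR from a physical backup.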

    OLSNODES Options

    Note: only the available olsnodes syntax is listed below. For more explanation of these commands, refer to the Oracle Documentation.
    $ ./olsnodes -h
    Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
            where
                    -n print node number with the node name
                    -p print private interconnect address for the local node
                    -i print virtual IP address with the node name
                    <node> print information for the specified node
                    -l print information for the local node
                    -s print node status - active or inactive
                    -t print node type - pinned or unpinned
                    -g turn on logging
                    -v Run in debug mode; use at direction of Oracle Support only.
                    -c print clusterware name
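
A small sketch of how the olsnodes output might be checked: it flags any node whose status is not Active in made-up "olsnodes -n -s -t"-style output (columns: node name, node number, status, pin state).

```shell
# Sketch: flag problem nodes in "olsnodes -n -s -t"-style output.
# The sample data is invented; pipe the real tool output in instead.
cat > /tmp/olsnodes.txt <<'EOF'
racbde1 1       Active  Unpinned
racbde2 2       Inactive        Unpinned
EOF

# Column 3 is the node status; report anything that is not Active.
awk '$3 != "Active" {print $1 " is " $3}' /tmp/olsnodes.txt
```

Unpinned is the normal state in 11.2; as noted in the features section, nodes only need to be pinned when running pre-11.2 databases.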

    Cluster Verification Options

    Note: only the available cluvfy syntax is listed below. For more explanation of these commands, refer to the Oracle Documentation.


    Component options:

    $ ./cluvfy comp -list

    USAGE:
    cluvfy comp  <component-name> <component-specific options>  [-verbose]

    Valid components are:
            nodereach : checks reachability between nodes
            nodecon   : checks node connectivity
            cfs       : checks CFS integrity
            ssa       : checks shared storage accessibility
            space     : checks space availability
            sys       : checks minimum system requirements
            clu       : checks cluster integrity
            clumgr    : checks cluster manager integrity
            ocr       : checks OCR integrity
            olr       : checks OLR integrity
            ha        : checks HA integrity
            crs       : checks CRS integrity
            nodeapp   : checks node applications existence
            admprv    : checks administrative privileges
            peer      : compares properties with peers
            software  : checks software distribution
            asm       : checks ASM integrity
            acfs       : checks ACFS integrity
            gpnp      : checks GPnP integrity
            gns       : checks GNS integrity
            scan      : checks SCAN configuration
            ohasd     : checks OHASD integrity
            clocksync      : checks Clock Synchronization
            vdisk      : check Voting Disk Udev settings



    Stage options:

    $ ./cluvfy stage -list

    USAGE:
    cluvfy stage {-pre|-post} <stage-name> <stage-specific options>  [-verbose]

    Valid stage options and stage names are:
            -post hwos    :  post-check for hardware and operating system
            -pre  cfs     :  pre-check for CFS setup
            -post cfs     :  post-check for CFS setup
            -pre  crsinst :  pre-check for CRS installation
            -post crsinst :  post-check for CRS installation
            -pre  hacfg   :  pre-check for HA configuration
            -post hacfg   :  post-check for HA configuration
            -pre  dbinst  :  pre-check for database installation
            -pre  acfscfg  :  pre-check for ACFS Configuration.
            -post acfscfg  :  post-check for ACFS Configuration.
            -pre  dbcfg   :  pre-check for database configuration
            -pre  nodeadd :  pre-check for node addition.
            -post nodeadd :  post-check for node addition.
            -post nodedel :  post-check for node deletion.
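
A typical use of the stage checks above can be sketched as follows. The node names are made-up examples, and DRY_RUN=1 prints the cluvfy commands instead of executing them, since they only make sense on real cluster nodes.

```shell
# Sketch: pre-installation verification with the stage checks listed above.
# Node names are invented; DRY_RUN=1 prints rather than executes.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

# After hardware/OS setup, and again before installing the clusterware:
run ./cluvfy stage -post hwos -n racbde1,racbde2 -verbose
run ./cluvfy stage -pre crsinst -n racbde1,racbde2 -verbose
```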