Linux Platform Oracle 18c RAC Installation


Linux Platform Oracle 18c RAC Installation Part 1: Preparation


I. Pre-Implementation Preparation

II. Pre-Installation Preparation

Linux Platform Oracle 18c RAC installation guide:

Part 1: Preparation

Part 2: GI Configuration

Part 3: DB Configuration

Installation environment for this guide: OEL 7.5 + Oracle 18.3 GI & RAC

I. Pre-Implementation Preparation

1.1 Install the Operating System on Both Servers

Prepare two identically configured servers and install the same version of Linux on each. Keep the installation DVD or ISO image.

Here I use OEL 7.5, with identical filesystem layouts on both servers. The OEL 7.5 ISO image is kept on the servers so it can be used later to set up a local yum repository.

1.2 Oracle Installation Media

Oracle 18.3 ships as two zip files (9 GB+ in total, so watch your free space):

    LINUX.X64_180000_grid_home.zip MD5: CD42D137FD2A2EEB4E911E8029CC82A9

    LINUX.X64_180000_db_home.zip MD5: 99A7C4A088A8A502C261E741A8339AE8

Download them from the Oracle website; they only need to be uploaded to node 1.
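A quick integrity check against the MD5 values above is worth doing before unzipping (assuming the zips were uploaded to /tmp):

cd /tmp
md5sum LINUX.X64_180000_grid_home.zip LINUX.X64_180000_db_home.zip
# The sums must match the values listed above;
# a mismatch means the upload is corrupt and must be redone.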

1.3 Shared Storage Planning

Carve out shared LUNs from the storage that both hosts can see simultaneously: three 1 GB LUNs for OCR and the voting disks, one 40 GB LUN for the GIMR (Grid Infrastructure Management Repository), and the rest planned for data and the FRA.

Bind the devices with either multipath or udev rules, as appropriate. Here I use multipath binding. The commonly used commands:

multipath -ll    # list the current multipath topology
multipath -F     # flush all unused multipath maps
multipath -v2    # rescan and rebuild the maps (verbose)
multipath -ll    # confirm the new topology
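For reference, a minimal /etc/multipath.conf sketch. The WWID shown is hypothetical (obtain real values with /usr/lib/udev/scsi_id -g -u /dev/sdX) and the alias names are just one possible convention:

defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        # hypothetical WWID -- replace with the real value from scsi_id
        wwid  360000000000000000e00000000010001
        alias ocr1
    }
    # ...one multipath {} stanza per LUN (ocr2, ocr3, gimr, data1, fra1, ...)
}

After editing, rebuild the maps with multipath -r (or restart multipathd).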

In my lab, the storage LUNs are simulated by an iSCSI server; the main server-side (targetcli) configuration is shown below:

o- / ................................................ [...]
  o- backstores ..................................... [...]
  | o- block ........................................ [Storage Objects: 8]
  | | o- disk1 ...................................... [/dev/mapper/vg_storage-lv_lun1 (1.0GiB) write-thru activated]
  | | | o- alua ..................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- disk2 ...................................... [/dev/mapper/vg_storage-lv_lun2 (1.0GiB) write-thru activated]
  | | | o- alua ..................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- disk3 ...................................... [/dev/mapper/vg_storage-lv_lun3 (1.0GiB) write-thru activated]
  | | | o- alua ..................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- disk4 ...................................... [/dev/mapper/vg_storage-lv_lun4 (40.0GiB) write-thru activated]
  | | | o- alua ..................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- disk5 ...................................... [/dev/mapper/vg_storage-lv_lun5 (10.0GiB) write-thru activated]
  | | | o- alua ..................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- disk6 ...................................... [/dev/mapper/vg_storage-lv_lun6 (10.0GiB) write-thru activated]
  | | | o- alua ..................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- disk7 ...................................... [/dev/mapper/vg_storage-lv_lun7 (10.0GiB) write-thru activated]
  | | | o- alua ..................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | | o- disk8 ...................................... [/dev/mapper/vg_storage-lv_lun8 (16.0GiB) write-thru activated]
  | |   o- alua ..................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
  | o- fileio ....................................... [Storage Objects: 0]
  | o- pscsi ........................................ [Storage Objects: 0]
  | o- ramdisk ...................................... [Storage Objects: 0]
  o- iscsi .......................................... [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.storage-c.x8664:sn.bc3a6511567c ... [TPGs: 1]
  |   o- tpg1 ....................................... [no-gen-acls, no-auth]
  |     o- acls ..................................... [ACLs: 1]
  |     | o- iqn.2003-01.org.linux-iscsi.storage-c.x8664:sn.bc3a6511567c:client ... [Mapped LUNs: 8]
  |     |   o- mapped_lun0 .......................... [lun0 block/disk1 (rw)]
  |     |   o- mapped_lun1 .......................... [lun1 block/disk2 (rw)]
  |     |   o- mapped_lun2 .......................... [lun2 block/disk3 (rw)]
  |     |   o- mapped_lun3 .......................... [lun3 block/disk4 (rw)]
  |     |   o- mapped_lun4 .......................... [lun4 block/disk5 (rw)]
  |     |   o- mapped_lun5 .......................... [lun5 block/disk6 (rw)]
  |     |   o- mapped_lun6 .......................... [lun6 block/disk7 (rw)]
  |     |   o- mapped_lun7 .......................... [lun7 block/disk8 (rw)]
  |     o- luns ..................................... [LUNs: 8]
  |     | o- lun0 ................................... [block/disk1 (/dev/mapper/vg_storage-lv_lun1) (default_tg_pt_gp)]
  |     | o- lun1 ................................... [block/disk2 (/dev/mapper/vg_storage-lv_lun2) (default_tg_pt_gp)]
  |     | o- lun2 ................................... [block/disk3 (/dev/mapper/vg_storage-lv_lun3) (default_tg_pt_gp)]
  |     | o- lun3 ................................... [block/disk4 (/dev/mapper/vg_storage-lv_lun4) (default_tg_pt_gp)]
  |     | o- lun4 ................................... [block/disk5 (/dev/mapper/vg_storage-lv_lun5) (default_tg_pt_gp)]
  |     | o- lun5 ................................... [block/disk6 (/dev/mapper/vg_storage-lv_lun6) (default_tg_pt_gp)]
  |     | o- lun6 ................................... [block/disk7 (/dev/mapper/vg_storage-lv_lun7) (default_tg_pt_gp)]
  |     | o- lun7 ................................... [block/disk8 (/dev/mapper/vg_storage-lv_lun8) (default_tg_pt_gp)]
  |     o- portals .................................. [Portals: 1]
  |       o- 0.0.0.0:3260 ........................... [OK]
  o- loopback ....................................... [Targets: 0]
/>

For more background on these topics, refer to my earlier articles.

The minimal udev + multipath configuration (this can be done later, after the grid user and groups are created):

# vi /etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

# udevadm control --reload
# udevadm trigger

1.4 Network Planning

Plan both the public network and the private network.

In this lab, enp0s3 carries the public IP, enp0s8 carries the private IP, and enp0s9/enp0s10 are two extra networks used to simulate IP SAN. For production, adjust the plan to the actual environment.

II. Pre-Installation Preparation

2.1 Verify System Time on All Nodes

Check the system time on each node:

# Verify the time and time zone are correct
date

# Disable the chrony service and move its config file aside (CTSS will be used instead)
systemctl list-unit-files | grep chronyd
systemctl status chronyd
systemctl disable chronyd
systemctl stop chronyd
mv /etc/chrony.conf /etc/chrony.conf_bak

In this lab I use neither NTP nor chrony; with both absent, Oracle Clusterware automatically runs its own CTSS service in active mode.

2.2 Disable the Firewall and SELinux on All Nodes

Disable the firewall on each node:

systemctl list-unit-files | grep firewalld
systemctl status firewalld
systemctl disable firewalld
systemctl stop firewalld

Disable SELinux on each node:

getenforce
cat /etc/selinux/config

# Edit /etc/selinux/config manually to set SELINUX=disabled, or use:
sed -i '/^SELINUX=.*/ s//SELINUX=disabled/' /etc/selinux/config
setenforce 0

Finally, confirm that SELinux is disabled on every node.

2.3 Install the Required Packages on All Nodes

    yum install -y oracle-database-server-12cR2-preinstall.x86_64

OEL 7.5 still names this package 12cR2-preinstall; there is no 18c-specific one, but testing shows the dependency set is essentially identical.

If you use another Linux distribution, such as the common RHEL, you must yum-install the packages required by the official documentation yourself, as sketched below.
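As a sketch, a yum command along these lines covers the usual 18c dependencies on RHEL 7; treat the exact package list as an assumption and verify it against the official installation guide for your release:

yum install -y bc binutils compat-libcap1 compat-libstdc++-33 \
    elfutils-libelf elfutils-libelf-devel glibc glibc-devel ksh \
    libaio libaio-devel libgcc libstdc++ libstdc++-devel \
    libX11 libXau libXi libXtst libXrender libXrender-devel libxcb \
    make net-tools nfs-utils smartmontools sysstat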

2.4 Configure /etc/hosts on All Nodes

Edit the /etc/hosts file:

#public ip
192.168.1.40 db40
192.168.1.42 db42

#virtual ip
192.168.1.41 db40-vip
192.168.1.43 db42-vip

#scan ip
192.168.1.44 db18c-scan

#private ip
10.10.1.40 db40-priv
10.10.1.42 db42-priv
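A quick connectivity check once the file is identical on both nodes (the VIP and SCAN addresses will not answer until GI is running, so only the public and private addresses are tested here):

for h in db40 db42 db40-priv db42-priv; do
    ping -c 1 $h > /dev/null && echo "$h OK"
done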

2.5 Create the Required Users and Groups on All Nodes

Create the groups and users, then set passwords for oracle and grid:

groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba

useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid

echo oracle | passwd --stdin oracle
echo oracle | passwd --stdin grid

In this test environment both passwords are simply oracle; in production, use complex passwords that meet your security standards.
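Before continuing, confirm the group memberships on every node:

id oracle    # expect gid=oinstall, groups including dba,asmdba,backupdba,dgdba,kmdba,racdba,oper
id grid      # expect gid=oinstall, groups including asmadmin,asmdba,asmoper,dba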

2.6 Create the Installation Directories on All Nodes

Create the installation directories on each node (as root):

mkdir -p /u01/app/18.3.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

2.7 Modify System Configuration Files on All Nodes

Kernel parameters: vi /etc/sysctl.conf

OEL actually sets these values when the preinstall package is installed; the list below is mainly for verification, and serves as a reference for RHEL systems:

# vi /etc/sysctl.conf
# Append the following:
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.panic_on_oops = 1
net.ipv4.conf.enp0s8.rp_filter = 2
net.ipv4.conf.enp0s9.rp_filter = 2
net.ipv4.conf.enp0s10.rp_filter = 2

Apply the changes:

# sysctl -p /etc/sysctl.conf

Note: enp0s9 and enp0s10 are the dedicated IP SAN interfaces; like the private interconnect, they are set to loose mode (rp_filter = 2).

# sysctl -p /etc/sysctl.d/98-oracle.conf
net.ipv4.conf.enp0s8.rp_filter = 2
net.ipv4.conf.enp0s9.rp_filter = 2
net.ipv4.conf.enp0s10.rp_filter = 2

Shell limits for the users: vi /etc/security/limits.d/99-grid-oracle-limits.conf

oracle soft nproc  16384
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack  10240
oracle hard stack  32768
grid   soft nproc  16384
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
grid   soft stack  10240
grid   hard stack  32768

Note that the file OEL configures automatically, /etc/security/limits.d/oracle-database-server-12cR2-preinstall.conf, does not cover the grid user; add those entries manually.

    vi /etc/profile.d/oracle-grid.sh

# Setting the appropriate ulimits for the oracle and grid users
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
if [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

OEL does not create this file automatically either; configure it manually.

2.8 Set User Environment Variables on All Nodes

grid user on node 1:

export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/18.3.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

grid user on node 2:

export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/18.3.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

oracle user on node 1:

export ORACLE_SID=cdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/18.3.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

oracle user on node 2:

export ORACLE_SID=cdb2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/18.3.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
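To make these settings survive logout, one minimal approach (assuming bash is the login shell) is to append them to each user's ~/.bash_profile. For example, for grid on node 1:

cat >> /home/grid/.bash_profile <<'EOF'
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/18.3.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
EOF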

III. GI (Grid Infrastructure) Installation


3.1 Unzip the GI Installation Package

    su - grid

Unzip the GRID package into the grid user's $ORACLE_HOME:

[grid@db40 grid]$ pwd
/u01/app/18.3.0/grid
[grid@db40 grid]$ unzip /tmp/LINUX.X64_180000_grid_home.zip

3.2 Install and Configure Xmanager

After installing Xmanager Enterprise on your Windows machine, run Xstart.exe and configure it as follows:

Session: db40

Host: 192.168.1.40

Protocol: SSH

User Name: grid

Execution Command: /usr/bin/xterm -ls -display $DISPLAY

Click RUN and enter the grid user's password; if an xterm window opens normally, the configuration works.

Alternatively, start Xmanager - Passive and set the DISPLAY variable temporarily in a SecureCRT session to launch GUI programs directly:

    export DISPLAY=192.168.1.31:0.0

3.3 Grant Ownership of the Shared Storage LUNs

    vi /etc/udev/rules.d/12-dm-permissions.rules

# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

Reload the rules so they take effect:

# udevadm control --reload
# udevadm trigger
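Then verify on both nodes that the device-mapper nodes behind the mpath devices are now owned by grid:asmadmin:

ls -l /dev/mapper/mpath*    # symlinks to the underlying dm-N devices
ls -l /dev/dm-*             # the mpath-backed dm devices should show grid asmadmin rw-rw----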

3.4 Configure GI with the Xmanager GUI

Log in as grid through Xmanager, change to $ORACLE_HOME, and run gridSetup.sh to configure GI:

$ cd $ORACLE_HOME
$ ./gridSetup.sh

The GI configuration flow changed somewhat starting with 12cR2, and 18c follows the same pattern. The screenshots below walk through the graphical installation:

Note: the Public network uses enp0s3 and the ASM & Private network uses enp0s8; enp0s9 and enp0s10 are the interfaces simulating IP SAN, so they are not used here.

Note: little has changed here from earlier releases; I again choose the three 1 GB disks with Normal redundancy for OCR and the voting disks.

Note: there is a separate screen for the GIMR storage; I use the single 40 GB disk with External redundancy. The GIMR was introduced in 12c and is still present in 18c.

Note: review every issue raised by the prerequisite checks, and click "Ignore All" only after confirming each one really can be ignored; if any RPM packages are reported missing, install them with yum first. You can also run the check from the command line before launching the GUI, as sketched below.
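The GI media ships runcluvfy.sh for exactly this purpose; a sketch of the pre-check from the unzipped grid home (node names are those of this environment):

cd /u01/app/18.3.0/grid
./runcluvfy.sh stage -pre crsinst -n db40,db42 -verbose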

Note: run the root scripts to completion on the first node before running them on any other node.

Run the root scripts on the first node:

[root@db40 tmp]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@db40 tmp]# /u01/app/18.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/db40/crsconfig/rootcrs_db40_2018-08-04_10-25-09AM.log
2018/08/04 10:25:34 CLSRSC-594: Executing installation step 1 of 20: 'SetupTFA'.
2018/08/04 10:25:35 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 10:26:29 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 10:26:29 CLSRSC-594: Executing installation step 2 of 20: 'ValidateEnv'.
2018/08/04 10:26:29 CLSRSC-363: User ignored prerequisites during installation
2018/08/04 10:26:29 CLSRSC-594: Executing installation step 3 of 20: 'CheckFirstNode'.
2018/08/04 10:26:35 CLSRSC-594: Executing installation step 4 of 20: 'GenSiteGUIDs'.
2018/08/04 10:26:37 CLSRSC-594: Executing installation step 5 of 20: 'SaveParamFile'.
2018/08/04 10:26:53 CLSRSC-594: Executing installation step 6 of 20: 'SetupOSD'.
2018/08/04 10:26:53 CLSRSC-594: Executing installation step 7 of 20: 'CheckCRSConfig'.
2018/08/04 10:26:53 CLSRSC-594: Executing installation step 8 of 20: 'SetupLocalGPNP'.
2018/08/04 10:27:47 CLSRSC-594: Executing installation step 9 of 20: 'CreateRootCert'.
2018/08/04 10:27:54 CLSRSC-594: Executing installation step 10 of 20: 'ConfigOLR'.
2018/08/04 10:28:09 CLSRSC-594: Executing installation step 11 of 20: 'ConfigCHMOS'.
2018/08/04 10:28:10 CLSRSC-594: Executing installation step 12 of 20: 'CreateOHASD'.
2018/08/04 10:28:21 CLSRSC-594: Executing installation step 13 of 20: 'ConfigOHASD'.
2018/08/04 10:28:21 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2018/08/04 10:30:29 CLSRSC-594: Executing installation step 14 of 20: 'InstallAFD'.
2018/08/04 10:30:43 CLSRSC-594: Executing installation step 15 of 20: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db40'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db40' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 10:32:34 CLSRSC-594: Executing installation step 16 of 20: 'InstallKA'.
2018/08/04 10:32:47 CLSRSC-594: Executing installation step 17 of 20: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db40'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db40' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'db40'
CRS-2672: Attempting to start 'ora.mdnsd' on 'db40'
CRS-2676: Start of 'ora.mdnsd' on 'db40' succeeded
CRS-2676: Start of 'ora.evmd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db40'
CRS-2676: Start of 'ora.gpnpd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db40'
CRS-2672: Attempting to start 'ora.gipcd' on 'db40'
CRS-2676: Start of 'ora.cssdmonitor' on 'db40' succeeded
CRS-2676: Start of 'ora.gipcd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db40'
CRS-2672: Attempting to start 'ora.diskmon' on 'db40'
CRS-2676: Start of 'ora.diskmon' on 'db40' succeeded
CRS-2676: Start of 'ora.cssd' on 'db40' succeeded
[INFO] [DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-180804AM103332.log for details.
2018/08/04 10:37:38 CLSRSC-482: Running command: '/u01/app/18.3.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'db40'
CRS-2672: Attempting to start 'ora.storage' on 'db40'
CRS-2676: Start of 'ora.storage' on 'db40' succeeded
CRS-2676: Start of 'ora.crf' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'db40'
CRS-2676: Start of 'ora.crsd' on 'db40' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e234d69db62c4f41bff377eec5bed671.
Successful addition of voting disk eb9d2950a5aa4f4cbfa46432f7c4f709.
Successful addition of voting disk 84c44e2025be4fe3bf7d5a7a4049d4fd.
Successfully replaced voting disk group with +OCRVT.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ---------
 1. ONLINE   e234d69db62c4f41bff377eec5bed671 (/dev/mapper/mpatha) [OCRVT]
 2. ONLINE   eb9d2950a5aa4f4cbfa46432f7c4f709 (/dev/mapper/mpathb) [OCRVT]
 3. ONLINE   84c44e2025be4fe3bf7d5a7a4049d4fd (/dev/mapper/mpathc) [OCRVT]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db40'
CRS-2673: Attempting to stop 'ora.crsd' on 'db40'
CRS-2677: Stop of 'ora.crsd' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'db40'
CRS-2673: Attempting to stop 'ora.crf' on 'db40'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'db40'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'db40'
CRS-2677: Stop of 'ora.crf' on 'db40' succeeded
CRS-2677: Stop of 'ora.storage' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'db40'
CRS-2677: Stop of 'ora.drivers.acfs' on 'db40' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'db40' succeeded
CRS-2677: Stop of 'ora.asm' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'db40'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'db40'
CRS-2673: Attempting to stop 'ora.evmd' on 'db40'
CRS-2677: Stop of 'ora.evmd' on 'db40' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'db40'
CRS-2677: Stop of 'ora.cssd' on 'db40' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'db40'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'db40'
CRS-2677: Stop of 'ora.gipcd' on 'db40' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'db40' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db40' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/08/04 10:42:19 CLSRSC-594: Executing installation step 18 of 20: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'db40'
CRS-2672: Attempting to start 'ora.mdnsd' on 'db40'
CRS-2676: Start of 'ora.mdnsd' on 'db40' succeeded
CRS-2676: Start of 'ora.evmd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db40'
CRS-2676: Start of 'ora.gpnpd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'db40'
CRS-2676: Start of 'ora.gipcd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'db40'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db40'
CRS-2676: Start of 'ora.cssdmonitor' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db40'
CRS-2672: Attempting to start 'ora.diskmon' on 'db40'
CRS-2676: Start of 'ora.diskmon' on 'db40' succeeded
CRS-2676: Start of 'ora.crf' on 'db40' succeeded
CRS-2676: Start of 'ora.cssd' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'db40'
CRS-2672: Attempting to start 'ora.ctssd' on 'db40'
CRS-2676: Start of 'ora.ctssd' on 'db40' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db40'
CRS-2676: Start of 'ora.asm' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'db40'
CRS-2676: Start of 'ora.storage' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'db40'
CRS-2676: Start of 'ora.crsd' on 'db40' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: db40
CRS-6016: Resource auto-start has completed for server db40
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 10:45:28 CLSRSC-343: Successfully started Oracle Clusterware stack
2018/08/04 10:45:28 CLSRSC-594: Executing installation step 19 of 20: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'db40'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db40'
CRS-2676: Start of 'ora.asm' on 'db40' succeeded
CRS-2672: Attempting to start 'ora.OCRVT.dg' on 'db40'
CRS-2676: Start of 'ora.OCRVT.dg' on 'db40' succeeded
2018/08/04 10:49:35 CLSRSC-594: Executing installation step 20 of 20: 'PostConfig'.
[INFO] [DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-180804AM104944.log for details.
2018/08/04 10:55:24 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After they succeed, run the root scripts on the second node:

[root@db42 app]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@db42 app]# /u01/app/18.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/db42/crsconfig/rootcrs_db42_2018-08-04_11-09-32AM.log
2018/08/04 11:09:50 CLSRSC-594: Executing installation step 1 of 20: 'SetupTFA'.
2018/08/04 11:09:50 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 11:10:38 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2018/08/04 11:10:38 CLSRSC-594: Executing installation step 2 of 20: 'ValidateEnv'.
2018/08/04 11:10:38 CLSRSC-363: User ignored prerequisites during installation
2018/08/04 11:10:39 CLSRSC-594: Executing installation step 3 of 20: 'CheckFirstNode'.
2018/08/04 11:10:41 CLSRSC-594: Executing installation step 4 of 20: 'GenSiteGUIDs'.
2018/08/04 11:10:42 CLSRSC-594: Executing installation step 5 of 20: 'SaveParamFile'.
2018/08/04 11:10:48 CLSRSC-594: Executing installation step 6 of 20: 'SetupOSD'.
2018/08/04 11:10:48 CLSRSC-594: Executing installation step 7 of 20: 'CheckCRSConfig'.
2018/08/04 11:10:49 CLSRSC-594: Executing installation step 8 of 20: 'SetupLocalGPNP'.
2018/08/04 11:10:52 CLSRSC-594: Executing installation step 9 of 20: 'CreateRootCert'.
2018/08/04 11:10:52 CLSRSC-594: Executing installation step 10 of 20: 'ConfigOLR'.
2018/08/04 11:10:58 CLSRSC-594: Executing installation step 11 of 20: 'ConfigCHMOS'.
2018/08/04 11:10:58 CLSRSC-594: Executing installation step 12 of 20: 'CreateOHASD'.
2018/08/04 11:11:01 CLSRSC-594: Executing installation step 13 of 20: 'ConfigOHASD'.
2018/08/04 11:11:02 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2018/08/04 11:13:06 CLSRSC-594: Executing installation step 14 of 20: 'InstallAFD'.
2018/08/04 11:13:10 CLSRSC-594: Executing installation step 15 of 20: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db42'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db42' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 11:14:48 CLSRSC-594: Executing installation step 16 of 20: 'InstallKA'.
2018/08/04 11:14:50 CLSRSC-594: Executing installation step 17 of 20: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db42'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db42' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db42'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'db42'
CRS-2677: Stop of 'ora.drivers.acfs' on 'db42' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db42' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/08/04 11:15:18 CLSRSC-594: Executing installation step 18 of 20: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'db42'
CRS-2672: Attempting to start 'ora.mdnsd' on 'db42'
CRS-2676: Start of 'ora.evmd' on 'db42' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'db42'
CRS-2676: Start of 'ora.gpnpd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'db42'
CRS-2676: Start of 'ora.gipcd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'db42'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'db42'
CRS-2676: Start of 'ora.crf' on 'db42' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'db42'
CRS-2672: Attempting to start 'ora.diskmon' on 'db42'
CRS-2676: Start of 'ora.diskmon' on 'db42' succeeded
CRS-2676: Start of 'ora.cssd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'db42'
CRS-2672: Attempting to start 'ora.ctssd' on 'db42'
CRS-2676: Start of 'ora.ctssd' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'db42'
CRS-2676: Start of 'ora.crsd' on 'db42' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db42'
CRS-2676: Start of 'ora.asm' on 'db42' succeeded
CRS-6017: Processing resource auto-start for servers: db42
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'db42'
CRS-2672: Attempting to start 'ora.ons' on 'db42'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'db42'
CRS-2676: Start of 'ora.ons' on 'db42' succeeded
CRS-2676: Start of 'ora.asm' on 'db42' succeeded
CRS-2672: Attempting to start 'ora.proxy_advm' on 'db40'
CRS-2672: Attempting to start 'ora.proxy_advm' on 'db42'
CRS-2676: Start of 'ora.proxy_advm' on 'db40' succeeded
CRS-2676: Start of 'ora.proxy_advm' on 'db42' succeeded
CRS-6016: Resource auto-start has completed for server db42
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/04 11:21:10 CLSRSC-343: Successfully started Oracle Clusterware stack
2018/08/04 11:21:11 CLSRSC-594: Executing installation step 19 of 20: 'ConfigNode'.
2018/08/04 11:21:52 CLSRSC-594: Executing installation step 20 of 20: 'PostConfig'.
2018/08/04 11:22:06 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After the root scripts succeed, continue with the installation:

This phase is also very slow; the cluster software keeps getting bigger. Old-school DBAs may find themselves missing the 11g, or even 10g, days.

You can follow the installation log while it runs:

    tail -20f /tmp/GridSetupActions2018-08-03_11-27-06PM/gridSetupActions2018-08-03_11-27-06PM.log

Note: the log shows one phase, "starting read loop", took over an hour before the next phase began, most likely because of some anomaly; even excluding that window, the installation still took nearly two hours.

INFO:  [Aug 4, 2018 2:04:42 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Aug 4, 2018 3:25:37 PM] Completed Plugin named: mgmtca

Note: checking the log shows the final warning is only about using a single SCAN IP; it can be ignored.

GI configuration is now complete.

3.5 Verify Cluster Status with crsctl

Use crsctl stat res -t to view the cluster resource status. 18c introduces several new resources, so there is something new for DBAs to learn:

[grid@db40 grid]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.MGMT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.OCRVT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.chad
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.helper
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     IDLE,STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db40                     volume /opt/oracle/rhp_images/chkbase is unmounted,STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.net1.network
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.ons
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db40                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db40                     169.254.7.255 10.0.0.40,STABLE
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
      2        ONLINE  ONLINE       db42                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       db40                     STABLE
ora.db40.vip
      1        ONLINE  ONLINE       db40                     STABLE
ora.db42.vip
      1        ONLINE  ONLINE       db42                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db40                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db40                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------

    crsctl stat res -t -init

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       db40                     STABLE
ora.crf
      1        ONLINE  ONLINE       db40                     STABLE
ora.crsd
      1        ONLINE  ONLINE       db40                     STABLE
ora.cssd
      1        ONLINE  ONLINE       db40                     STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       db40                     STABLE
ora.ctssd
      1        ONLINE  ONLINE       db40                     ACTIVE:0,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       db40                     STABLE
ora.evmd
      1        ONLINE  ONLINE       db40                     STABLE
ora.gipcd
      1        ONLINE  ONLINE       db40                     STABLE
ora.gpnpd
      1        ONLINE  ONLINE       db40                     STABLE
ora.mdnsd
      1        ONLINE  ONLINE       db40                     STABLE
ora.storage
      1        ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------
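A few more standard clusterware checks are worth running at this point; all of these are stock commands, and the output will vary with your environment:

crsctl check cluster -all    # CRS/CSS/EVM status on every node
olsnodes -n -s               # node names, numbers, and status
crsctl check ctss            # confirm CTSS is providing time synchronization (active mode)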

3.6 Test Cluster Failover

Reboot node 2 and check the status from node 1:

[grid@db40 trace]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db40                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db40                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db40                     STABLE
ora.MGMT.dg
               ONLINE  ONLINE       db40                     STABLE
ora.OCRVT.dg
               ONLINE  ONLINE       db40                     STABLE
ora.chad
               ONLINE  ONLINE       db40                     STABLE
ora.helper
               OFFLINE OFFLINE      db40                     STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db40                     volume /opt/oracle/rhp_images/chkbase is unmounted,STABLE
ora.net1.network
               ONLINE  ONLINE       db40                     STABLE
ora.ons
               ONLINE  ONLINE       db40                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db40                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db40                     169.254.7.255 10.0.0.40,STABLE
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       db40                     STABLE
ora.db40.vip
      1        ONLINE  ONLINE       db40                     STABLE
ora.db42.vip
      1        ONLINE  INTERMEDIATE db40                     FAILED OVER,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db40                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db40                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db40                     STABLE
--------------------------------------------------------------------------------

Reboot node 1 and check the status from node 2:

[grid@db42 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db42                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db42                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db42                     STABLE
ora.MGMT.dg
               ONLINE  INTERMEDIATE db42                     STABLE
ora.OCRVT.dg
               ONLINE  INTERMEDIATE db42                     STABLE
ora.chad
               ONLINE  ONLINE       db42                     STABLE
ora.helper
               OFFLINE OFFLINE      db42                     STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db42                     STABLE
ora.net1.network
               ONLINE  ONLINE       db42                     STABLE
ora.ons
               ONLINE  ONLINE       db42                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db42                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db42                     169.254.7.154 10.0.0.42,STABLE
ora.asm
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       db42                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       db42                     STABLE
ora.db40.vip
      1        ONLINE  INTERMEDIATE db42                     FAILED OVER,STABLE
ora.db42.vip
      1        ONLINE  ONLINE       db42                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db42                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db42                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------

Appendix: cluster log location:

[grid@db40 log]$ cd $ORACLE_BASE/diag/crs/db40/crs/trace
[grid@db40 trace]$ pwd
/u01/app/grid/diag/crs/db40/crs/trace
[grid@db40 trace]$ tail -5f ocssd.trc
2018-08-04 17:35:16.507 : CSSD:758277888: clssgmMbrDataUpdt: Sending member data change to GMP for group HB+ASM, memberID 17:2:1
2018-08-04 17:35:16.509 : CSSD:770635520: clssgmpcMemberDataUpdt: grockName HB+ASM memberID 17:2:1, datatype 1 datasize 4
2018-08-04 17:35:16.514 : CSSD:755123968: clssgmcpDataUpdtCmpl: Status 0 mbr data updt memberID 17:2:1 from clientID 1:39:2
2018-08-04 17:35:17.319 : CSSD:3337582336: clssnmSendingThread: sending status msg to all nodes
2018-08-04 17:35:17.321 : CSSD:3337582336: clssnmSendingThread: sent 4 status msgs to all nodes
2018-08-04 17:35:17.793 : CSSD:762750720: clssgmpcGMCReqWorkerThread: processing msg (0x7f2914038720) type 2, msg size 76, payload (0x7f291403874c) size 32, sequence 27970, for clientID 1:39:2
2018-08-04 17:35:18.424 : CSSD:758277888: clssgmMbrDataUpdt: Processing member data change type 1, size 4 for group HB+ASM, memberID 17:2:1
2018-08-04 17:35:18.424 : CSSD:758277888: clssgmMbrDataUpdt: Sending member data change to GMP for group HB+ASM, memberID 17:2:1
2018-08-04 17:35:18.425 : CSSD:770635520: clssgmpcMemberDataUpdt: grockName HB+ASM memberID 17:2:1, datatype 1 datasize 4
2018-08-04 17:35:18.427 : CSSD:755123968: clssgmcpDataUpdtCmpl: Status 0 mbr data updt memberID 17:2:1 from clientID 1:39:2
2018-08-04 17:35:19.083 : CSSD:755123968: clssgmSendEventsToMbrs: Group GR+DB_+ASM, member count 1, event master 0, event type 6, event incarn 346, event member count 1, pids 31422-21167708,
2018-08-04 17:35:19.446 : CSSD:762750720: clssgmpcGMCReqWorkerThread: processing msg (0x7f2914038720) type 2, msg size 76, payload (0x7f291403874c) size 32, sequence 27972, for clientID 1:37:2

[grid@db40 trace]$ tail -20f alert.log
Build version: 18.0.0.0.0
Build full version: 18.3.0.0.0
Build hash: 9256567290
Bug numbers: NoTransactionInformation
Commands:
Build version: 18.0.0.0.0
Build full version: 18.3.0.0.0
Build hash: 9256567290
Bug numbers: NoTransactionInformation
2018-08-04 18:02:57.013 [CLSECHO(3376)]ACFS-9327: Verifying ADVM/ACFS devices.
2018-08-04 18:02:57.058 [CLSECHO(3384)]ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
2018-08-04 18:02:57.115 [CLSECHO(3395)]ACFS-9156: Detecting control device '/dev/ofsctl'.
2018-08-04 18:02:58.991 [CLSECHO(3482)]ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
2018-08-04 18:02:59.032 [CLSECHO(3490)]ACFS-9322: completed
2018-08-04 18:03:00.398 [OSYSMOND(3571)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 3571
2018-08-04 18:03:00.324 [CSSDMONITOR(3567)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 3567
2018-08-04 18:03:00.796 [CSSDAGENT(3598)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 3598
2018-08-04 18:03:01.461 [OCSSD(3621)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 3621
2018-08-04 18:03:03.374 [OCSSD(3621)]CRS-1713: CSSD daemon is started in hub mode
2018-08-04 18:03:23.052 [OCSSD(3621)]CRS-1707: Lease acquisition for node db40 number 1 completed
2018-08-04 18:03:29.122 [OCSSD(3621)]CRS-1605: CSSD voting file is online: /dev/mapper/mpathc; details in /u01/app/grid/diag/crs/db40/crs/trace/ocssd.trc.
2018-08-04 18:03:29.150 [OCSSD(3621)]CRS-1605: CSSD voting file is online: /dev/mapper/mpathb; details in /u01/app/grid/diag/crs/db40/crs/trace/ocssd.trc.
2018-08-04 18:03:31.087 [OCSSD(3621)]CRS-1605: CSSD voting file is online: /dev/mapper/mpatha; details in /u01/app/grid/diag/crs/db40/crs/trace/ocssd.trc.
2018-08-04 18:03:33.767 [OCSSD(3621)]CRS-1601: CSSD Reconfiguration complete. Active nodes are db40 db42 .
2018-08-04 18:03:35.311 [OLOGGERD(3862)]CRS-8500: Oracle Clusterware OLOGGERD process is starting with operating system process ID 3862
2018-08-04 18:03:35.833 [OCTSSD(3869)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 3869
2018-08-04 18:03:36.055 [OCSSD(3621)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
2018-08-04 18:03:39.797 [OCTSSD(3869)]CRS-2407: The new Cluster Time Synchronization Service reference node is host db42.
2018-08-04 18:03:39.810 [OCTSSD(3869)]CRS-2401: The Cluster Time Synchronization Service started on host db40.
2018-08-04 18:03:41.572 [CRSD(3956)]CRS-8500: Oracle Clusterware CRSD process is starting with operating system process ID 3956
2018-08-04 18:03:57.242 [CRSD(3956)]CRS-1012: The OCR service started on node db40.
2018-08-04 18:03:59.550 [CRSD(3956)]CRS-1201: CRSD started on node db40.

At this point, the 18c GI configuration is fully complete.

IV. DB (Database) Installation


4.1 Unzip the DB Installation Package

Log in as oracle and unzip the db package under $ORACLE_HOME (as with GI, the 18c database home is image-based: unzip and run, with no separate install step):

    Starting with Oracle Database 18c, installation and configuration of Oracle Database software is simplified with image-based installation.

[oracle@db40 ~]$ mkdir -p /u01/app/oracle/product/18.3.0/db_1
[oracle@db40 ~]$ cd $ORACLE_HOME/
[oracle@db40 db_1]$ pwd
/u01/app/oracle/product/18.3.0/db_1
[oracle@db40 db_1]$ unzip /tmp/LINUX.X64_180000_db_home.zip

4.2 Install the DB Software

Open Xmanager, log in as the oracle user, and install the database software.

[oracle@db40 db_1]$ pwd
/u01/app/oracle/product/18.3.0/db_1
[oracle@db40 db_1]$ export DISPLAY=192.168.1.31:0.0
[oracle@db40 db_1]$ ./runInstaller

The DB software installation steps are captured below:

Note: choose a software-only installation here; the database itself will be created with dbca after the ASM disk groups are in place.

Note: set up SSH user equivalence here; the installer can configure it for you, or you can do it by hand as sketched below.
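If you prefer the manual route over the installer's SSH connectivity button, a minimal sketch for the oracle user (run on db40, then repeat on db42; the same applies to grid before the GI installation):

ssh-keygen -t rsa                # accept the defaults, empty passphrase
ssh-copy-id oracle@db40
ssh-copy-id oracle@db42
ssh db40 date; ssh db42 date     # both must return without prompting for a password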

Note: issues the installer can fix automatically can be repaired by running the generated fixup script as prompted.

I still have a swap-size warning; since this is a resource-limited test environment it can be ignored, but in production you should absolutely size swap to meet the requirement.

Any other failed checks, in test or production alike, should not be ignored; remediate until they pass.

Note: at the end, run the single root script as prompted, on each node in turn.

This completes the DB software installation.

4.3 Create Disk Groups with ASMCA

Open Xmanager, log in as grid, and run asmca to create the ASM disk groups:

[grid@db40 ~]$ export DISPLAY=192.168.1.31:0.0
[grid@db40 ~]$ asmca

It took several minutes for the asmca GUI to come up, opening with the colorful 18c splash screen:

Then the main asmca window appears:

I create a DATA disk group and an FRA disk group, both with External redundancy (choose External in production only if the underlying storage is already RAID-protected).

The new DATA and FRA disk groups are created and mounted successfully; a quick command-line check follows.
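As grid, asmcmd lsdg lists the mounted disk groups and their free space:

[grid@db40 ~]$ asmcmd lsdg
# Expect DATA, FRA, MGMT and OCRVT in State MOUNTED, with plausible Free_MB values.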

4.4 Create the Database with DBCA

Open Xmanager, log in as oracle, and create the database with the dbca GUI. Pay attention to the database character set here (I had originally intended ZHS16GBK).

Screenshots of the DBCA run follow:

Note: choose whether to create a CDB and define the CDB and PDB names. I enable the CDB and have dbca create four PDBs automatically with the name prefix PDB.

Note: I choose to use OMF (Oracle Managed Files).

Note: I had planned to enable the FRA with the path +FRA, but there is not enough space for now, so I leave it unchecked and will enable it after expanding storage.

Note: this screen sets the exact memory allocation and the database character set. I left the character set unchanged, so it defaults to AL32UTF8; change it to suit your actual requirements.

Note: here you choose whether to configure EM; I configure it, but skip it if you do not need it. CVU is usually not configured either; I enable it here for learning purposes.

Note: set the passwords here. In my lab they are simply oracle, which is not compliant; use complex passwords in production.

Note: you can have dbca save the database creation scripts; keep them or not, as you prefer.

Note: any other failed checks at this point must not be ignored. Mine is only the single-SCAN warning again, which can be ignored.

Note: this is the summary screen; review it carefully, and go back to correct anything that is wrong. Once confirmed, start the database creation.

Note: creating an 18c database also takes maddeningly long; a DBA could queue up a couple of movies to watch while waiting.

At this point, the Oracle 18.3 RAC database has been created successfully.

4.5 Verify Cluster Status with crsctl

Log in as grid and run crsctl stat res -t; the DB resources on each node are now Open.

[grid@db40 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.FRA.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.MGMT.GHCHKPT.advm
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.MGMT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.OCRVT.dg
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.chad
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.helper
               OFFLINE OFFLINE      db40                     IDLE,STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.mgmt.ghchkpt.acfs
               OFFLINE OFFLINE      db40                     STABLE
               OFFLINE OFFLINE      db42                     STABLE
ora.net1.network
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.ons
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
ora.proxy_advm
               ONLINE  ONLINE       db40                     STABLE
               ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db42                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       db42                     169.254.7.154 10.0.0.42,STABLE
ora.asm
      1        ONLINE  ONLINE       db40                     Started,STABLE
      2        ONLINE  ONLINE       db42                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cdb.db
      1        ONLINE  ONLINE       db40                     Open,HOME=/u01/app/oracle/product/18.3.0/db_1,STABLE
      2        ONLINE  ONLINE       db42                     Open,HOME=/u01/app/oracle/product/18.3.0/db_1,STABLE
ora.cvu
      1        ONLINE  ONLINE       db42                     STABLE
ora.db40.vip
      1        ONLINE  ONLINE       db40                     STABLE
ora.db42.vip
      1        ONLINE  ONLINE       db42                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       db42                     Open,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       db42                     STABLE
ora.rhpserver
      1        OFFLINE OFFLINE                               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       db42                     STABLE
--------------------------------------------------------------------------------
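For a database-centric view, srvctl can also be used (the database name cdb matches what dbca created above):

srvctl status database -db cdb    # expect: instance cdb1 running on db40, cdb2 running on db42
srvctl config database -db cdb    # shows the home, spfile location, disk groups, and management policy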

Log in as oracle and connect with sqlplus / as sysdba:

[oracle@db40 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Sun Aug 5 16:04:42 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL> select inst_id, name, open_mode from gv$database;

   INST_ID NAME      OPEN_MODE
---------- --------- --------------------
         1 CDB       READ WRITE
         2 CDB       READ WRITE

SQL> show con_id

CON_ID
------------------------------
1

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDB2                           READ WRITE NO
         5 PDB3                           READ WRITE NO
         6 PDB4                           READ WRITE NO

SQL> alter session set container = pdb4;

Session altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         6 PDB4                           READ WRITE NO

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/system.292.983371593
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/sysaux.293.983371593
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/undotbs1.291.983371593
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/undo_2.295.983372151
+DATA/CDB/72AB854658DD18D8E0532801A8C0CA21/DATAFILE/users.296.983372191

SQL>

All resources are healthy. This completes the entire installation of Oracle 18.3 GI & RAC on OEL 7.5.
