bay - Install Oracle 12C RAC on CentOS 7.txt



    2019-06-26 10:29:42  bayaim - RAC, 4th build attempt
    VMware vSphere Client 6.0

    ------------------------------------------------------------------------------------------
    Database  Instance  Node  VIP  scan_IP  IP  Linux account/password  Memory  SWAP  Disk

    "Oracle 12.1.0.2 RAC"  scan_IP: 10.20.100.25
    racdb1  rac1
        10.20.100.21   rac1       --eth0    bayaim  10G  10G  450G
        10.20.100.23   rac1-vip   --eth0:1
        192.168.100.21 rac1-priv  --eth1
    racdb2  rac2
        10.20.100.22   rac2       --eth0    bayaim  10G  10G  450G
        10.20.100.24   rac2-vip   --eth0:1
        192.168.100.22 rac2-priv  --eth1

    ------------------------------------------------------------------------------------------


    Machine 1:
    root/bayaim
    oracle/oracle
    grid/grid

    oracle:
    set one common password for the sys, system, dbsnmp and sysman users
    sys/bayaim
    system/bayaim
    ASM password: bayaim
    ---------------------------------------------
    >>>>>>1. Installing CentOS: choose language english / U.S.English to avoid garbled characters
    [oracle@testoracle database]$ export LANG=EN
    echo $LANG
    vi /etc/sysconfig/i18n

    echo LANG=zh_CN.gbk
    locale -a |grep en

    export LANG=en_US
    ---------------------------------------------
    >>>>>>2.
    [root@rac2 ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    ----------------------------------------
    # rac1
    10.20.100.21 rac1
    10.20.100.23 rac1-vip
    192.168.100.21 rac1-priv

    # rac2
    10.20.100.22 rac2
    10.20.100.24 rac2-vip
    192.168.100.22 rac2-priv

    # scan-ip
    10.20.100.25 scan-cluster

    ===============================================>>>>>>>>>>>>>
    >>>>>>3. Part I: Install the operating system
    >>node1:
    ---------------------------------------
    2 CPU cores
    2 GB of RAM
    2 NICs
    60 GB disk
    Choose "edit settings before boot"
    Select the ISO image (OL / RedHat 5.5)
    Press Enter to start the installer
    SKIP the media check
    NEXT
    ENGLISH
    US.english
    Select "new"
    swap partition 4G: 4000M
    Two NICs inside: configure them manually, and untick Enable IPv6
    Enter the hostname: node1.localdomain
    Enter the gateway: 172.16.15.254
    next, skip DNS
    Time zone: Asia/Shanghai
    Set the root password
    next
    Select the software packages
    GNOME..... select X tools
    Skip all virtualization and clustering packages

    ----------------------------------------------
    >>>>>>4. Disk planning - this is a test environment:
    Redundancy policy, volume layout and sizes
    OCRVOTING
    OCR1 200M
    OCR2 200M
    OCR3 200M

    DATA
    Data1 40G
    Data2 4G
    Data3 2G

    Configure the FLASH disk group for the flash recovery area
    FLASH 3G
    FLASH 3G
    FLASH 3G

    ----------------------------------------------
    Second virtual machine:
    Adding the first disk:
    1. Create: "Use an existing virtual disk", browse to the disk file of VM 1, change VM 2's virtual device node to "SCSI 1:0", and set the mode to "Independent"
    2. In VM 2 a new SCSI controller appears automatically; use "Change Type" to switch it to "LSI Logic SAS" (on my machine it was "Parallel")
    Add the remaining disks the same way (note: every 3 disks, move to the next device node: SCSI 1:0, SCSI 2:0, SCSI 3:0); a .vmx sketch follows below
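
    For reference, a minimal sketch of the .vmx entries this kind of shared-disk setup typically produces (the file paths, controller numbers and sharing flags below are assumptions for illustration, not values taken from this install):

    # assumed example fragment of VM 2's .vmx - adjust to your own datastore layout
    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsisas1068"        # LSI Logic SAS controller
    scsi1.sharedBus = "virtual"            # share the bus between the two VMs
    disk.locking = "FALSE"                 # allow both VMs to open the vmdk
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "/vmfs/volumes/datastore1/rac1/ocr1.vmdk"
    scsi1:0.mode = "independent-persistent"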

    >>>>>>4.0 Disable SELinux! Disable the firewall ==============================================

    SELINUX
    vi /etc/sysconfig/selinux
    (originally: SELINUXTYPE=targeted )
    SELINUX=disabled

    [root@ums-data mysql]# setenforce 0 (turn off SELinux enforcement immediately)
    [root@ums-data mysql]# getenforce
    Disabled

    >>>bayaim1.0------Disable the firewall:>>>>>>>>

    Firewall
    chkconfig iptables off
    chkconfig --level 35 NetworkManager off
    iptables -F (flush the default firewall rules)

    # add to the sysV services : chkconfig --add mysqld
    # start on boot : chkconfig --level 35 mysqld on
    # chkconfig --list | grep mysql

    systemctl status firewalld.service //check the firewall status
    systemctl stop firewalld //stop it
    systemctl stop firewalld.service (stop the firewall; this is the CentOS 7 command)
    systemctl start firewalld //start it
    systemctl disable firewalld //do not start at boot


    Switching the GNOME desktop on or off:
    [root@localhost ~]# vi /etc/inittab
    3 -- command line
    5 -- desktop
    then reboot the system, or run:
    #init 3


    2.0 Note: after changing the hostname ========================
    [root@DB-mysql1-c mysql]# vi /etc/hosts

    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.1.1.11 DB-manger2

    vi /etc/sysconfig/network

    --- change the hostname
    NETWORKING=yes
    HOSTNAME=rac1

    hostname rac1
    hostname
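
    On CentOS 7 the persistent hostname lives in /etc/hostname rather than /etc/sysconfig/network, so the same change is usually done with:

    hostnamectl set-hostname rac1    # writes /etc/hostname
    hostnamectl status               # verify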

    3.0 NICs:=======================================

    systemctl start NetworkManager
    systemctl restart network //restart the network-----------

    // edit the NIC config (a sample file follows below)---------
    [root@rac1 network-scripts]# pwd
    /etc/sysconfig/network-scripts
    [root@rac1 network-scripts]# cat ifcfg-ens256
    [root@dns~]# cat /etc/resolv.conf

    [root@dns ~]# chattr +i /etc/resolv.conf
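
    For reference, a minimal static-IP ifcfg sketch (the device name, addresses and gateway below are placeholder assumptions, not values from this install):

    # /etc/sysconfig/network-scripts/ifcfg-ens256 -- example values
    TYPE=Ethernet
    DEVICE=ens256
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=10.20.100.21
    NETMASK=255.255.255.0
    GATEWAY=10.20.100.254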



    Add routing rules:========================>>>>>>>>>>>>>>>

    route add -net 172.28.22.0 netmask 255.255.255.0 gw 172.28.20.254 dev eth1
    route add -net 172.28.22.0 netmask 255.255.255.0 gw 172.28.20.254 dev eth2

    (4) Check the interface alias bindings

    /sbin/ifconfig eth0:1 192.168.66.150 netmask 255.255.255.0 up
    /sbin/ifconfig eth0:1 192.168.66.149 netmask 255.255.255.0 down
    ip add

    DNS setup for internet access:
    [root@dns~]# cat /etc/resolv.conf
    search localdomain
    nameserver 114.114.114.114

    4.0 yum: mount the install DVD================================================================

    Mount the cdrom
    # mkdir /mnt
    # mount /dev/cdrom /mnt
    or
    # mount /dev/cdrom /media/cdrom

    Set up yum against the local DVD:

    1. Check whether yum is installed
    # rpm -qa |grep yum
    RedHat installs yum by default.
    2.
    [root@rac1 Server]# vi /etc/yum.repos.d/rhel-source.repo

    [rhel-source]
    name=localyum
    baseurl=file:///mnt/cdrom/Server
    enabled=1
    gpgcheck=0
    gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release

    # This points at the Server directory; if you need more packages, add baseurls for Cluster, ClusterStorage and VT as well.

    4.0 Zhao Huanyu's RAC repo variant:
    [root@DB-mysql1-z yum.repos.d]# cd /etc/yum.repos.d
    [root@DB-mysql1-z yum.repos.d]# vi my.repo
    [name]
    name=my new repo
    baseurl=file:///mnt
    enabled=1
    gpgcheck=0

    3. Clear the cache
    [root@kangvcar ~]# yum clean all
    [root@kangvcar ~]# yum makecache //cache the yum metadata locally to speed up searching and installing
    [root@kangvcar ~]# yum list //listed 3780 packages

    4. Test the repo with an install

    #yum -y install gcc

    ------------------------------------------------------------------
    Note: a disk managed by asmlib must not also be configured as a raw device, otherwise mounting the disk group on the rac2 instance will fail when the ASM disk group is created.
    Either go pure raw device without asmlib: configure the other 2 OCR disks and 3 voting disks as raw devices, 100M each;
    or use asmlib: map the three 10G disks to VOL1, VOL2 and VOL3 and later build the ASM disk group for the data files on them.

    -------------Issue 4: [changing the character set]-------------------------
    ./runInstaller
    the GUI comes up with garbled characters:

    echo $LANG
    vi /etc/sysconfig/i18n
    echo LANG=zh_CN.gbk
    locale -a |grep en
    export LANG=en_US

    # run on node 1 only
    $export LANG=en_US


    Plan the number and size of the shared disks
    [root@rac1 home]# fdisk -l | grep /dev/sd
    [root@localhost ~]# lsblk

    3. Configure users and groups

    groupadd oinstall
    groupadd dba
    groupadd oper
    groupadd asmadmin
    groupadd asmoper
    groupadd asmdba
    useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "grid Infrastructure Owner" grid
    echo "grid" | passwd --stdin grid
    useradd -u 1101 -g oinstall -G dba,oper,asmdba,asmadmin -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle
    echo "oracle" | passwd --stdin oracle
    ============================================================================================
    (Note: the configuration must be identical on both nodes!! Otherwise the grid prerequisite checks will fail later.)

    useradd -g oinstall -G dba,oper,asmdba,asmadmin oracle
    useradd -g oinstall -G asmadmin,asmdba,asmoper,dba grid
    echo oracle | passwd --stdin oracle
    echo grid | passwd --stdin grid

    usermod -u 1100 grid
    usermod -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "grid Infrastructure Owner" grid

    Fix the users' supplementary groups, otherwise the ASM groups will not be found later:
    usermod -a -G dba,oper,asmdba,asmadmin oracle
    usermod -a -G asmadmin,asmdba,asmoper grid
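
    To confirm on both nodes that the membership took effect (standard commands, output omitted):

    id oracle   # should list dba, oper, asmdba, asmadmin besides oinstall
    id grid     # should list asmadmin, asmdba, asmoper besides oinstall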

    ============================================================================================


    4. Create the directories and configure the grid and oracle user profiles

    mkdir -p /u01/app/oracle
    mkdir -p /u01/app/grid
    mkdir -p /u01/app/11.2.0/grid
    chown -R grid:oinstall /u01
    chown -R oracle:oinstall /u01/app/oracle
    chmod -R 775 /u01

    ============================================================================================
    ----------Oracle User----------
    The file is: vi /home/oracle/.bash_profile

    export PS1="`/bin/hostname -s`-> "
    export TMP=/tmp
    export TMPDIR=$TMP
    export ORACLE_SID=racdb1
    export ORACLE_UNQNAME=racdb
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
    export ORACLE_TERM=xterm
    export PATH=$PATH:$ORACLE_HOME/bin
    export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    export PATH=$PATH:$ORACLE_HOME/bin
    export EDITOR=vi
    export LANG=en_US
    export NLS_LANG=american_america.AL32UTF8
    export PATH
    umask 022


    Apply it:
    source .bash_profile

    ============================================================================================
    ----------GRID User----------
    The file is: vi /home/grid/.bash_profile

    export PS1="`/bin/hostname -s`-> "
    export TMP=/tmp
    export TMPDIR=$TMP
    export ORACLE_SID=+ASM1
    export ORACLE_BASE=/u01/app/grid
    export ORACLE_HOME=/u01/app/11.2.0/grid
    export ORACLE_TERM=xterm
    export NLS_DATE_FORMAT='yyyy/mm/dd hh24:mi:ss'
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    export PATH=$PATH:$ORACLE_HOME/bin
    export EDITOR=vi
    export LANG=en_US
    export NLS_LANG=american_america.AL32UTF8
    export PATH
    umask 022

    Apply it:
    source .bash_profile

    ============================================================================================
    [oracle@localhost ~]$ vi /etc/security/limits.conf
    Append the following at the end of the file:
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    grid soft nproc 2047
    grid hard nproc 16384
    grid soft nofile 1024
    grid hard nofile 65536

    Issue: if you skip this step the installer later complains:
    Hard limits for "maximum open file descriptors"

    ============================================================================================
    [oracle@localhost ~]$ vi /etc/pam.d/login

    Append the following at the end of the file:
    session required /lib64/security/pam_limits.so
    #session required /lib/security/pam_limits.so
    session required pam_limits.so

    Note: on a 64-bit system you must write /lib64/security/pam_limits.so.
    If you write /lib/security/pam_limits.so (the 32-bit path) instead, you will no longer be able to
    log in on the VM's local text console.

    ============================================================================================
    [oracle@localhost ~]$ vi /etc/profile

    if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
            ulimit -p 16384
            ulimit -n 65536
        else
            ulimit -u 16384 -n 65536
        fi
    fi

    ============================================================================================
    [root@jssnode1Server]# vi /etc/sysctl.conf
    Append the following to the file:

    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmall = 8160280
    kernel.shmmax = 33424509440
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 4194304
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304

    [Make the kernel parameters take effect immediately]
    [root@linuxserv7 ~]#sysctl -p

    =================================================================
    Part III: stop the ntp service (a new 11gR2 prerequisite check) and let the cluster time synchronization service take over

    [root@node1 ~]# service ntpd status
    [root@note1 ~] #service ntpd stop
    [root@note1 ~] #chkconfig --level 2345 ntpd off
    [root@rac2 ~]# rm -rf /etc/ntp.conf
    [root@rac1 ~]# chkconfig ntpd --list

    Run the same commands on the other nodes to remove NTP.
    After the cluster is installed, confirm that ctssd is active:
    [grid@note1 ~] #crsctl check ctss


    Issue 1:
    [root@rhel5 ~]# iscsiadm -m discovery -t st -p 192.168.0.10 ##probe the iscsi volumes newly shared from openfiler
    -bash: iscsiadm: command not found
    [root@rhel5 ~]# mount /dev/cdrom /media --mount the DVD and install the iscsi-initiator rpm

    Fix:
    [root@rac1 Packages]# pwd
    /mnt/cdrom/Packages
    [root@rac1 Packages]# yum install iscsi* #iscsi-initiator-utils-6.2.0.868-0.18.el5.i386.rpm

    [root@iscsi-asm~] # rpm -ivh scsi-target-utils*.rpm

    // On RedHat 6 the rpms are in the Packages directory of the DVD
    // On RedHat 5 they are in the ClusterStorage directory

    By default the iscsi initiator and target talk over port 3260. Assuming the target IP is 192.168.1.1, run:
    # chkconfig iscsi on
    # chkconfig iscsi --list (check the iscsi startup state)
    # chkconfig --list |grep iscsi ##check all iscsi-related services
    # service iscsi status
    # service iscsi start ##start the iscsi service


    # iscsiadm -m discovery -t sendtargets -p 10.20.4.215:3260

    2. Attach the iSCSI disks

    A. On node 1 (note1): --nodeps --force

    [root@note1 ~] # rpm -ivh iscsi-initiator-utils-6.2.0.873-10.el6.x86_64.rpm --nodeps --force
    [root@note1 ~] # service iscsid restart //restart the iscsi daemon
    [root@note1 ~] # chkconfig --level 2345 iscsid on //start at boot
    [root@note1 ~] # chkconfig --list iscsid //check the startup entries
    [root@note1 ~] # iscsiadm -m node -p 172.16.1.20 -l //log in to the iscsi storage

    B. On node 2 (note2):
    [root@note2 ~] # rpm -ivh iscsi-initiator-utils*.rpm
    [root@note2 ~] # service iscsid restart //restart the iscsi daemon
    [root@note2 ~] # chkconfig --level 2345 iscsid on //start at boot
    [root@note2 ~] # chkconfig --list iscsid //check the startup entries
    [root@note2 ~] # iscsiadm -m node -p 172.16.1.20 -l //log in to the iscsi storage

    yum list | grep lsscsi
    yum list | grep iscsi
    yum install iscsi*
    yum -y install iscsi-initiator-utils


    [2] Start the iscsi service
    # systemctl start iscsi
    [3] Enable it at boot
    # systemctl enable iscsi
    systemctl status iscsi

    On rhel7, lsscsi --scsi_id returns the scsi identifier of every scsi device

    [root@localhost etc]# lsscsi --scsi_id


    ============================================================================================
    2.14.1 Format the shared disks
    As root, run fdisk on both nodes to view the existing partition layout:

    node1:
    [root@node1 ~]# fdisk -l

    node2:
    [root@node2 ~]# fdisk -l
    As root on node1, partition the 4 disks /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde
    [root@node1 ~]# fdisk /dev/sdb
    n creates a new partition;
    p selects primary partition as the type;
    1 makes it partition number 1;
    500 accept the defaults for the first and last cylinders, here 1 and 500;
    w writes the new partition table to disk.

    [root@node1 ~]#
    Repeat the step above as root on node1 for the other 3 disks (a scripted version follows below).
    Once done, node1 and node2 should both show the new partitions:
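
    A non-interactive sketch of the same keystrokes, assuming each disk becomes one whole-disk primary partition (double-check the device list before running):

    # feed fdisk the answers n, p, 1, <default>, <default>, w for each disk
    for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
        printf 'n\np\n1\n\n\nw\n' | fdisk "$d"
    done
    partprobe    # make the kernel re-read the partition tables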


    # run on node 1 only
    [grid@cheastrac01:~]$ export LANG=en_US

    Plan the number and size of the shared disks

    fdisk -l | grep /dev/sd
    lsblk


    8. Configure UDEV:

    The following must be done on both nodes ------------------------------------

    Issue:
    getting a disk UUID with scsi_id inside VMware

    Fix:
    1. Shut the VM down and go to the VM's directory
    2. Edit the vmx file with a text editor and add the following line anywhere (usually at the end):
    disk.EnableUUID = "TRUE"
    3. Restart the VM; scsi_id now returns the SCSI ID correctly

    ============================================================================================

    It is easiest to dump every disk's UUID to x.log with a script and then column-edit the asm rules from it
    cd /bai
    vi test.sh
    Create this script to collect the UUIDs:

    #!/bin/sh
    # print one udev rule per disk; the backquoted command captures each disk's UUID
    for i in b c d e f g h i j; do
        echo "KERNEL==\"sd$i\", SUBSYSTEM==\"block\", PROGRAM==\"/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
    done


    #!/bin/sh
    # RHEL 5/6 variant using /sbin/scsi_id and NAME= instead of SYMLINK+=
    for i in b c d e f g h i j; do
        echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
    done

    chmod 755 test.sh
    ./test.sh > x.log


    KERNEL=="sdb1",SUBSYSTEM=="block",PROGRAM=="/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="36000c29098f786549d5d024e01626b87",SYMLINK+="asm-diskb1",OWNER="grid",GROUP="asmadmin",MODE="0660"

    On redhat 7.x the command is:

    /usr/lib/udev/scsi_id -g -u /dev/sdb
    /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb

    On redhat 6.x:
    //fetch the disk UUID

    /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb    or    #/sbin/scsi_id -g -u /dev/sda

    View disk UUIDs:
    lsblk
    ls -la /dev/disk/by-id
    ls -l /dev/disk/by-uuid
    blkid /dev/sda5

    Boot-disk binding, change in grub:==============>>>>>>>
    vi /etc/grub.conf
    boot=/dev/sda


    # Add a record to /etc/scsi_id.config
    Edit /etc/scsi_id.config (create it if it does not exist) and add the following line:
    [root@rac1 dev]# echo "options=--whitelisted --replace-whitespace" >> /etc/scsi_id.config

    Create the rules file:
    # cd /etc/udev/rules.d
    # vi /etc/udev/rules.d/99-oracle-asmdevices.rules

    Paste the contents of x.log into 99-oracle-asmdevices.rules

    /sbin/udevadm control --reload-rules
    systemctl restart systemd-udev-trigger.service
    udevadm trigger
    systemctl restart systemd-udevd.service
    partprobe
    ls -l /dev/asm*


    4. Reboot the system

    systemctl reboot

    shutdown -r now
    reboot


    Reload UDEV:
    sudo /etc/init.d/udev-post reload
    sudo udevcontrol reload_rules

    Start UDEV on rac1:

    #start_udev
    or
    #/sbin/start_udev
    Starting udev: [ OK ]


    [root@rac1 rules.d]# ls -l /dev/asm*
    0 brw-rw---- 1 grid asmadmin 8, 48 Apr 30 14:12 /dev/asm-diskc
    0 brw-rw---- 1 grid asmadmin 8, 64 Apr 30 14:12 /dev/asm-diskd
    0 brw-rw---- 1 grid asmadmin 8, 80 Apr 30 14:12 /dev/asm-diske
    0 brw-rw---- 1 grid asmadmin 8, 96 Apr 30 14:12 /dev/asm-diskf
    0 brw-rw---- 1 grid asmadmin 8, 112 Apr 30 14:12 /dev/asm-diskg
    0 brw-rw---- 1 grid asmadmin 8, 128 Apr 30 14:12 /dev/asm-diskh
    0 brw-rw---- 1 grid asmadmin 8, 144 Apr 30 14:12 /dev/asm-diski
    0 brw-rw---- 1 grid asmadmin 8, 160 Apr 30 14:12 /dev/asm-diskj
    0 brw-rw---- 1 grid asmadmin 8, 176 Apr 30 14:12 /dev/asm-diskk

    Copy the rules to rac2:

    [root@rac1 rules.d]# scp /etc/udev/rules.d/99-oracle-asmdevices.rules 10.20.4.216:/etc/udev/rules.d/
    root@10.20.4.216's password:
    99-oracle-asmdevices.rules 100% 1945 1.9KB/s 00:00
    You have new mail in /var/spool/mail/root

    Start UDEV on rac2
    #/sbin/start_udev
    [root@rac2 rules.d]# ls -l /dev/asm*

    =======================================================================================================================
    Part II: Install Grid
    Connect to node 1 with Xmanager and set DISPLAY.

    Preparing node 2
    Basic preparation is done on node1; repeat the work of sections 2.2 through 2.10 on node2 to prepare it.
    Note: section 2.3 (SCAN IP configuration) is already done for node 2 and can be skipped; in section 2.4 adjust the environment variables accordingly.

    <<<<< Configure SSH user equivalence for the oracle and grid users >>>>>>>>>
    Configure equivalence for the oracle user

    10.1.1.21 Woyee@121
    10.1.1.22 Woyee@122

    node1:

    [root@rac1 ~]# su - oracle
    rac1-> mkdir ~/.ssh
    rac1-> chmod 700 ~/.ssh
    rac1-> ls -al
    rac1-> ssh-keygen -t rsa (press Enter three times)
    rac1-> ssh-keygen -t dsa (press Enter three times)

    node2:

    [root@rac1 ~]# su - oracle
    rac2-> mkdir ~/.ssh
    rac2-> chmod 700 ~/.ssh
    rac2-> ls -al
    rac2-> ssh-keygen -t rsa (press Enter three times)
    rac2-> ssh-keygen -t dsa (press Enter three times)

    Back on node 1:

    rac1-> cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
    rac1-> cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
    rac1-> ssh rac2 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
    oracle@rac2's password:
    rac1-> ssh rac2 cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
    oracle@rac2's password:
    rac1-> scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
    oracle@rac2's password:

    Verify oracle SSH equivalence:
    run the following on both node1 and node2; only the first run should prompt for a password (see the loop sketch after this list):
    ssh rac1 date
    ssh rac2 date
    ssh rac1-priv date
    ssh rac2-priv date
    ssh rac1-vip date
    ssh rac2-vip date
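
    A minimal loop that runs the whole matrix from one shell (run it as oracle and again as grid, on each node):

    for h in rac1 rac2 rac1-priv rac2-priv rac1-vip rac2-vip; do
        ssh "$h" date
    done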

    Back on node 1:

    rac1-> ssh rac2-vip date
    Tue Apr 9 19:37:06 CST 2019
    rac1-> ssh rac1-vip date
    Tue Apr 9 19:37:07 CST 2019
    rac1-> ssh rac1-priv date
    Tue Apr 9 19:37:14 CST 2019
    rac1-> ssh rac2-priv date
    Tue Apr 9 19:37:18 CST 2019
    rac1-> ssh rac2 date
    Tue Apr 9 19:37:22 CST 2019
    rac1-> ssh rac1-vip date
    Tue Apr 9 19:37:25 CST 2019
    rac1->
    Back on node 2:
    run the same tests there too:

    ============================================================================================

    Nothing to do
    [root@rac2 network-scripts]# systemctl restart sshd


    This completes SSH equivalence for the oracle user! Repeat the steps above to configure it for the grid user.
    <<<<< Configure SSH user equivalence for the oracle and grid users >>>>>>>>>
    [root@rac1 ~]# su - grid
    node1:
    node2:
    Repeat the steps above.


    Wipe the ASM raw devices:
    [root@rac1 utl]# ll /dev/asm*
    [root@rac1 ~]# dd if=/dev/zero of=/dev/asm-diskb bs=1M count=256


    After installing, run rpm -qa|grep asm to confirm the asm packages are in place.

    [root@rac1 trace]# ls *.log
    alert_+ASM1.log
    [root@rac1 trace]# pwd
    /u01/app/grid/diag/asm/+asm/+ASM1/trace

    View the CRS log

    [root@rac1 ~]# find / -name crsd.log
    /u01/app/11.2.0/grid/log/rac1/crsd/crsd.log

    ============================================================================================
    2.5 Unpack the installation media
    node1:

    Oracle Database (includes Oracle Database and Oracle RAC) needs at least these two archives:

    p13390677_112040_Linux-x86-64_1of7.zip
    p13390677_112040_Linux-x86-64_2of7.zip

    Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware and Oracle Restart):
    p13390677_112040_Linux-x86-64_3of7.zip

    Note: all 3 packages come from the MetaLink site and are the latest 11g patch set, 11.2.0.4.0.
    Without a MetaLink account you can download the 11.2.0.1.0 release for free from the Oracle website instead.

    Unpack the three archives:

    [root@rac1 bai]# unzip p13390677_112040_Linux-x86-64_1of7.zip
    [root@rac1 bai]# unzip p13390677_112040_Linux-x86-64_2of7.zip
    [root@rac1 bai]# unzip p13390677_112040_Linux-x86-64_3of7.zip
    [root@rac1 bai]# du -sh database/
    2.5G database/
    [root@rac1 bai]# du -sh grid/
    1.3G grid/

    To make the installs easier later, chown the trees and move them into the oracle and grid home directories:
    chown -R grid:oinstall /bai/grid/
    chown -R oracle:oinstall /bai/database/
    mv database/ /home/oracle/
    mv grid/ /home/grid/

    ============================================================================================

    2.6 Pre-install prerequisite packages and checks
    yum -y install unixODBC*
    yum -y install xorg-x11-apps
    yum -y install libXp*
    yum -y install pdksh

    rpm -ivh compat-lib* --nodeps --force >>>>> the forced-install method
    [root@rac1 bai]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm --nodeps --force
    [root@rac1 bai]# rpm -ivh cvuqdisk-1.0.9-1.rpm

    Install the dependency packages; with yum configured this is the easy way:

    yum install -y unixODBC* xorg-x11-apps libXp* pdksh binutils compat-libstdc++-33 glibc ksh libaio libgcc libstdc++ make compat-libcap1 gcc gcc-c++ glibc-devel libaio-devel libstdc++-devel sysstat elfutils-libelf-devel

    yum install binutils compat-db compat-libstdc++-33 compat-libstdc++-296 compat-gcc-34-c++ compat-gcc-34 control-center elfutils-libelf elfutils-libelf-devel elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers gnome-screensaver kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel libXp make numactl-devel openmotif openmotif22 pdksh rsh setarch sysstat unixODBC unixODBC-devel xorg-x11 xorg-x11-apps libcap

    ============================================================================================

    Run the installer as the grid user in the GUI
    #su - grid
    node1-> ./runInstaller


    Before installing GRID, run CVU (Cluster Verification Utility) to check the CRS pre-install environment.

    Check the CRS pre-install environment with CVU:
    # su - grid
    node1-> cd grid/
    node1-> ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
    (note: the arguments are node names, not instance names)

    Please run the following script on each node as "root" user to execute the fixups:
    '/tmp/CVU_11.2.0.3.0_grid/runfixup.sh'
    Pre-check for cluster services setup was unsuccessful on all the nodes.
    node1->
    The pre-check above fails; the real cause is that the grid user is not in the dba group! Oracle supplies a fix-up script, so as prompted, run /tmp/CVU_11.2.0.3.0_grid/runfixup.sh as root on both nodes.


    At this point the environment is fully prepared!!!

    secureCRT is an excellent client for linux, but it has no X11 support, which is painful whenever a GUI
    is needed; a simple workaround is Xmanager's Passive mode.
    The warning "Warning: Missing charsets in String to FontSet conversion" can be ignored; it is only a character-set conversion issue. Before running netca, set

    -----------------------------------------------------------------------------------

    export LANG=en_US to switch to an English environment.

    bayaim: start Xmanager - Passive and leave it running in the background


    Find the local Windows IP via "CMD" - ipconfig: IPv4 address . . . . . . . . . . . . : 172.28.22.16

    rac1-> source .bash_profile
    rac1-> export DISPLAY=172.28.22.16:0.0
    rac1-> export LANG=en_US
    rac1-> xhost +
    rac1-> xclock
    rac1-> ./runInstaller

    ---------------------------------------------------------------------------------
    Note:
    minimum number of disks per ASM redundancy level in 11gr2:
    external >= 1
    normal >= 3
    high >= 5

    Installation steps in detail (you won't find these online, bayaim~!!!)=-========
    1/8. choose the first option: install and configure a cluster
    2/8. choose the second option: advanced installation
    3/8. choose "english"
    4/16. change 1. cluster name to: scan-cluster
    change 2. scan name to: scan-cluster
    change 3. untick configure GNS
    6/16. click "ADD" and add 1. hostname: rac2
    2. VIP name: rac2-vip
    click SSH connectivity, enter the grid user password (grid), then click "test" to check ssh equivalence
    (do not just trust the Oracle installer; test it yourself)
    7/16. confirm which NIC serves which subnet, next
    8/16. choose the first option: automatic ASM
    9/16. configure asm: 1. enter the name: OCR, normal redundancy, AU size 1M
    2. click "change discovery path", enter /dev/asm* and press Enter
    3. make sure every ASM disk shows status Candidate
    10/16. Note:
    give ASM's SYS and ASMSNMP users the same password, enter it (bayaim) and click "YES"
    11/17. choose the second option: do not use IPMI
    12/17. change 1. OSDBA to: asmdba
    change 2. OSOPER to: asmoper
    change 3. OSASM to: asmadmin
    13/17. defaults: leave $ORACLE_BASE and $GRID_BASE alone, next
    15/18. default, next
    Now comes the check for missing packages~!!!!
    Then run root.sh

    -----------------------------------------------------------------------------------
    Issue 1:
    This task verifies cluster time synchronization on clusters that use Network Time Protocol (NTP). (more details)

    Fix:
    # service ntpd stop
    Shutting down ntpd: [ OK ]
    # chkconfig --level 2345 ntpd off
    # rm -rf /etc/ntp.conf

    Run the same commands on the other nodes to remove NTP

    After the cluster is installed, confirm that ctssd is active
    #crsctl check ctss
    -----------------------------------------------------------------------------------

    Issue 1:
    Device Checks for ASM
    This is a pre-check to verify if the specified devices meet the requirements for configuration through the Oracle Universal Storage Manager
    Fix: no DNS is configured, so this error can be ignored. Ignoring it pops up a confirmation dialog; just confirm.

    -----------------------------------------------------------------------------------
    Issue 2:
    This is a prerequisite condition to test whether sufficient total swap space is available on the system

    Fix:
    4) Enlarge the swap space:------------

    The following needs root.

    #cd /usr/
    #mkdir swap
    #cd swap
    #dd if=/dev/zero of=swapfile bs=2G count=4
    dd if=/dev/zero of=swapfile bs=2M count=1024

    The first command carves an 8 GB (2G x 4) file out of the disk at swapfile; the second variant makes 2 GB.

    #mkswap swapfile

    formats /usr/swap/swapfile as swap

    #swapon swapfile

    activates swapfile and adds it to the swap pool.

    # vi /etc/fstab
    add a line like this to /etc/fstab:

    /usr/swap/swapfile swap swap defaults 0 0

    bs=bytes: read and write bytes at a time (the block size).
    count=blocks: copy only this many blocks, each of the size given by bs.
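
    A quick way to confirm the extra swap is visible (standard commands):

    swapon -s    # list active swap areas, including /usr/swap/swapfile
    free -m      # total swap should now include the new file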

    After installing cvuqdisk-1.0.9-1 on all nodes, re-run the pre-check; the warnings are gone.

    ------------------------------------------------------------------------
    Issue 3:
    the 3 disks prepared for the CRS disk group are 200M each; selecting them for the CRS disk group reports:
    INS-30515: Insufficient space available in the selected disks.
    Cause: Insufficient space available in the selected Disks. At least, string MB of free space is required.
    There is not enough space on the selected disks.

    Fix: recreate them as 3 disks of 1G each.

    =================================================================================

    Order matters: run on the first node first, then on the other nodes one at a time, never in parallel
    As root, run the scripts on both nodes as prompted:

    node1:
    [root@rac1 ~]#sh /u01/app/oraInventory/orainstRoot.sh
    node2:
    [root@rac2 ~]#sh /u01/app/oraInventory/orainstRoot.sh
    node1:
    [root@rac1 ~]#sh /u01/app/11.2.0/grid/root.sh
    press Enter once when prompted
    node2:
    [root@rac2 ~]#sh /u01/app/11.2.0/grid/root.sh
    press Enter once when prompted

    =================================================================================
    Expected output on rac1:

    [root@oracle-rac01 lib64]# /u01/app/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g

    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    User ignored Prerequisites during installation
    Installing Trace File Analyzer
    OLR initialization - successful
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding Clusterware entries to upstart
    CRS-2672: Attempting to start 'ora.mdnsd' on 'oracle-rac01'
    CRS-2676: Start of 'ora.mdnsd' on 'oracle-rac01' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'oracle-rac01'
    CRS-2676: Start of 'ora.gpnpd' on 'oracle-rac01' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oracle-rac01'
    CRS-2672: Attempting to start 'ora.gipcd' on 'oracle-rac01'
    CRS-2676: Start of 'ora.cssdmonitor' on 'oracle-rac01' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'oracle-rac01' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'oracle-rac01'
    CRS-2672: Attempting to start 'ora.diskmon' on 'oracle-rac01'
    CRS-2676: Start of 'ora.diskmon' on 'oracle-rac01' succeeded
    CRS-2676: Start of 'ora.cssd' on 'oracle-rac01' succeeded

    ASM created and started successfully.

    Disk group DATA created successfully.

    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 97aa7b7c2eb24fedbf4026f3fd9d184e.
    Successfully replaced voting disk group with +DATA.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    -- ----- ----------------- --------- ---------
    1. ONLINE 97aa7b7c2eb24fedbf4026f3fd9d184e (/dev/asm-disk1) [DATA]
    Located 1 voting disk(s).
    CRS-2672: Attempting to start 'ora.asm' on 'oracle-rac01'
    CRS-2676: Start of 'ora.asm' on 'oracle-rac01' succeeded
    CRS-2672: Attempting to start 'ora.DATA.dg' on 'oracle-rac01'
    CRS-2676: Start of 'ora.DATA.dg' on 'oracle-rac01' succeeded
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded ~!!!!!!
    ---------------------------------------------------------------
    Only with this output does the install count as successful.
    At this point the clusterware services are up, and the ASM instances are started on both nodes.

    Expected output on rac2:
    [root@rac2 ~]# sh /u01/app/11.2.0/grid/root.sh
    Performing root user operation for Oracle 11g

    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...


    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    User ignored Prerequisites during installation
    Installing Trace File Analyzer
    OLR initialization - successful
    Adding Clusterware entries to upstart
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded ~!!!!!!

    =================================================================================

    Issue 1:
    /u01/app/grid/11.2.0/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
    Failed to create keys in the OLR at /u01/app/grid/11.2.0/crs/install/crsconfig_lib.pm line 7497.
    Method 1:
    installing the libcap packages fixes this; install them, then re-run root.sh
    yum -y install libcap*

    Method 2:
    [root@rac1 lib64]# find / -name libcap.so.2
    /lib64/libcap.so.2
    [root@rac1 lib64]# cd /lib64/
    [root@rac1 lib64]# ln -s libcap.so.2.16 libcap.so.1
    [root@rac1 lib64]# ls -l libcap
    libcap-ng.so.0 libcap.so libcap.so.2
    libcap-ng.so.0.0.0 libcap.so.1 libcap.so.2.16

    -----------------------------------------------------------------------
    Issue 1:

    Run ls -l /tmp on both nodes and compare the permissions
    (note: delete the /tmp/CVU directory on RAC2 and recreate it, owned by GRID!!!)
    -----------------------------------------------------------------------
    Issue 2:
    [INS-20802] Oracle Cluster Verification Utility failed
    Cause: the scan ip already answers
    Fix:
    ping the scan ip; if it responds, ignore the error and skip.

    ---------------------------------------------------------------------------
    Run this on both nodes to check whether the clusterware came up properly:

    [root@rac1 app]# su - grid
    rac1-> crs_stat -t
    Name Type Target State Host
    ------------------------------------------------------------
    ora....N1.lsnr ora....er.type ONLINE ONLINE rac1
    ora.OCR.dg ora....up.type ONLINE ONLINE rac1
    ora.asm ora.asm.type ONLINE ONLINE rac1
    ora.cvu ora.cvu.type ONLINE ONLINE rac1
    ora.gsd ora.gsd.type OFFLINE OFFLINE
    ora....network ora....rk.type ONLINE ONLINE rac1
    ora.oc4j ora.oc4j.type ONLINE ONLINE rac1
    ora.ons ora.ons.type ONLINE ONLINE rac1
    ora....SM1.asm application ONLINE ONLINE rac1
    ora.rac1.gsd application OFFLINE OFFLINE
    ora.rac1.ons application ONLINE ONLINE rac1
    ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
    ora....SM2.asm application ONLINE ONLINE rac2
    ora.rac2.gsd application OFFLINE OFFLINE
    ora.rac2.ons application ONLINE ONLINE rac2
    ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
    ora....ry.acfs ora....fs.type ONLINE ONLINE rac1
    ora.scan1.vip ora....ip.type ONLINE ONLINE rac1

    Since we rely on CTSS for time synchronization, verify it:
    [grid@rac1 u01]$ crsctl check ctss
    CRS-4701: The Cluster Time Synchronization Service is in Active mode.
    CRS-4702: Offset (in msec): 0

    This completes the GI installation.
    ------------------------------------------------------------------------------
    [root@rac1 bin]# /u01/app/11.2.0/grid/bin/crs_stat -t


    At the very end of the install, an internal-network RAC reports one error:
    Issue 1:
    [INS-20802] Oracle Cluster Verification Utility failed.

    Cause: the scan ip already answers
    Fix:
    ping scan-cluster
    if it responds, ignore the error and skip

    =================================================================================
    //if the command is not found, check whether PATH is set

    [grid@node1 ~]$ echo $PATH


    Part V: ASM configuration
    Note: the asm configuration is done on rac1 only

    Create the ASM disk groups
    As grid, create the ASM disk groups that will provide storage for the database created in the next step.
    Log in to the GUI as grid and run asmca:
    [root@rac1 bin]# su - grid
    rac1-> env | grep ORA
    ORACLE_SID=+ASM1
    ORACLE_BASE=/u01/app/grid
    ORACLE_TERM=xterm
    ORACLE_HOME=/u01/app/11.2.0/grid
    rac1-> export DISPLAY=10.20.100.114:0.0
    rac1-> export LANG=en_US
    rac1-> xhost +
    access control disabled, clients can connect from any host
    rac1-> xclock
    rac1-> asmca
    rac1-> exit
    In the ASMCA screen, click Create to add a new disk group:

    1. name it DATA, redundancy External, select disk ORCL:VOL3, click OK; the DATA group is created, click OK:
    2. create another group named FLASH, redundancy External, disk ORCL:VOL4:
    Finally, with DATA and FLASH created, Exit leaves the ASMCA GUI:
    DATA and FLASH are now ready via ASMCA,
    and together with the GRIDDG group created earlier, all 3 disk groups are MOUNTED on both RAC nodes.

    =======================================================================
    Part VI: install the Oracle database software==========>>>>>>>>>>>>>>>>>>
    Note: the Oracle installer runs on rac1 only

    #su - oracle
    rac1-> export DISPLAY=10.20.100.128:0.0
    rac1-> export LANG=en_US
    rac1-> xhost +
    rac1-> xclock
    rac1-> ./runInstaller

    As the oracle user, connect to node 1, go to the Oracle unpack directory and install the Oracle software;
    note that this installs the software only.
    Decline the email updates
    Skip the software updates
    Install database software only
    Set up SSH -->> enter the oracle password -->> setup -->> test the ssh equivalence
    RAC database installation
    Add Simplified Chinese
    Choose Enterprise Edition
    The paths come from the environment variables set earlier
    For the oracle groups choose "oper" where offered, Next:
    The warning is caused by the missing DNS and can be ignored here
    As root, run the script on both rac1 and rac2:

    [root@rac1 lib64]# sh /u01/app/oracle/product/11.2.0/db_1/root.sh
    [root@rac2 lib64]# sh /u01/app/oracle/product/11.2.0/db_1/root.sh

    Installation complete


    =====================>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Part VII: create the RAC cluster database
    Note: DBCA runs on rac1 only

    As the oracle user,
    log in to the GUI and run dbca:
    [root@rac1 bin]# su - oracle
    rac1-> export DISPLAY=10.20.100.128:0.0
    rac1-> source .bash_profile
    rac1-> export LANG=en_US
    rac1-> xhost +
    access control disabled, clients can connect from any host
    rac1-> xclock
    rac1-> dbca
    Choose option 1, create a RAC database
    Choose "create a database"
    Choose the general purpose template, Next:
    Configuration type Admin-Managed, database name baydb, select both nodes, Next:
    Keep the defaults: configure OEM, enable the automatic maintenance tasks, Next:
    Use the same password for all database accounts, Next
    Storage: ASM with OMF, data area on the DATA disk group created earlier, Next:
    Flash recovery area on the FLASH disk group created earlier, Next:
    Include the bundled Sample Schemas, Next:
    Database character set AL32UTF8, Next:
    Keep the default database storage settings, Next:
    Click Finish to start creating the database, Next:
    Creating the database can take a while:
    Database created.
    With that, the RAC database is complete!!!

    ASM password: bayaim

    https://rac1:1158/em

    netmgr

    [root@rac1 ~]# cat /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora
    # tnsnames.ora.rac1 Network Configuration File: /u01/app/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora.rac1
    # Generated by Oracle configuration tools.

    RACDB1 =
    (DESCRIPTION =
    (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.1.27)(PORT = 1521))
    )
    (CONNECT_DATA =
    (SERVICE_NAME = racdb1)
    )
    )

    RACDB =
    (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = scan-cluster)(PORT = 1521))
    (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = racdb)
    )
    )

    [root@rac2 ~]# su - grid
    rac2-> lsnrctl status
    -------------------------------------------
    su - grid
    listener: do not create one
    listener_scan: configure it to listen for the racdb1 database
    create: 1. racdb1
            2. racdb2
    su - oracle
    listener: do not create one
    tnsnames: oracle created the RACDB entry automatically (host: scan-cluster)

    -----------------------------------------------------------------
    Issue 4:
    Oracle RAC 11.2.0.4 - RHEL 6.4: DiskGroup resource are not running on nodes. Database instance may not come up on these nodes
    Fix:
    see my separate write-up:
    RAC_11.2.0.4 - RHEL 6.4 DiskGroup resource.docx


    -----------------------------------------------------------------
    Issue 5: a case where DBCA cannot find the ASM disks

    [root@rac2 grid]# su - grid
    rac2-> cd /u01/app/11.2.0/grid/bin
    rac1-> ls -l oracle
    -rwsr-s--x 1 grid oinstall 209836184 Apr 9 17:50 oracle   correct permissions
    -rwxr-x--x 1 grid oinstall 209914519 Sep 7 14:42 oracle   wrong permissions (s bits missing)
    [root@rac1 bin]# chmod 751 oracle
    [root@rac1 bin]# chmod 6751 oracle

    [root@rac2 grid]# su - oracle
    rac2-> cd $ORACLE_HOME/bin
    /u01/app/oracle/product/11.2.0/db_1/bin
    rac2-> ls -l oracle
    -rwsr-s--x 1 oracle oinstall 239501424 Apr 9 20:03 oracle   correct permissions

    Fix the permissions:

    -rwsr-s--x 1 oracle asmdba 239626641 Apr 17 14:34 oracle
    rac1-> chown -R oracle:oinstall oracle
    rac1-> ls -l oracle
    -rwxr-x--x 1 oracle oinstall 239626641 Apr 17 14:34 oracle
    rac1-> chmod 6751 oracle


    rac1-> source .bash_profile
    rac1-> export ORACLE_SID=baydb1
    rac1-> sqlplus / as sysdba

    ============================================================================================
    How to start a database that sits on an asm instance

    1. Start the asm-related services first:
    crsctl start resource ora.cssd

    2. Start the asm instance;
    sqlplus /nolog
    SQL> conn / as sysasm
    SQL>startup

    3. Start the database (see the srvctl sketch below for the clusterware way):
    sqlplus /nolog
    SQL> conn / as sysdba
    SQL>startup
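
    On a running clusterware stack the same is normally done through srvctl; a sketch using this document's database name baydb:

    srvctl start asm -n rac1            # start ASM on one node
    srvctl start database -d baydb      # start every baydb instance
    srvctl status database -d baydb     # confirm both instances are up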

    Fix 1:
    chmod +s oracle
    Go back to the DBCA screen and continue; the ASM disk groups are now found and the database creates successfully.

    $ORACLE_HOME/bin/oracle normally carries the setuid/setgid s bits (mode 6751); if the s bit is lost, OS authentication can no longer log in to the database;

    -----------------------------------------------------------------
    Issue 6: dbca reports ORA-01017 against asm

    Fix 2: [bayaim: beware, this trap must be fixed on both node1 and node2]
    usermod -a -G asmdba,asmadmin,asmoper,dba,oinstall oracle
    usermod -a -G asmdba,asmadmin,asmoper,dba,oinstall grid


    From the error screenshot, check the dbca log and the oraagent_oracle.log it points at

    dbca log:
    /u01/app/oracle/cfgtoollogs/dbca

    /app/grid/11.2.0/log/rac2/agent/crsd/oraagent_oracle

    -----------------------------------------------------------------


    ORA-19624 ORA-19870 ORA-19504 ORA-17502
    A simple experiment today: creating an 11g database on asm failed with the errors above.
    It turned out to be a plain permissions problem:
    [root@rac1 dev]# ls -l /dev/sd*
    brw-rw---- 1 root disk 8, 0 Apr 11 13:46 /dev/sda
    brw-rw---- 1 root disk 8, 1 Apr 11 13:46 /dev/sda1
    brw-rw---- 1 root disk 8, 2 Apr 11 13:46 /dev/sda2
    brwxrwxr-x 1 grid asmadmin 8, 16 Apr 11 13:47 /dev/sdb
    brw-rw---- 1 grid asmadmin 8, 17 Apr 11 13:47 /dev/sdb1
    brwxrwxr-x 1 grid asmadmin 8, 32 Apr 11 13:47 /dev/sdc
    brw-rw---- 1 grid asmadmin 8, 33 Apr 11 13:47 /dev/sdc1
    brwxrwxr-x 1 grid asmadmin 8, 48 Apr 11 13:47 /dev/sdd
    brw-rw---- 1 grid asmadmin 8, 49 Apr 11 13:47 /dev/sdd1
    brwxrwxr-x 1 grid asmadmin 8, 64 Apr 11 13:47 /dev/sde
    brw-rw---- 1 grid asmadmin 8, 65 Apr 11 13:47 /dev/sde1

    [root@rac1 ~]# fdisk -l | grep /dev/sd
    Disk /dev/sda: 64.4 GB, 64424509440 bytes, 125829120 sectors
    /dev/sda1 * 2048 2099199 1048576 83 Linux
    /dev/sda2 2099200 125829119 61864960 8e Linux LVM
    Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
    /dev/sdb1 2048 2097151 1047552 83 Linux
    Disk /dev/sdc: 524 MB, 524288000 bytes, 1024000 sectors
    /dev/sdc1 102399 1023999 460800+ 83 Linux
    Disk /dev/sdd: 8589 MB, 8589934592 bytes, 16777216 sectors
    /dev/sdd1 2048 16777215 8387584 83 Linux
    Disk /dev/sde: 524 MB, 524288000 bytes, 1024000 sectors
    /dev/sde1 2048 1023999 510976 83 Linux
    [root@rac1 ~]# chown -R grid:asmadmin /dev/sdb
    [root@rac1 ~]# chown -R grid:asmadmin /dev/sdc
    [root@rac1 ~]# chown -R grid:asmadmin /dev/sdd
    [root@rac1 ~]# chown -R grid:asmadmin /dev/sde
    [root@rac1 ~]# chmod 775 /dev/sdb
    [root@rac1 ~]# chmod 775 /dev/sdc
    [root@rac1 ~]# chmod 775 /dev/sdd
    [root@rac1 ~]# chmod 775 /dev/sde
    [root@rac1 ~]#

    spfile:+DTA/orcl/spfileorcl.ora

    The Database Control URL is http://rac1:1158/em

    tnsnames.ora:

    [root@rac2 admin]# pwd
    /u01/app/11.2.0/grid/network/admin

    rac1-vip-> env | grep ORA
    ORACLE_SID=+ASM1
    ORACLE_BASE=/u01/app/grid
    ORACLE_TERM=xterm
    ORACLE_HOME=/u01/app/11.2.0/grid
    rac1-vip-> cd /u01/app/11.2.0/grid
    rac1-vip->


    [root@rac2 ~]# find / -name crsctl
    /u01/app/11.2.0/grid/bin/crsctl
    [root@node1 bin]# ./crsctl start has

    The has start command above must be run on each node separately

    Check the node status

    /u01/app/11.2.0/grid/bin/crs_stat -t

    /u01/app/11.2.0/grid/bin/srvctl


    [grid@node1 ~]$ crs_stat -t -v    or    crsctl status resource -t

    2. Start the cluster

    [root@node1 ~]# ./crsctl start cluster -all --start all nodes at once


    [root@node1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all --start all nodes at once

    [root@rac1 log]# ll /dev/raw/*


    The 11g cluster startup order is (check commands follow below):
    ohasd
    cssd
    crsd
    evmd
    ==========================

    Issues:

    1. If the VM runs short of disks, do not power it off to add more: the device names (sdk*) shift, which breaks the ASM udev mapping rules. Always hot-add the disks online (see the rescan sketch below).
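
    After hot-adding a disk in VMware, the guest usually needs a SCSI bus rescan before the new device appears; a minimal sketch (host adapter numbers vary per system):

    # rescan every SCSI host adapter for newly added disks
    for h in /sys/class/scsi_host/host*; do
        echo "- - -" > "$h/scan"
    done
    lsblk    # the new disk should now be listed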

    [root@rac2 ~]# ls -lh /u01/app/11.2.0/grid/crf/db/rac2/crfclust.bdb
    -rw-r----- 1 root root 128K Apr 23 14:59 /u01/app/11.2.0/grid/crf/db/rac2/crfclust.bdb


    [root@rac2 /]# cat /dev/null >./root/.xsession-errors
    You have new mail in /var/spool/mail/root
    [root@rac2 /]# ls -lh ./root/.xsession-errors
    -rw------- 1 root root 260K Apr 30 13:26 ./root/.xsession-errors


    Check the database:
    C:\Users\Administrator>sqlplus sys/bayaim@rac_baydb as sysdba


    [oracle@rac2 ~]$ sqlplus / as sysdba
    SQL> select instance_name,status from v$instance;

    INSTANCE_NAME STATUS
    ---------------- ------------
    racdb1 OPEN
    SQL> SELECT a.NAME,a.DATABASE_ROLE,a.OPEN_MODE,a.LOG_MODE FROM V$DATABASE a;

    alter user user1 account unlock;
