1. Install the operating system (ideally with all packages selected), create the oracle user and groups, set the kernel parameters, and configure .bash_profile.
2. Configure SSH so that the new node and the existing nodes trust each other (user equivalence for passwordless SSH).
3. Install ASMLib, then scan for the ASM disks with
/etc/init.d/oracleasm scandisks
and verify they are recognized with
/etc/init.d/oracleasm querydisks
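A related check, assuming the standard ASMLib init script location, is to list the disks ASMLib has marked:

```
/etc/init.d/oracleasm listdisks
```

Every ASM disk used by the cluster should appear in the output on the new node.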
4. Configure the /etc/hosts file.
If everything checks out, the new node is ready.
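Step 4 above means each node needs three name entries: the public host name, the private-interconnect name, and the VIP name. The sketch below prints /etc/hosts lines for the reinstalled node; the IP addresses and the slavedb/_priv/_vip names are illustrative assumptions, not values taken from this cluster.

```shell
# Generate /etc/hosts entries for one RAC node (illustrative addresses).
node=slavedb
pub_ip=192.168.1.102    # public address (assumed)
priv_ip=10.0.0.102      # private interconnect address (assumed)
vip_ip=192.168.1.112    # virtual IP managed by CRS (assumed)

# printf reuses the format string, so this prints three host lines.
printf '%s\t%s\n' "$pub_ip" "$node" "$priv_ip" "${node}_priv" "$vip_ip" "${node}_vip"
```

Append the equivalent lines (with the real addresses) to /etc/hosts on every node in the cluster.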
Removing a node while the database is running:
1. Remove the instance with dbca. An error similar to "unable to copy slavedb: /etc/oratab to /tmp/oratab.slavedb" may appear; just click OK.
2. Remove the ASM instance. The node name is that of the node being removed (the freshly reinstalled machine).
srvctl stop asm -n <nodename>
srvctl remove asm -n <nodename>
Verify that ASM is removed with:
srvctl config asm -n <nodename>
3. Stop the node applications on that node:
srvctl stop nodeapps -n <nodename>
4. Remove the listener with netca. Because the listener.ora file no longer exists on the failed node, netca cannot remove that node's listener on its own. As a workaround, create the directory
$ORACLE_HOME/network/admin on the failed node, copy the listener.ora from the other instance into it, and change the listener name inside to this host's,
e.g. listener_masterdb becomes listener_slavedb (slavedb being the machine whose OS was reinstalled).
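After the copy, the renamed entry in listener.ora on slavedb might look roughly like the fragment below; the VIP host name and port 1521 are assumptions, so keep whatever the original file actually used:

```
LISTENER_SLAVEDB =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = slavedb_vip)(PORT = 1521))
    )
  )
```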
5. Check with crs_stat that the database resource ora.<db_name>.db is running on the healthy node. If it is not, move it there with crs_relocate ora.<db_name>.db.
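The check and the relocation in step 5 can be done as follows; ora.<db_name>.db is a placeholder for the actual resource name in your cluster:

```
crs_stat -t ora.<db_name>.db     # shows which node hosts the database resource
crs_relocate ora.<db_name>.db    # relocate it to the healthy node if needed
```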
6. As root, remove the node applications:
#srvctl remove nodeapps -n <nodename>
For example:
[root@masterdb bin]# ./srvctl remove nodeapps -n slavedb
Please confirm that you intend to remove the node-level applications on node slavedb (y/[n]) y
7. From a graphical session, run:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 CLUSTER_NODES=node1
Note: the ORACLE_HOME above is the RDBMS home.
8. As root, run:
$ORA_CRS_HOME/install/rootdelete.sh remote nosharedvar
The arguments "remote nosharedvar" are used when removing a damaged node from a healthy node.
For a normal removal, run the following on the node being deleted instead:
$ORA_CRS_HOME/install/rootdelete.sh
9. Delete the node.
[root@masterdb bin]# ./olsnodes -n
masterdb 1
slavedb 2
Run on the active node:
# $ORA_CRS_HOME/install/rootdeletenode.sh <nodename>,<node number>
[root@masterdb install]# ./rootdeletenode.sh slavedb,2
CRS-0210: Could not find resource 'ora.slavedb.LISTENER_SLAVEDB.lsnr'.
CRS-0210: Could not find resource 'ora.slavedb.ons'.
CRS-0210: Could not find resource 'ora.slavedb.vip'.
CRS-0210: Could not find resource 'ora.slavedb.gsd'.
CRS-0210: Could not find resource ora.slavedb.vip.
CRS nodeapps are deleted successfully
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 14 values from OCR.
Key SYSTEM.css.interfaces.nodeslavedb marked for deletion is not there. Ignoring.
Successfully deleted 5 keys from OCR.
Node deletion operation successful.
'slavedb,2' deleted successfully
10. Verify that the node was removed:
[root@masterdb bin]# $ORA_CRS_HOME/bin/olsnodes -n
masterdb 1
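The olsnodes check in step 10 can also be scripted. In a live cluster the node list would come from $ORA_CRS_HOME/bin/olsnodes -n; here it is hard-coded sample text so the check itself can be shown.

```shell
# Confirm a deleted node no longer appears in the cluster node list.
olsnodes_output="masterdb 1"     # in real use: $("$ORA_CRS_HOME"/bin/olsnodes -n)
removed_node="slavedb"

# grep -w matches the node name as a whole word only.
if printf '%s\n' "$olsnodes_output" | grep -qw "$removed_node"; then
    echo "WARNING: $removed_node is still registered in OCR"
else
    echo "$removed_node removed successfully"
fi
```

With the sample input above this prints "slavedb removed successfully".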
11. As the oracle user, run (again from a graphical session):
$ORA_CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=<CRS Home> CLUSTER_NODES=masterdb
Note: the ORACLE_HOME here is the CRS home.
12. Finally, update the local node list:
$ORA_CRS_HOME/oui/bin/runInstaller -updateNodeList -local ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=masterdb
Adding a node while the database is running:
1. Run $ORA_CRS_HOME/oui/bin/addNode.sh
and fill in the new node's information when prompted. At the end, run the following three scripts in order:
a. On the new node, run: [root@slavedb oraInventory]# sh orainstRoot.sh
b. On an existing node, run:
/oracle/app/oracle/product/10.2/crs/install/rootaddnode.sh
[root@masterdb bin]# /oracle/app/oracle/product/10.2/crs/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: slavedb slavedb_priv slavedb
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/oracle/app/oracle/product/10.2/crs/bin/srvctl add nodeapps -n slavedb -A slavedb_vip/255.255.255.0/eth0 -o /oracle/app/oracle/product/10.2/crs
c. On the new node, run:
#/oracle/app/oracle/product/10.2/root.sh
[root@slavedb crs]# sh root.sh
WARNING: directory '/oracle/app/oracle/product/10.2' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS = /dev/raw/raw1
OCR backup directory '/oracle/app/oracle/product/10.2/crs/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: masterdb masterdb_priv masterdb
node 2: slavedb slavedb_priv slavedb
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
masterdb
slavedb
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (0) nodes.
Creating GSD application resource on (0) nodes.
Creating ONS application resource on (0) nodes.
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes.1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/oracle/app/oracle/product/10.2/crs/log/slavedb/racg/ora.slavedb.gsd.log" for more details
..
Starting ONS application resource on (2) nodes.1:CRS-0233: Resource or relatives are currently involved with another operation.
Check the log file "/oracle/app/oracle/product/10.2/crs/log/slavedb/racg/ora.slavedb.ons.log" for more details
..
Done.
Then check the resources with crs_stat -t.
2. Add the database software.
From a graphical session, run:
$ORACLE_HOME/oui/bin/addNode.sh
3. Add the instance.
Run dbca
and choose to add an instance.
Toward the end, if the database uses an ASM instance, DBCA behaves as follows (this part of the notes refers to a new node named oradb5):
DBCA verifies the new node oradb5 and, as the database is configured to use ASM, prompts with the message "ASM is present on the cluster but needs to be extended to the following nodes: [oradb5]. Do you want ASM to be extended?" Click Yes to add ASM to the new instance.
In order to create and start the ASM instance on the new node, Oracle requires the listener to be present and started. DBCA prompts for permission to configure the listener using port 1521 and listener name LISTENER_ORADB5. Click Yes if the default port is good; otherwise click No and manually run NetCA on oradb5 to create the listener using a different port.
Click OK and simply wait until the instance is created successfully.
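Once dbca finishes, the result can be checked from any node; <db_name> is a placeholder for the database name:

```
srvctl status database -d <db_name>   # an instance should be running on each node
crs_stat -t                           # all ora.* resources should be ONLINE
```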
If an OUI-10009 error is reported while adding the RDBMS software, run:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1}"