Oracle/RAC

KSH to BASH
export PS1="[\u@\h \W]\\$ " ; bash

Guides

 * So you want to play with Oracle 11g’s RAC? Here’s how. | StartOracle - http://startoracle.com/2007/09/30/so-you-want-to-play-with-oracle-11gs-rac-heres-how/


 * Build Your Own Oracle RAC 11g Cluster on Oracle Enterprise Linux and iSCSI - http://www.oracle.com/technetwork/articles/hunter-rac11gr2-iscsi-088677.html


 * ORACLE-BASE - Oracle Database 11g Release 1 (11.1) Installation On Enterprise Linux 4.5 and 5.0 - http://www.oracle-base.com/articles/11g/OracleDB11gR1InstallationOnEnterpriseLinux4and5.php

Cluster Ready Services (CRS)
export ORACLE_OWNER=oracle
export ORACLE_HOME=/export/oracle/app/oracle/product/10.2.0/crs
export ORACLE_BASE=/export/oracle/app/oracle
export ORADATA=/export/oracle/app/oracle/oradata
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_LANG=

/export/oracle/app/oracle/oradata/ORACRS_OCR_1.cnf
/export/oracle/app/oracle/oradata/ORACRS_OCR_2.cnf

/export/oracle/app/oracle/oradata/ORACRS_QUORUM_1.vot
/export/oracle/app/oracle/oradata/ORACRS_QUORUM_2.vot
/export/oracle/app/oracle/oradata/ORACRS_QUORUM_3.vot

/export/oracle/app/oracle/oraInventory/orainstRoot.sh
/export/oracle/app/oracle/product/10.2.0/crs/root.sh

vim /export/oracle/app/oracle/product/10.2.0/crs/bin/vipca
vim /export/oracle/app/oracle/product/10.2.0/crs/bin/srvctl
unset LD_ASSUME_KERNEL
vipca -silent

vipca

Database Software Installation
DB:

export ORACLE_OWNER=oracle
export ORACLE_HOME=/export/oracle/app/oracle/product/10.2.0/db
export ORACLE_BASE=/export/oracle/app/oracle
export PATH=$PATH:$ORACLE_HOME/bin

/export/oracle/app/oracle/product/10.2.0/db/root.sh

vim /export/oracle/app/oracle/product/10.2.0/db/bin/srvctl

Cluster Ready Services (CRS) Patch
/export/oracle/app/oracle/product/10.2.0/crs/bin/crsctl stop crs
/export/oracle/app/oracle/product/10.2.0/crs/install/root102.sh

Cluster Database Patch
/export/oracle/app/oracle/product/10.2.0/db/root.sh

Oracle Commands
ocrcheck
crsctl check crs
oifcfg getif
crs_stat -t
crsctl query css votedisk

crsctl
Usage: crsctl check crs                    - checks the viability of the CRS stack
       crsctl check cssd                   - checks the viability of CSS
       crsctl check crsd                   - checks the viability of CRS
       crsctl check evmd                   - checks the viability of EVM
       crsctl set css <parameter> <value>  - sets a parameter override
       crsctl get css <parameter>          - gets the value of a CSS parameter
       crsctl unset css <parameter>        - sets CSS parameter to its default
       crsctl query css votedisk           - lists the voting disks used by CSS
       crsctl add css votedisk <path>      - adds a new voting disk
       crsctl delete css votedisk <path>   - removes a voting disk
       crsctl enable crs                   - enables startup for all CRS daemons
       crsctl disable crs                  - disables startup for all CRS daemons
       crsctl start crs                    - starts all CRS daemons
       crsctl stop crs                     - stops all CRS daemons; stops CRS resources in case of cluster
       crsctl start resources              - starts CRS resources
       crsctl stop resources               - stops CRS resources
       crsctl debug statedump evm          - dumps state info for evm objects
       crsctl debug statedump crs          - dumps state info for crs objects
       crsctl debug statedump css          - dumps state info for css objects
       crsctl debug log css [module:level]{,module:level} ... - turns on debugging for CSS
       crsctl debug trace css              - dumps CSS in-memory tracing cache
       crsctl debug log crs [module:level]{,module:level} ... - turns on debugging for CRS
       crsctl debug trace crs              - dumps CRS in-memory tracing cache
       crsctl debug log evm [module:level]{,module:level} ... - turns on debugging for EVM
       crsctl debug trace evm              - dumps EVM in-memory tracing cache
       crsctl debug log res <resname:level> - turns on debugging for resources
       crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
       crsctl query crs activeversion      - lists the CRS software operating version
       crsctl lsmodules css                - lists the CSS modules that can be used for debugging
       crsctl lsmodules crs                - lists the CRS modules that can be used for debugging
       crsctl lsmodules evm                - lists the EVM modules that can be used for debugging

If necessary, any of these commands can be run with additional tracing by prepending a "trace" argument. Example: crsctl trace check css

Failure at final check of Oracle CRS stack
/softw/app/oracle/product/10.2.0/crs/root.sh:

Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack

Linux « Thomas Vogt’s IT Blog:
 * "I found more than one issue that can help solving the problem."


 * Use a dedicated switch for the cluster interconnect.
 * Set the cluster interconnect MTU=1500 (e.g. change it to MTU=9000 later).
 * Add only two cluster nodes to the CRS with the initial installation, and add the other nodes separately with the addNode.sh script.

Adding New Nodes to Your Oracle RAC 10g Cluster on Linux:
 * "A step-by-step guide to adding a node to an existing Oracle RAC 10g Release 2 cluster


 * In most businesses, a primary business requirement for an Oracle Real Application Clusters (RAC) configuration is scalability of the database tier across the entire system—so that when the number of users increases, additional instances can be added to the cluster to distribute the load.


 * In Oracle RAC 10g, this specific feature has become much easier. Oracle incorporates the plug-and-play feature with a few minimum steps of setup, after the node/instance is brought to a usable state.


 * In this article, I will discuss the steps required to add a node to an existing Oracle RAC 10g Release 2 cluster."

vipca
vipca -silent

If you get this error, you have not yet reached the last screen of the install (Oracle Cluster Verification Utility). Simply proceed with the install to the final verification stage.

Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]

If the Oracle Cluster Verification Utility fails, run vipca and specify the virtual IP aliases: vipca

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL
 * 1) Remove this workaround when bug 3937317 is fixed

/softw/app/oracle/product/10.2.0/crs/bin/vipca
/softw/app/oracle/product/10.2.0/crs/bin/srvctl
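The manual vim edit of the two wrapper scripts can also be scripted. A minimal sketch, assuming GNU sed; the demo file stands in for the real scripts, so on a real node point the loop at $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl instead:

```shell
# Demo copy of the relevant wrapper lines (real target: $CRS_HOME/bin/vipca, srvctl)
workdir=$(mktemp -d)
printf 'LD_ASSUME_KERNEL=2.4.19\nexport LD_ASSUME_KERNEL\n' > "$workdir/vipca"

for f in "$workdir/vipca" ; do
  cp "$f" "$f.bak"   # keep a backup before editing
  # insert 'unset LD_ASSUME_KERNEL' right after the export line
  sed -i '/^export LD_ASSUME_KERNEL/a\
unset LD_ASSUME_KERNEL' "$f"
done
cat "$workdir/vipca"
```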

RHEL53
CRS:

export ORACLE_OWNER=oracle
export ORACLE_HOME=/export/oracle/app/oracle/product/10.2.0/crs
export ORACLE_BASE=/export/oracle/app/oracle
export ORADATA=/export/oracle/app/oracle/oradata
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_LANG=

/export/oracle/app/oracle/oradata/ORACRS_OCR_1.cnf
/export/oracle/app/oracle/oradata/ORACRS_OCR_2.cnf

/export/oracle/app/oracle/oradata/ORACRS_QUORUM_1.vot
/export/oracle/app/oracle/oradata/ORACRS_QUORUM_2.vot
/export/oracle/app/oracle/oradata/ORACRS_QUORUM_3.vot

/export/oracle/app/oracle/oraInventory/orainstRoot.sh
/export/oracle/app/oracle/product/10.2.0/crs/root.sh

vim /export/oracle/app/oracle/product/10.2.0/crs/bin/vipca
vim /export/oracle/app/oracle/product/10.2.0/crs/bin/srvctl
unset LD_ASSUME_KERNEL
vipca -silent

vipca

DB:

export ORACLE_OWNER=oracle
export ORACLE_HOME=/export/oracle/app/oracle/product/10.2.0/db
export ORACLE_BASE=/export/oracle/app/oracle
export PATH=$PATH:$ORACLE_HOME/bin

/export/oracle/app/oracle/product/10.2.0/db/root.sh

vim /export/oracle/app/oracle/product/10.2.0/db/bin/srvctl

clusterware Patch:

/export/oracle/app/oracle/product/10.2.0/crs/bin/crsctl stop crs
/export/oracle/app/oracle/product/10.2.0/crs/install/root102.sh

DB Patch:

/export/oracle/app/oracle/product/10.2.0/db/root.sh

RHEL48 CRS
export ORACLE_OWNER=oracle
export ORACLE_HOME=/export/oracle/app48/oracle/product/10.2.0/crs
export ORACLE_BASE=/export/oracle/app48/oracle
export PATH=$PATH:$ORACLE_HOME/bin

/export/oracle/app48/oracle/oradata/ORACRS_OCR_1.cnf
/export/oracle/app48/oracle/oradata/ORACRS_OCR_2.cnf

/export/oracle/app48/oracle/oradata/ORACRS_QUORUM_1.vot
/export/oracle/app48/oracle/oradata/ORACRS_QUORUM_2.vot
/export/oracle/app48/oracle/oradata/ORACRS_QUORUM_3.vot

/export/oracle/app48/oracle/oraInventory/orainstRoot.sh
/export/oracle/app48/oracle/product/10.2.0/crs/root.sh

Oracle Commands
ocrcheck
crsctl check crs
oifcfg getif
crs_stat -t

Post-check for cluster services setup:

/export/oracle/app/oracle/product/10.2.0/crs/bin/cluvfy stage -post crsinst -n hdsbebdsdp1,hdsbebdsdp2,hdsbebdsdp3,hdsbebdsdp4

[root@hdsbebdsdp2 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     168564
         Used space (kbytes)      :       1988
         Available space (kbytes) :     166576
         ID                       : 1080342256
         Device/File Name         : /export/oracle/app/oracle/oradata/ORACRS_OCR_1.cnf
                                    Device/File integrity check succeeded
         Device/File Name         : /export/oracle/app/oracle/oradata/ORACRS_OCR_2.cnf
                                    Device/File integrity check succeeded

Cluster registry integrity check succeeded

[root@hdsbebdsdp1 ~]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

[root@hdsbebdsdp1 ~]# oifcfg getif
bond0  172.17.6.0  global  public
vlan300  172.17.6.192  global  cluster_interconnect

[root@hdsbebdsdp1 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....dp1.gsd application    ONLINE    ONLINE    hdsbebdsdp1
ora....dp1.ons application    ONLINE    ONLINE    hdsbebdsdp1
ora....dp1.vip application    ONLINE    ONLINE    hdsbebdsdp1
ora....dp2.gsd application    ONLINE    ONLINE    hdsbebdsdp2
ora....dp2.ons application    ONLINE    ONLINE    hdsbebdsdp2
ora....dp2.vip application    ONLINE    ONLINE    hdsbebdsdp2
ora....dp3.gsd application    ONLINE    ONLINE    hdsbebdsdp3
ora....dp3.ons application    ONLINE    ONLINE    hdsbebdsdp3
ora....dp3.vip application    ONLINE    ONLINE    hdsbebdsdp3
ora....dp4.gsd application    ONLINE    ONLINE    hdsbebdsdp4
ora....dp4.ons application    ONLINE    ONLINE    hdsbebdsdp4
ora....dp4.vip application    ONLINE    ONLINE    hdsbebdsdp4

[root@hdsbebdsdp2 ~]# crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

oracle_cluster:

crs
hdsbebdsdp1:clbebd1:hdsbebdsdp1-vip
hdsbebdsdp2:clbebd2:hdsbebdsdp2-vip
hdsbebdsdp3:clbebd3:hdsbebdsdp3-vip
hdsbebdsdp4:clbebd4:hdsbebdsdp4-vip

$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     168564
         Used space (kbytes)      :         40
         Available space (kbytes) :     168524
         ID                       : 1948804438
         Device/File Name         : /export/oracle/app/oracle/oradata/ORACRS_OCR_1.cnf
                                    Device/File integrity check succeeded
         Device/File Name         : /export/oracle/app/oracle/oradata/ORACRS_OCR_2.cnf
                                    Device/File integrity check succeeded

Cluster registry integrity check succeeded

$ ocrconfig -overwrite

export NLS_LANG=

vipca -silent

rm -rf /etc/{oracle,oratab,oraInst.loc} /etc/init.d/init.{crs,crsd,cssd,evmd} \
  /export/oracle/app/oracle/{oraInventory,product} /usr/local/bin/{coraenv,dbhome,oraenv}
rm -rf /tmp/*
rm -rf /export/oracle/*

for i in 1 2 3 4 5 ; do dd if=/dev/zero of=/dev/raw/raw$i ; done

export ORADATA=/export/oracle/app/oracle/oradata
mkdir -p $ORADATA
ln -s /dev/raw/raw1 $ORADATA/ORACRS_OCR_1.cnf
ln -s /dev/raw/raw2 $ORADATA/ORACRS_OCR_2.cnf
ln -s /dev/raw/raw3 $ORADATA/ORACRS_QUORUM_1.vot
ln -s /dev/raw/raw4 $ORADATA/ORACRS_QUORUM_2.vot
ln -s /dev/raw/raw5 $ORADATA/ORACRS_QUORUM_3.vot
chown -R oracle:oinstall $ORADATA
chown oracle:oinstall $ORADATA/..
chown oracle:oinstall $ORADATA/../..
chown oracle:oinstall $ORADATA/../../..

ERROR:

Expecting the CRS daemons to be up within 600 seconds.
Failure at final check of Oracle CRS stack.

This will happen if the OCR/voting files were not cleared and the cluster nodes have changed.

NETWORK MIGRATION
NOTE: AS THE ROOT USER!

Cluster instance virtual IP addresses must be relocated to the private1 network. To do so, we will create one *.cap file per instance registered in the cluster. The file must be named ora.<nodename>.vip.cap and contain the following information:

The current configuration can be obtained using: crs_stat -p ora.hdsbebdsdp1.vip

NAME=ora.hdsbebdsdp1.vip
TYPE=application
ACTION_SCRIPT=/export/oracle/app/oracle/product/10.2.0/crs/bin/racgwrap
ACTIVE_PLACEMENT=1
AUTO_START=1
CHECK_INTERVAL=30
DESCRIPTION=CRS application for VIP on a node
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=hdsbebdsdp1
OPTIONAL_RESOURCES=
PLACEMENT=favored
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=0
SCRIPT_TIMEOUT=60
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=bond0
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=255.255.255.192
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=172.17.6.35

crs_stat -p ora.hdsbebdsdp1.vip > /export/oracle/app/oracle/product/10.2.0/crs/crs/profile/ora.hdsbebdsdp1.vip.cap
crs_stat -p ora.hdsbebdsdp2.vip > /export/oracle/app/oracle/product/10.2.0/crs/crs/profile/ora.hdsbebdsdp2.vip.cap
crs_stat -p ora.hdsbebdsdp3.vip > /export/oracle/app/oracle/product/10.2.0/crs/crs/profile/ora.hdsbebdsdp3.vip.cap
crs_stat -p ora.hdsbebdsdp4.vip > /export/oracle/app/oracle/product/10.2.0/crs/crs/profile/ora.hdsbebdsdp4.vip.cap

/export/oracle/app/oracle/product/10.2.0/crs/crs/profile/

Modified lines:

NAME=ora.hdsbebdsdp1.vip
HOSTING_MEMBERS=hdsbebdsdp1
USR_ORA_IF=vlan200
USR_ORA_NETMASK=255.255.255.192
USR_ORA_VIP=172.17.6.163

Stop all the cluster resources, unregister current VIP addresses and register the new ones.

-- From the first machine of the cluster run the next commands as root
-- Stop all the cluster resources
crs_stop -all
-- Unregister VIP addresses and register the new ones
crs_unregister ora.<nodename>.vip
crs_register ora.<nodename>.vip

-- Start all the necessary resources (one execution per resource)
crs_start

crs_stop -all

crs_unregister ora.hdsbebdsdp1.vip crs_unregister ora.hdsbebdsdp2.vip crs_unregister ora.hdsbebdsdp3.vip crs_unregister ora.hdsbebdsdp4.vip

crs_register ora.hdsbebdsdp1.vip crs_register ora.hdsbebdsdp2.vip crs_register ora.hdsbebdsdp3.vip crs_register ora.hdsbebdsdp4.vip


 * 1) crs_start ora.hdsbebdsdp1.vip
 * 2) crs_start ora.hdsbebdsdp2.vip
 * 3) crs_start ora.hdsbebdsdp3.vip
 * 4) crs_start ora.hdsbebdsdp4.vip

crs_start -all

[root@hdsbebdsdp2 ~]# crs_register ora.hdsbebdsdp2.vip
CRS-0181: Cannot access the resource profile '/export/oracle/app/oracle/product/10.2.0/crs/crs/profile/ora.hdsbebdsdp2.vip.cap'.

[root@hdsbebdsdp2 ~]# cp ora* /export/oracle/app/oracle/product/10.2.0/crs/crs/profile
[root@hdsbebdsdp2 ~]# chown oracle:oinstall /export/oracle/app/oracle/product/10.2.0/crs/crs/profile/*

CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "bond0" is not public. Public interfaces should be used to configure virtual IPs.

Patches
NOTE: As the Oracle user (unless otherwise specified)

List the patches for the specified Oracle home:

export ORACLE_HOME=...   # db or crs home
opatch lsinventory
opatch lsinventory -oh $ORACLE_HOME

To apply patches, read the included README.TXT.

Status Commands
Check commands (as the 'root' or 'oracle' user):

crsctl check crs
crs_stat -t
oifcfg getif
ocrcheck
crsctl query css votedisk

Environment
Environment for both the 'root' and 'oracle' users:

export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_HOME=$CRS_HOME # for CRS usage
export ORACLE_HOME=$DB_HOME  # for DB usage
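Since these notes switch ORACLE_HOME back and forth many times, a small pair of helper functions keeps that to one word. A convenience sketch only; the function names are mine, not part of the install:

```shell
CRS_HOME=/softw/app/oracle/product/10.2.0/crs
DB_HOME=/softw/app/oracle/product/10.2.0/db

# switch the active home; note: each call appends $ORACLE_HOME/bin to PATH again
use_crs () { export ORACLE_HOME=$CRS_HOME ; export PATH=$PATH:$ORACLE_HOME/bin ; }
use_db  () { export ORACLE_HOME=$DB_HOME  ; export PATH=$PATH:$ORACLE_HOME/bin ; }
```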

Setup
/softw/app/oracle/oraInventory/orainstRoot.sh
/softw/app/oracle/product/10.2.0/crs/root.sh

/altamira_bd/PPGA/shared/ocr1/ocr1
/altamira_bd/PPGA/shared/ocr2/ocr2
/altamira_bd/PPGA/shared/vote1/vote1
/altamira_bd/PPGA/shared/vote2/vote2
/altamira_bd/PPGA/shared/vote3/vote3

PROT-1: Failed to initialize ocrconfig

crsctl stop crs
cd /softw/app/sw/patches/7715304
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
custom/scripts/prerootpatch.sh -crshome $CRS_HOME -crsuser oracle

cd /softw/app/sw/patches/7715304
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
custom/scripts/prepatch.sh -crshome $CRS_HOME

custom/server/7715304/custom/scripts/prepatch.sh -dbhome $DB_HOME

export PATH=$PATH:$ORACLE_HOME/OPatch
yes | $CRS_HOME/OPatch/opatch napply -local -oh $CRS_HOME -id 7715304

yes | $DB_HOME/OPatch/opatch napply custom/server/ -local -oh $DB_HOME -id 7715304

custom/scripts/postpatch.sh -crshome $CRS_HOME

custom/server/7715304/custom/scripts/postpatch.sh -dbhome $DB_HOME

ROOT:

custom/scripts/postrootpatch.sh -crshome $CRS_HOME
$CRS_HOME/bin/crsctl check crs

ocfs2-tools-1.4.3-1.el5
ocfs2console-1.4.3-1.el5
ocfs2-2.6.18-128.el5-1.4.4-1.el5

create: lun12 (360060e80100520e0052faebe0000000b) HITACHI,DF600F
[size=2.0G][features=0][hwhandler=0][n/a]
\_ round-robin 0 [prio=16][undef]
 \_ 6:0:5:0 sdab 65:176 [undef][ready]

[root@hdsbebdsg1 auto]# mkfs.ocfs2 –b 4K –C 128K –N 4 –L ppga_vote1 /dev/mapper/lun12p1
mkfs.ocfs2: Block count bad - 4K

Note the en dashes (–) pasted into the options above; because of them mkfs.ocfs2 misparses its arguments and fails. Retype the commands with plain ASCII hyphens, as below:

mkfs.ocfs2 -b 4K -C 128K -N 4 -L ppga_vote1 /dev/mapper/lun12p1
mkfs.ocfs2 -b 4K -C 128K -N 4 -L ppga_vote2 /dev/mapper/lun12p2
mkfs.ocfs2 -b 4K -C 128K -N 4 -L ppga_vote3 /dev/mapper/lun12p3
mkfs.ocfs2 -b 4K -C 128K -N 4 -L ppga_ocr1 /dev/mapper/lun12p5
mkfs.ocfs2 -b 4K -C 128K -N 4 -L ppga_ocr2 /dev/mapper/lun12p6

tunefs.ocfs2 -L ppga_vote1 -N 8 /dev/mapper/lun12p1
tunefs.ocfs2 -L ppga_vote2 -N 8 /dev/mapper/lun12p2
tunefs.ocfs2 -L ppga_vote3 -N 8 /dev/mapper/lun12p3
tunefs.ocfs2 -L ppga_ocr1 -N 8 /dev/mapper/lun12p5
tunefs.ocfs2 -L ppga_ocr2 -N 8 /dev/mapper/lun12p6

mount /dev/mapper/lun12p1 /altamira_bd/PPGA/shared/vote1
mount /dev/mapper/lun12p2 /altamira_bd/PPGA/shared/vote2
mount /dev/mapper/lun12p3 /altamira_bd/PPGA/shared/vote3
mount /dev/mapper/lun12p5 /altamira_bd/PPGA/shared/ocr1
mount /dev/mapper/lun12p6 /altamira_bd/PPGA/shared/ocr2

/dev/mapper/lun12p1 on /altamira_bd/PPGA/shared/vote1 type ocfs2 (rw,_netdev,datavolume,heartbeat=local)
/dev/mapper/lun12p2 on /altamira_bd/PPGA/shared/vote2 type ocfs2 (rw,_netdev,datavolume,heartbeat=local)
/dev/mapper/lun12p3 on /altamira_bd/PPGA/shared/vote3 type ocfs2 (rw,_netdev,datavolume,heartbeat=local)
/dev/mapper/lun12p5 on /altamira_bd/PPGA/shared/ocr1 type ocfs2 (rw,_netdev,datavolume,heartbeat=local)
/dev/mapper/lun12p6 on /altamira_bd/PPGA/shared/ocr2 type ocfs2 (rw,_netdev,datavolume,heartbeat=local)

Get two node cluster working, then add other nodes.

Start VNC session as oracle user.

Install Clusterware
As the 'oracle' user:

cat 10201_clusterware_linux_x86_64.cpio.gz | gzip -d | cpio -dmvi
cd clusterware
./runInstaller

Reply 'y' to rootpre.sh being run.

The correct paths should already be populated based on the environment variables.

inventory dir: /softw/app/oracle/oraInventory
system group: oinstall

Name: OraCrs10g_home
Path: /softw/app/oracle/product/10.2.0/crs

The Prerequisite Check should report zero issues.

Import cluster file:

crs
hdsbebdsg1:clbebd1:hdsbebdsg1-vip
hdsbebdsg2:clbebd2:hdsbebdsg2-vip

Set the interfaces as follows:

bond0   - Public (to be fixed later -> Not Used)
vlan200 - Do Not Use (to be fixed later -> Public)
vlan300 - Private (cluster interconnect)

OCR: (Normal)
OCR Location: /altamira_bd/PPGA/shared/ocr1/ocr1
OCR Mirror:   /altamira_bd/PPGA/shared/ocr2/ocr2

Vote Disk: (Normal)
Vote0: /altamira_bd/PPGA/shared/vote1/vote1
Vote1: /altamira_bd/PPGA/shared/vote2/vote2
Vote2: /altamira_bd/PPGA/shared/vote3/vote3

Install Clusterware

Root commands (run on both nodes, one at a time; this will take some time):

/softw/app/oracle/oraInventory/orainstRoot.sh
/softw/app/oracle/product/10.2.0/crs/root.sh

Note: Before finishing the last node, fix vipca to avoid the vipca error.

Vipca fix (as root):

 * http://cs.felk.cvut.cz/10gr2/relnotes.102/b15659/toc.htm#CJABAIIF
 * http://dba.5341.com/msg/74892.html

Modify $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl on all nodes

 * 1) Add 'unset LD_ASSUME_KERNEL' immediately after the export:

export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL

This will result in a different vipca error, which can be ignored:

Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]

Finish the configuration assistant. You will see the following error:

OUI-25031:Some of the configuration assistants failed.

You will see that it is the "Oracle Cluster Verification Utility" that has failed. We will fix this next.

Close clusterware installer

Open a terminal in the GUI as the root user and run 'vipca' (the Oracle paths must be set up as shown above).
 * Select bond0 for interface
 * Set the IP alias to "hdsbebdsg1-vip", hit tab, and the rest should auto-populate.
 * Fix subnet mask to be 255.255.255.192 to match the environment

After vipca finishes and closes, run the following to check status.

Check cluster status:

 * 1) crsctl check crs

CSS appears healthy
CRS appears healthy
EVM appears healthy

Other status commands:

crs_stat -t
oifcfg getif
ocrcheck
crsctl query css votedisk

Install Database
Set path to oracle database: export ORACLE_HOME=$DB_HOME

Run the installer (as the 'oracle' user):

cat 10201_database_linux_x86_64.cpio.gz | gzip -d | cpio -dmvi
cd database
./runInstaller

Choose "Enterprise Edition"

Use the default settings (from the environment variables):

Name: OraDb10g_home1
Path: /softw/app/oracle/product/10.2.0/db

Cluster Installation
 * click the "Select All" to select all nodes

Verify no errors with Prerequisite Check and continue.
 * There are 2 warnings: kernel settings and swap space

Configuration Option:
 * Select "Install database Software only"

Install Oracle Database software.

Run as root: (on each node, one at a time) /softw/app/oracle/product/10.2.0/db/root.sh

Fix DB srvctl on each node (just in case):
 * Modify $DB_HOME/bin/srvctl on all nodes

export LD_ASSUME_KERNEL unset LD_ASSUME_KERNEL
 * 1) add unset after export:

Install Clusterware Update
Set path to oracle: export ORACLE_HOME=$CRS_HOME

Stop cluster as root user: crsctl stop crs

Run the patch installer:

mkdir patch10204
cd patch10204
unzip ../p6810189_10204_Linux-x86-64.zip
cd Disk1
./runInstaller

Settings:

Name: OraCrs10g_home
Path: /softw/app/oracle/product/10.2.0/crs

Cluster Installation Mode:
 * All nodes should already be selected

Prerequisite Checks:
 * There should be zero errors and zero warnings

Install Patch

Root commands (run on each node, one at a time):

/softw/app/oracle/product/10.2.0/crs/bin/crsctl stop crs
/softw/app/oracle/product/10.2.0/crs/install/root102.sh

Verify Cluster: crsctl check crs

Install Database Update
Set path to oracle: export ORACLE_HOME=$DB_HOME

Run the patch installer as the 'oracle' user:

cd patch10204/Disk1
./runInstaller

Settings:

Name: OraDb10g_home1
Path: /softw/app/oracle/product/10.2.0/db

Cluster Installation Mode:
 * All nodes should already be selected

Prerequisite Checks:
 * There should be zero errors and zero warnings

Manager Registration:
 * Ignore and continue

Install Patch

Root commands (run on each node, one at a time):

/softw/app/oracle/product/10.2.0/db/root.sh

Note: Overwrite each setting, just to be safe.

Exit installer.

Verify Cluster: crsctl check crs

Add Remaining Nodes
Adding New Nodes to Your Oracle RAC 10g Cluster on Linux http://www.oracle.com/technology/pub/articles/vallath-nodes.html

ADD NODES - CLUSTER:

As the 'oracle' user, run the following commands on node1:

cd $CRS_HOME/oui/bin
./addNode.sh

Add the remaining nodes, and fix the 'private' interface to match the previous nodes.

As the 'root' user, run the following command on the added nodes: /softw/app/oracle/oraInventory/orainstRoot.sh

As the 'root' user, run this on node 1: /softw/app/oracle/oraInventory/orainstRoot.sh

As the 'root' user, run the following command on the added nodes: (run on all about same time) /softw/app/oracle/product/10.2.0/crs/root.sh

root.sh waits for all nodes to join the cluster, so start it on each node without waiting for the previous run to finish.
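The launch pattern root.sh needs is "start everywhere in parallel, then wait for the rendezvous". A sketch with a local stand-in command (the node names are hypothetical); on a real cluster, replace the subshell with `ssh root@$n /softw/app/oracle/product/10.2.0/crs/root.sh`:

```shell
NODES="node3 node4"            # hypothetical names of the added nodes

for n in $NODES ; do
  # '&' backgrounds each run so all nodes start root.sh at about the same time
  ( echo "root.sh started on $n" ; sleep 1 ; echo "root.sh finished on $n" ) &
done
wait                           # returns once the job has completed on every node
echo "all nodes joined"
```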

ADD NODES - DATABASE:

As the 'oracle' user, run the following commands on node1:

cd $DB_HOME/oui/bin
./addNode.sh

The nodes should already be selected from the add cluster step above.

As the 'root' user, run the following command on the added nodes: /softw/app/oracle/product/10.2.0/db/root.sh

Migrate VIP Interface
NOTE: AS THE ROOT USER AND ON NODE1:

To migrate the VIP network interfaces we will need to provide the updated information in files named ora.<nodename>.vip.cap. These contain the profiles for the services. To retrieve the current VIP profiles:

crs_stat -p ora.hdsbebdsg1.vip

for i in $( seq 1 8 ) ; do
  crs_stat -p ora.hdsbebdsg$i.vip > $CRS_HOME/crs/profile/ora.hdsbebdsg$i.vip.cap
done

Modify the following lines:

NAME=ora.hdsbebdsdp1.vip        # no change - auto-populated by the creation command
HOSTING_MEMBERS=hdsbebdsdp1     # no change - auto-populated by the creation command
USR_ORA_IF=vlan200              # change from 'bond0' to 'vlan200'
USR_ORA_NETMASK=255.255.255.192 # no change - auto-populated by the creation command
USR_ORA_VIP=172.17.6.163        # change from the 'bond0' address to the 'vlan200' address
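The same edit can be done with sed instead of a manual editor. A sketch only: the interface and VIP values are the ones from this migration, and the demo operates on a scratch copy of the two relevant lines so it can run anywhere; on a real node, point it at the *.cap files under $CRS_HOME/crs/profile/:

```shell
# scratch copy standing in for a real ora.<nodename>.vip.cap profile
cap=$(mktemp)
printf 'USR_ORA_IF=bond0\nUSR_ORA_VIP=172.17.6.35\n' > "$cap"

# swap the interface and VIP address in place
sed -i -e 's/^USR_ORA_IF=.*/USR_ORA_IF=vlan200/' \
       -e 's/^USR_ORA_VIP=.*/USR_ORA_VIP=172.17.6.163/' "$cap"
cat "$cap"
```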

Stop all the cluster resources, unregister current VIP addresses and register the new ones.

From the first machine of the cluster run the next commands as root user. Stop all the cluster resources: crs_stop -all

Unregister VIP addresses and register the new ones:
 * 1) crs_unregister ora.<nodename>.vip
 * 2) crs_register ora.<nodename>.vip

for i in $( seq 1 8 ) ; do
  crs_unregister ora.hdsbebdsg$i.vip
  crs_register ora.hdsbebdsg$i.vip
done

Start all the necessary resources (one execution per resource):
 * 1) crs_start

for i in $( seq 1 8 ) ; do
  crs_start ora.hdsbebdsg$i.vip
done

Start the remaining services:

crs_start -all

7715394
Patch 7715304 should be applied first: "7715304 - CRS 10.2.0.4 CRS Recommended Patch Bundle #3". For patches with specific requests, check section 1.6.1 of the README.

Stop CRS on all nodes (as root):
 * 1) crsctl stop crs

for i in $( seq 1 8 ) ; do
  ssh 172.17.6.4$i "export ORACLE_HOME=/softw/app/oracle/product/10.2.0/crs ; \
    export PATH=\$PATH:\$ORACLE_HOME/bin ; crsctl stop crs"
done

Pre-installation (as root):

cd /softw/app/sw/patches
cd 7715304
./custom/scripts/prerootpatch.sh -crshome $CRS_HOME -crsuser oracle

for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$CRS_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     ./custom/scripts/prerootpatch.sh -crshome \$CRS_HOME -crsuser oracle"
done
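The per-node loops in this section all repeat the same environment preamble and differ only in the command run. A hedged consolidation sketch; the function name and the RSH override knob are mine, not part of the patch procedure:

```shell
# environment set up on each remote node before the per-node command runs
ORAENV='export ORACLE_BASE=/softw/app/oracle ;
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ;
export DB_HOME=/softw/app/oracle/product/10.2.0/db'

RSH=${RSH:-ssh}          # transport; override (e.g. RSH=echo) for a dry run

on_all_nodes () {        # $1 = command to run on every node with the env set
  for i in $( seq 1 8 ) ; do
    echo "====== 172.17.6.4$i ======"
    $RSH 172.17.6.4$i "$ORAENV ; cd /softw/app/sw/patches/7715304 ; $1"
  done
}
# e.g.: on_all_nodes './custom/scripts/prepatch.sh -crshome $CRS_HOME'
```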

Pre-installation of Oracle Clusterware (as oracle):

cd 7715304
./custom/scripts/prepatch.sh -crshome $CRS_HOME

su - oracle
for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$CRS_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     ./custom/scripts/prepatch.sh -crshome \$CRS_HOME"
done ; exit

Pre-installation of the Oracle database (as oracle):

cd 7715304
./custom/server/7715304/custom/scripts/prepatch.sh -dbhome $ORACLE_HOME

su - oracle
for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$DB_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     ./custom/server/7715304/custom/scripts/prepatch.sh -dbhome \$ORACLE_HOME"
done ; exit

Ignore this error:

Unable to determine value for ORACLE_BASE. Ignoring...

Installation of the Clusterware patch (as oracle):

cd 7715304
opatch napply -local -oh $CRS_HOME -id 7715304

su - oracle
for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$CRS_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     yes y | \$CRS_HOME/OPatch/opatch napply -local -oh \$CRS_HOME -id 7715304"
done ; exit

Installation of the database patch (as oracle):

cd 7715304
opatch napply custom/server/ -local -oh $DB_HOME -id 7715304

su - oracle
for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$DB_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     yes y | \$DB_HOME/OPatch/opatch napply custom/server/ -local -oh \$ORACLE_HOME -id 7715304"
done ; exit

Post-installation of Oracle Clusterware (as oracle):

cd 7715304
./custom/scripts/postpatch.sh -crshome $CRS_HOME

su - oracle
for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$CRS_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     ./custom/scripts/postpatch.sh -crshome \$CRS_HOME"
done ; exit

Post-installation of the Oracle database (as oracle):

cd 7715304
./custom/server/7715304/custom/scripts/postpatch.sh -dbhome $ORACLE_HOME

su - oracle
for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$DB_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     ./custom/server/7715304/custom/scripts/postpatch.sh -dbhome \$ORACLE_HOME"
done ; exit

Root post-installation and Clusterware startup (as root):

cd 7715304
./custom/scripts/postrootpatch.sh -crshome $CRS_HOME

for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$CRS_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     cd /softw/app/sw/patches ; cd 7715304 ; \
     ./custom/scripts/postrootpatch.sh -crshome \$CRS_HOME"
done

Finally, verify that the whole stack is up with the following command (as root):

$CRS_HOME/bin/crsctl check crs

for i in $( seq 1 8 ) ; do
  echo "" ; echo "====== 172.17.6.4$i ======" ; echo ""
  ssh 172.17.6.4$i \
    "export ORACLE_OWNER=oracle ; \
     export ORACLE_BASE=/softw/app/oracle ; \
     export ORADATA=/softw/app/oracle/oradata ; \
     export CRS_HOME=/softw/app/oracle/product/10.2.0/crs ; \
     export DB_HOME=/softw/app/oracle/product/10.2.0/db ; \
     export ORACLE_HOME=\$CRS_HOME ; \
     export PATH=\$PATH:\$ORACLE_HOME/bin ; \
     \$CRS_HOME/bin/crsctl check crs"
done
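To spot a degraded node at a glance, the per-node `crsctl check crs` output can be reduced to a pass/fail summary. The "appears healthy" strings below are an assumption about the 10.2 output format, and `fake_crsctl` is a stand-in for `$CRS_HOME/bin/crsctl check crs`; adjust the pattern to whatever your release actually prints:

```shell
# Count "healthy" lines in (assumed) crsctl check crs output.
# fake_crsctl stands in for "$CRS_HOME/bin/crsctl check crs".
fake_crsctl() {
  echo "CSS appears healthy"
  echo "CRS appears healthy"
  echo "EVM appears healthy"
}
healthy=$( fake_crsctl | grep -c "appears healthy" )
if [ "$healthy" -eq 3 ] ; then
  echo "stack OK"
else
  echo "stack DEGRADED"
fi
```

Wrapped around the ssh loop above, this turns eight screens of output into one line per node.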

7573282
Cluster:

As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$CRS_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 7573282
opatch napply -skip_subset -skip_duplicate
exit

NOTE: this step requires user intervention to answer 'y' and to confirm the node hostnames.

DB:

As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 7573282
opatch napply -skip_subset -skip_duplicate
exit

NOTE: this step requires user intervention to answer 'y' and to confirm the node hostnames.
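When every prompt only expects a 'y', the same `yes y |` pipe used for patch 7715304 above makes the run non-interactive; it will not help with the hostname confirmation, which still needs real input. A minimal illustration of the pattern, with a hypothetical `confirm_twice` function standing in for opatch:

```shell
# "yes y" prints an endless stream of "y" lines; piping it into an
# interactive command auto-answers each y/n prompt.
# confirm_twice is a stand-in for opatch: it reads two answers.
confirm_twice() {
  read -r a1
  read -r a2
  echo "answers: $a1 $a2"
}
yes y | confirm_twice
```

The `yes` process exits on SIGPIPE once the consumer stops reading, so nothing is left running afterwards.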

If OPatch reports conflicts, it will ask you to run a command similar to this:

opatch napply /softw/app/sw/patches/7573282 \
  -id 4693355,6052226,6163771,6200820,6378112,7196894,7378661,7378735,7552067,7552082,7573282 \
  -skip_duplicate

7612639
As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 7612639
opatch napply -skip_subset -skip_duplicate
exit

NOTE: this step requires user intervention to answer 'y' and to confirm the node hostnames.

NOTE: because of the interactive prompts and the number of patches in this bundle, this step takes a long time.
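For a session this long it is worth keeping a log of the OPatch output while still watching it on the terminal; `tee` does both at once. A sketch of the pattern, with a hypothetical `long_task` function standing in for the `opatch napply` run:

```shell
# Run a long command, writing its combined output to a log file
# while also echoing it to the terminal.
# long_task is a stand-in for the opatch napply session.
long_task() {
  echo "applying patches..."
  echo "OPatch succeeded."
}
log=/tmp/opatch.$$.log
long_task 2>&1 | tee "$log"
grep -q "OPatch succeeded." "$log" && echo "log captured"
rm -f "$log"
```

The log also gives you something to grep afterwards when verifying which patches went in.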

7710551
As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 7710551
opatch apply -silent
exit

NOTE: Ignore the following warning:

The following warnings have occurred during OPatch execution:
1) OUI-67078:Interim patch 7710551 is a superset of the patch(es) [ 7691766 ] in OH /softw/app/oracle/product/10.2.0/db

OPatch Session completed with warnings.

5597450
As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 5597450
opatch apply -silent
exit

Expected result: OPatch succeeded.

6620371
As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 6620371
opatch apply -silent
exit

Expected result: OPatch succeeded.

5579764
As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 5579764
opatch apply -silent
exit

Expected result: OPatch succeeded.

5476091
As 'oracle' from node 1 (this will apply to all nodes):

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch
cd /softw/app/sw/patches ; cd 5476091
opatch apply -silent
exit

Expected result: OPatch succeeded.

Verify Patches
NOTE: As the Oracle user (unless otherwise specified)

List the patches installed in a given Oracle home:

export ORACLE_HOME=...(db/crs)
$CRS_HOME/OPatch/opatch lsinventory
opatch lsinventory -oh $ORACLE_HOME

To apply patches, read the README.TXT included with each patch.

Verify specified patches:

su - oracle
export ORACLE_OWNER=oracle
export ORACLE_BASE=/softw/app/oracle
export ORADATA=/softw/app/oracle/oradata
export CRS_HOME=/softw/app/oracle/product/10.2.0/crs
export DB_HOME=/softw/app/oracle/product/10.2.0/db
export ORACLE_HOME=$DB_HOME
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$PATH:$ORACLE_HOME/OPatch

opatch lsinventory > patches.txt

for i in 7715304 7552042 7612639 7710551 5597450 6620371 5579764 5476091 ; do
  if grep -q "$i" patches.txt ; then
    echo "$i yes"
  else
    echo "$i no"
  fi
done
echo "Note: 7552042 is within the 7573282 bundle"

Reboot all servers
for i in $( seq 2 8 ) ; do ssh 172.17.6.4$i "reboot" ; done
reboot
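After a mass reboot it helps to wait until each node answers again before restarting the patch checks. A minimal retry sketch; `wait_for` and `check` are hypothetical helpers, and in practice the probe would be something like `ssh 172.17.6.41 true`:

```shell
# Retry a command until it succeeds, or give up after N attempts.
# Usage: wait_for <attempts> <command> [args...]
wait_for() {
  local tries=$1 ; shift
  local i
  for i in $( seq 1 "$tries" ) ; do
    if "$@" ; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Demonstration with a stand-in check that succeeds on the 3rd attempt:
n=0
check() { n=$((n+1)) ; [ "$n" -ge 3 ] ; }
wait_for 5 check && echo "node is back"
```

Looping `wait_for` over all eight node addresses gives a simple "cluster is reachable again" gate.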

keywords
Oracle RAC