Migrating a 2-Node RAC Cluster to New H/W and Adding a Failed Node Back into the RAC Cluster (#OracleRACAddNode, #OracleRAC)


A RAC cluster database that was set up on 2 nodes needs to be migrated to new H/W. To add to this, the 2nd node of the RAC cluster died just before this activity started, and it cannot be brought back because of the H/W failure.

Path chosen for this activity (at a high level):

  1. Complete server migration from ‘node1 and node2’ to ‘node3 and node4’
  2. Migrate the VIPs and IPs of node1/2 to node3/4
  3. Flip the hostnames between node1/2 and node3/4
  4. Rsync the OH file systems from node1 to node3 (as node2 is dead, we cannot rsync to node4)
  5. Move the ASM disk groups from the old servers to the new ones
  6. Start the DB on node1 (new)
  7. Delete the 2nd node from the cluster and DB
  8. Add the 2nd node back to the cluster and DB

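Step 4 above is an ordinary file-system copy. A minimal sketch, assuming a typical /u01/app Oracle base path (the actual mount points are not stated in this post):

```shell
# Hypothetical paths -- replace /u01/app with the actual Oracle base.
# Copy the Grid and RDBMS homes from the surviving old node to the new one,
# preserving permissions, hard links, and symlinks.
rsync -avH --numeric-ids /u01/app/ oracle@node3:/u01/app/
```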
Detailed Steps:

The first 5 steps will be taken care of by the Unix admins.

Start the DB on node1:

We faced quite a few issues; the details are below.

  1. Services will not come up automatically, because the CRS services need to be manually configured/registered with the OS init processes (init.d)
  2. The OHAS services were down; to bring them up, we performed the below:
    1. Deconfigure the problem node with rootcrs.pl:
      ./rootcrs.pl -verbose -deconfig -force
    2. Start ohasd manually – no luck even after running:
      nohup ./init.ohasd run &
    3. Run root.sh from $GRID_HOME

This brought up the OHAS daemon.
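Putting sub-steps 1–3 together, the recovery sequence looked roughly like this (run as root; $GRID_HOME is assumed to be set, and the rootcrs.pl location assumes a standard Grid home layout):

```shell
# Run as root on the affected node.
cd $GRID_HOME/crs/install
./rootcrs.pl -verbose -deconfig -force   # deconfigure the broken CRS stack

cd $GRID_HOME
nohup ./init.ohasd run &                 # try starting ohasd by hand (did not help here)
./root.sh                                # reconfigure CRS -- this brought ohasd up
```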

  3. Starting the CRS services:

    1. The CRS services didn’t come up, and we noticed that the network interfaces were not properly set up.

Fix: requested the Unix admins to match the network interfaces to those of the old servers.

Always refer to “$GRID_HOME/gpnp/profiles/peer/profile.xml” and check that /sbin/ifconfig shows entries for the interfaces mentioned in profile.xml.
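A quick way to make that comparison, assuming the standard GPnP profile location (a sketch, not from the original post):

```shell
# Interfaces the cluster expects, per the Adapter attributes in the GPnP profile:
grep -o 'Adapter="[^"]*"' $GRID_HOME/gpnp/profiles/peer/profile.xml

# Interfaces the OS actually has:
/sbin/ifconfig -a | awk '/^[a-zA-Z]/ {print $1}'
```

If an adapter named in the profile does not appear in the ifconfig output, CRS will not start until the interfaces are renamed or re-plumbed to match.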

Restarted the CRS services:

./crsctl stop crs -f

./crsctl start crs

This started CRS, CTSS, and the monitoring daemons, and also the database.
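With the stack up, its state can be confirmed from $GRID_HOME/bin using standard clusterware checks (not specific to this post):

```shell
./crsctl check crs        # CRS, CSS, and EVM daemon health
./crsctl check ctss       # Cluster Time Synchronization Service
./crsctl stat res -t      # tabular view of all resources, including the database
```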

Now that we have one node of the RAC cluster up and running, we need to remove the 2nd node (the dead node) from the cluster and add it back.

Delete 2nd node from Cluster and DB:

1. Delete the node from the cluster, executing the command from the 1st node as below:


[root@node1 bin]# ./crsctl delete node -n node2

CRS-4661: Node node2 successfully deleted.
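If the dead node’s VIP is still registered after the delete, the usual cleanup is to stop and remove it as well. A hedged sketch (the VIP name node2-vip is assumed; run as root from $GRID_HOME/bin on the surviving node):

```shell
./srvctl stop vip -i node2-vip -f     # -f forces the stop even though the node is gone
./srvctl remove vip -i node2-vip -f
```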

2. Update the node list with the available node in the CRS home:

./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={node1}" CRS=true

3. Update the node list with the available node in the RDBMS home:

./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={node1}"
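The section heading says “Cluster and DB”, but only the cluster side is shown above. On the database side, the dead node’s instance typically also has to be removed from the configuration — a hedged sketch, assuming a database named orcl with instance orcl2 on node2 (these names are not given in the post):

```shell
# Run as the oracle user from $ORACLE_HOME/bin on the surviving node.
# Remove the dead node's instance from the database configuration.
./srvctl remove instance -d orcl -i orcl2
```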

Once the deletion is done, add the node back to the cluster.

Add 2nd node to Cluster and DB:

Use the addnode.sh command on the 1st node to add the 2nd node.

Navigate to $GRID_HOME/addnode and execute the below command:

./addnode.sh -silent "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"
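addnode.sh only copies and registers the Grid home; finishing the add typically involves running root.sh on the new node, extending the RDBMS home, registering the instance, and verifying. A rough sketch under the same assumed names (orcl/orcl2 are hypothetical):

```shell
# 1. As root on node2, wire the new node into the cluster:
$GRID_HOME/root.sh

# 2. Extend the RDBMS home to node2 (run from $ORACLE_HOME/addnode on node1):
./addnode.sh -silent "CLUSTER_NEW_NODES={node2}"

# 3. Register the instance and verify the node addition:
srvctl add instance -d orcl -i orcl2 -n node2
cluvfy stage -post nodeadd -n node2
```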


