Add/Extend Nodes to an Existing 11g RAC Cluster
Recently I needed to extend one of our single-node clusters with one additional node.
The host was prepared and ready to be added to the cluster.
First of all, we checked the node's readiness for the addition as follows, and proceeded further only after those checks passed:
[oracle@drracnode3 bin]$ cluvfy stage -pre nodeadd -n drracnode2 -verbose
[oracle@drracnode3 bin]$ cluvfy stage -pre crsinst -n drracnode2 -verbose
[oracle@drracnode3 ~]$ cd /u01/app/11.2.0.2/grid/oui/bin/
[oracle@drracnode3 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@drracnode3 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={drracnode2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={drracnode2-vip}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1323 MB Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.
Performing tests to see whether nodes drracnode2 are available
............................................................... 100% Done.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0.2/grid
   New Nodes
Space Requirements
   New Nodes
      drracnode2
         /: Required 3.58GB : Available 22.01GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.2.0
      Sun JDK 1.5.0.24.08
      Installer SDK Component 11.2.0.2.0
      Oracle One-Off Patch Installer 11.2.0.0.2
      Oracle Universal Installer 11.2.0.2.0
      Oracle Recovery Manager 11.2.0.2.0
............................................................... 100% Done.
Instantiating scripts for add node (Thursday, April 19, 2012 1:35:38 PM IST)
Instantiation of add node scripts complete
Copying to remote nodes (Thursday, April 19, 2012 1:35:41 PM IST)
.....................................................................................
Home copied to new nodes
Saving inventory on nodes (Thursday, April 19, 2012 1:50:54 PM IST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/11.2.0.2/grid/root.sh #On nodes drracnode2
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0.2/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
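Once root.sh has been run on the new node, cluvfy can also verify the node addition after the fact. This step is not part of the original transcript but is a sanity check we'd suggest; the grid home and node name below follow this cluster's layout:

```shell
# Hedged sketch (not from the original run): post-addnode verification.
# Run after root.sh completes on the new node.
cd /u01/app/11.2.0.2/grid/bin

# Confirm the new node was wired into the cluster correctly
./cluvfy stage -post nodeadd -n drracnode2 -verbose
```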
[root@drracnode2 ~]# /u01/app/11.2.0.2/grid/root.sh
Running Oracle 11g root script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0.2/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.2/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node drracnode3, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
After waiting for some time, we checked the crsd.log file and it showed that the ASM instance could not be mounted, failing with ORA-01031: insufficient privileges.
Here the OS user 'oracle' was not part of the dba group, hence the insufficient-privileges error was thrown. We fixed the group membership as root:
# usermod -g oinstall -G dba oracle
Another point: you don't really need a password file for this ASM instance unless you are doing password authentication, so you can remove it if needed.
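If password-file authentication is ever needed again later, the ASM password file can be recreated with the orapwd utility. This sketch is not from the original post; the grid home path and the +ASM2 SID are assumptions taken from this environment, and the password is a placeholder:

```shell
# Hedged sketch: recreating the ASM password file with orapwd.
# Convention: the file is named orapw<SID> under $ORACLE_HOME/dbs.
# Path and SID assumed from this environment; password is a placeholder.
export ORACLE_HOME=/u01/app/11.2.0.2/grid
$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapw+ASM2 password=ChangeMe123 entries=5
```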
Issue -
In 11.2, when logging into ASM using SYSDBA you receive the following error:
sqlplus '/ as sysdba'
SQL*Plus: Release 11.2.0.1.0 Production on Thu Oct 22 13:32:50 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Automatic Storage Management option
SQL> shutdown immediate;
ORA-01031: insufficient privileges
Cause
Commands run with the SYSDBA privilege on ASM 11g Release 1 and below have been deprecated in Release 2. Starting with 11g Release 2, Oracle ASM administration must be done with the SYSASM privilege.
Solution
You now need to connect as SYSASM to perform any administrative operations on the ASM instance; connecting as SYSDBA instead raises the ORA-01031: insufficient privileges error shown above.
Hence we connected to ASM as SYSASM and started the ASM instance manually:
[oracle@drracnode3 ~]$ cd /u01/app/11.2.0.2/grid/bin/
[oracle@drracnode3 bin]$ . oraenv
ORACLE_SID = [oracle] ? +ASM2
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@drracnode3 bin]$ ./sqlplus / as sysasm
SQL*Plus: Release 11.2.0.2.0 Production on Thu May 10 14:37:59 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
SQL> startup
SQL> exit;
Once ASM was up and running, we started the CRS as follows:
[oracle@drracnode3 ~]$ crsctl start crs
and CRS came up successfully on the node.
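To confirm the stack is healthy on both nodes after the restart, a quick status check helps. These are standard crsctl/olsnodes invocations rather than output from the original session; the grid home path matches this cluster:

```shell
# Hedged sketch (not from the original log): verifying cluster state
# after 'crsctl start crs'.
cd /u01/app/11.2.0.2/grid/bin

# Local CRS/CSS/EVM daemon status on this node
./crsctl check crs

# Cluster-wide resource status in tabular form
./crsctl stat res -t

# List all cluster member nodes with their node numbers
./olsnodes -n
```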