Wednesday, May 2, 2012


Upgrade to Oracle Grid Infrastructure 11g Release 2

i.        Back Up the Oracle Software Before Upgrades


Before you make any changes to the Oracle software, Oracle recommends that you create a backup of the Oracle software and databases.

ii.        Unset Oracle Environment Variables


If you have an existing installation on your system and you are using the same user account for this installation, then unset the following environment variables: ORA_CRS_HOME, ORACLE_HOME, ORA_NLS10, TNS_ADMIN, and any other environment variable set for the Oracle installation user that refers to an Oracle software home.
For details, refer to Document 952925.1 - NETCA & ASMCA Fail during Upgrade of CRS/ASM to Grid Infrastructure 11gR2
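In a bash shell, clearing these variables can be as simple as the following sketch (the variable list comes from above; add any site-specific ones):

```shell
# Clear Oracle-related variables for the installation user (bash assumed)
unset ORA_CRS_HOME ORACLE_HOME ORA_NLS10 TNS_ADMIN

# Confirm nothing Oracle-related is still set
env | grep -iE 'ora_|oracle|tns_' || echo "no Oracle variables set"
```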

iii.        Restrictions for Clusterware and Oracle ASM Upgrades


·         To upgrade existing Oracle Clusterware installations to Oracle Grid Infrastructure 11g release 2, your release must be at least 10.1.0.5, 10.2.0.3, 11.1.0.6, or 11.2.
·         To upgrade existing 11.2.0.1 Oracle Grid Infrastructure installations to Oracle Grid Infrastructure 11.2.0.2, you must first verify whether you need to apply any mandatory patches for the upgrade to succeed.
·         Oracle Clusterware and Oracle ASM upgrades are always out-of-place upgrades. With 11g release 2 (11.2), you cannot perform an in-place upgrade of Oracle Clusterware and Oracle ASM to existing homes.
·         During a major version upgrade to 11g release 2 (11.2), the software in the 11g release 2 (11.2) Oracle Grid Infrastructure home is not fully functional until the upgrade is completed. Running srvctl, crsctl, and other commands from the 11g release 2 (11.2) home is not supported until the final rootupgrade.sh script is run and the upgrade is complete across all nodes.
·         Check "Oracle 11gR2 Upgrade Companion" MOS Note 785351.1

iv.        Preparing to Upgrade an Existing Oracle Clusterware Installation


·         Run the Cluster Verification Utility on each node to ensure that you have completed the preinstallation steps. It can generate fixup scripts to help you prepare the servers.

·         If you have environment variables set for the existing installation, then unset ORACLE_BASE, ORACLE_HOME, and ORACLE_SID, as these environment variables are used during the upgrade. For example:
o    $ unset ORACLE_BASE
o    $ unset ORACLE_HOME
o    $ unset ORACLE_SID

·         The CSS parameter diagwait ("crsctl get css diagwait") should either be unset (the output shows "Configuration parameter diagwait is not defined") or set to a small value (13 or less) to avoid the issue in Document 1102283.1 - 11gR2 rootupgrade.sh Fails as cssvfupgd Cannot Upgrade Voting Disk.
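A sketch of the check and the corrective command, assuming the old clusterware home is /u01/app/grid/11.2.0.1 (the set command uses pre-11.2 crsctl syntax; verify the full procedure in the note above before changing this on a production cluster):

```shell
# Old clusterware home (example path; adjust to your environment)
OLD_CRS_HOME=/u01/app/grid/11.2.0.1

# Expect "Configuration parameter diagwait is not defined" or a value <= 13
$OLD_CRS_HOME/bin/crsctl get css diagwait

# Lower it if needed (pre-11.2 syntax; follow the referenced MOS note
# before running this against a live cluster)
$OLD_CRS_HOME/bin/crsctl set css diagwait 13 -force
```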

v.        Validate Readiness for Oracle Clusterware Upgrades


·         Running runcluvfy.sh with the -pre crsinst -upgrade flags performs system checks to confirm that the cluster is in a correct state for upgrading from an existing clusterware installation. For example:
./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling \
  -src_crshome /u01/app/grid/11.2.0.1 -dest_crshome /u01/app/grid/11.2.0.2 \
  -dest_version 11.2.0.2.0 -fixup -fixupdirpath /home/grid/fixup -verbose

vi.        Rolling upgrade Issues


Some bugs affect rolling upgrades, so a non-rolling upgrade is recommended where possible to avoid such issues.
Prerequisite Patch -
Patch the release 11.2.0.1 Oracle Grid Infrastructure home with the 9413827 patch, and install Oracle Grid Infrastructure Patch Set Update 1 (GI PSU1). When you apply patch 9413827, it shows up in the inventory as GI PSU2 bug 9655006.

Known Bug -

Bug 10036834 - Linux Platforms: Patches not found upgrading Grid Infrastructure from 11.2.0.1 to 11.2.0.2 [ID 10036834.8]

Background

Grid Infrastructure upgrade from 11.2.0.1 to 11.2.0.2 can fail with messages like the following on Linux and Linux 64 bit platforms:
 
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
 
The fixes for bug 9413827 are not present in the 11.2.0.1 CRS home. Apply the patches for these bugs to the 11.2.0.1 CRS home and then run rootupgrade.sh again on the failed node.
 
It has been observed that this occurs even if the patch for bug 9413827 or bug 9655006 is installed.
 
The problem is that the upgrade test actually tests for the presence of patch 9655006 and if not found reports that 9413827 is not installed.
 
Workaround
 Install patch 9655006 into the 11.2.0.1 GI home before upgrading to 11.2.0.2.
 
 


Patching Oracle GI to 11.2.0.2
Create a new directory /u01/app/11.2.0.2/grid for Oracle 11.2.0.2 GI on all nodes.
# mkdir -p /u01/app/11.2.0.2/grid
# chmod -R 775 /u01/app/11.2.0.2/
# chown -R oracle:oinstall /u01/app/11.2.0.2/
   
Download and unzip patch 10098816 into a stage directory, in my case /u01/stage/11.2.0.2.
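A possible staging sequence; the zip file name below is a placeholder for the download for your platform:

```shell
# Stage the 11.2.0.2 patch set (PLATFORM in the file name is a placeholder;
# use the actual zips you downloaded for your OS)
mkdir -p /u01/stage/11.2.0.2
cd /u01/stage/11.2.0.2
unzip -q /path/to/p10098816_112020_PLATFORM_3of7.zip   # the grid software part
```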
   
Start the installer from /u01/stage/11.2.0.2/grid. Either enter MOS credentials to check for updates or select ‘Skip software updates’. Press Next to continue.
   





Select Upgrade Oracle Grid Infrastructure or Oracle ASM and press Next to continue.


Select the language(s) you intend to support and click Next.



Select the nodes you wish to upgrade. For a standalone upgrade only one server will be available; in a cluster environment you do not have the choice to upgrade only a subset of nodes.


Keep the defaults and press Next to continue.


Enter the new location for Oracle GI 11.2.0.2 as /u01/app/11.2.0.2/grid and press Next to continue.



Wait for the prerequisite checks to complete.

Check any failures and find out why they occurred. In this case the checks failed due to lack of memory and NTP configuration, which can be safely ignored.


Review the Summary and press Install to continue.


Wait for the installation to complete.



Run the rootupgrade.sh script first on node1, then on node2, and so on.
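With the Grid home created earlier, the sequence looks like this (run serially, one node at a time):

```shell
# As root on node1 (do NOT start node2 until this completes):
/u01/app/11.2.0.2/grid/rootupgrade.sh

# Only after it finishes on node1, as root on node2:
/u01/app/11.2.0.2/grid/rootupgrade.sh
```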

The output of rootupgrade.sh on the cluster nodes is shown here.



Once rootupgrade.sh finishes on the first node, CRS will look like the following:
[root@drracnode3 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@drracnode3 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]
[root@drracnode3 bin]# ./crsctl query crs softwareversion
Oracle Clusterware version on node [drracnode3] is [11.2.0.2.0]

However, when you try to check CRS resources it will fail because the upgrade has not yet completed on all nodes:

[root@drracnode3 bin]# ./crsctl stat res -t
CRS-601: Internal error
RC: 5, File: clsEntityDisplay.cpp, Line: 265
CRS-4000: Command Status failed, or completed with errors.


Once rootupgrade.sh is finished on the second node, CRS will look as follows:

[oracle@drracnode2 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@drracnode2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.2.0]
[oracle@drracnode2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [drracnode2] is [11.2.0.2.0]





Wait for the Oracle GI installation to complete.





Oracle CVU failed due to memory and NTP configuration issues, which can be safely ignored.
Apart from that, all the key components are up and healthy, and the inventory is also updated properly.







Click Close to finish the upgrade.




Verify the cluster integrity on all nodes:

[oracle@drracnode2 ~]$ crsctl check cluster -all
**************************************************************
drracnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
drracnode3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Issues faced during rootupgrade.sh –
Error 1 –
Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256
Cause – This happens when patch 9900556 is not applied to the source GI home. However, Oracle Support says that if the patch is applied and the error still persists, it can be safely ignored.
Issues with ASM user group –
Error 2 –
Start of resource "ora.asm" failed
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'drracnode3'
CRS-2676: Start of 'ora.drivers.acfs' on 'drracnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'drracnode3'
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0CRS-2674: Start of 'ora.asm' on 'drracnode3' failed

Cause –

The ASM instance failed to start because the ASM disks were owned by a different group ("DBA" rather than "OINSTALL", the group used by the source GI home).
Changing the permissions on the ASM disks was therefore required, after which the ASM instance started successfully.

Error 3 –
CRS-4046 happens when running root.sh

Creating trace directory
CRS-4046: Invalid Oracle Clusterware configuration.
CRS-4000: Command Create failed, or completed with errors.
Failure initializing entries in /etc/oracle/scls_scr/racnode1
/u01/app/11.2.0.2/grid/perl/bin/perl -I/u01/app/11.2.0.2/grid/perl/lib -I/u01/app/11.2.0.2/grid/crs/install /u01/app/11.2.0.2/grid/crs/install/rootcrs.pl execution failed


Cause and Fix –
Either reboot the node or execute the following to identify any processes that are still running from the GRID_HOME:
# ps -ef | grep <grid-home>
e.g.: # ps -ef | grep "/ocw/grid"

If there are still processes running, kill them with the "kill -9" command.
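A sketch of that cleanup, assuming the Grid home path used in this article (the kill step is destructive; review the PID list first):

```shell
GRID_HOME=/u01/app/11.2.0.2/grid   # adjust to your environment

# List anything still running out of the Grid home
ps -ef | grep "$GRID_HOME" | grep -v grep

# Destructive: force-kill the leftovers after reviewing the list above
ps -ef | grep "$GRID_HOME" | grep -v grep | awk '{print $2}' | xargs -r kill -9
```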

If setting up Grid Infrastructure cluster, on the node where the error was reported, as root, execute the following:
# <grid-home>/crs/install/rootcrs.pl -deconfig -force -verbose
If setting up Grid Infrastructure Standalone (Oracle Restart), as root, execute the following:
# <grid-home>/crs/install/roothas.pl -deconfig -force -verbose

If the output is similar to the following:

..
CRS-4046: Invalid Oracle Clusterware configuration.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware          #
################################################################
ACFS-9313: No ADVM/ACFS installation detected.
Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Once the above reboot or stop is done, as root, execute root.sh from GRID_HOME:

# <grid-home>/root.sh


Upgrading the Oracle Binaries from 11.2.0.1 to 11.2.0.2
Start ./runInstaller from the 11.2.0.2 staging folder on one of the cluster nodes
            #./runInstaller
Skip the security updates



Select the “Install the DB Software only” option if you plan to upgrade the DB later


Select the nodes in the cluster on which you need to install the software



Select the Enterprise Edition

Enter the Oracle Base and Software Location




Enter OSDBA and OSOPER group

Review the prerequisite checks and fix any that failed




Review the final status page and click on Install


Check the progress



At the end you will be asked to run root.sh on the cluster nodes in a specific order; run root.sh in that order only.


Output of root.sh script execution on cluster nodes
[root@drracnode3 dbhome_2]# pwd
/u01/app/oracle/product/11.2.0/dbhome_2
[root@drracnode3 dbhome_2]# ./root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.


[root@drracnode2 dbhome_2]# pwd
/u01/app/oracle/product/11.2.0/dbhome_2
[root@drracnode2 dbhome_2]# ./root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/dbhome_2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@drracnode2 dbhome_2]#

When done, click Close to finish the install



Upgrading the DB
To run the Pre-Upgrade tool, the environment should be set like this:
$ORACLE_HOME = the Oracle home you are planning to upgrade (the old Oracle home).
$ORACLE_SID = the SID of the database being upgraded.
$PATH = should point to the original/old Oracle home.
Copy the script utlu112i.sql from the 11gR2 ORACLE_HOME/rdbms/admin to another directory, say /tmp, change to that directory, and start SQL*Plus. Run the script and view the output.
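For example (paths are from this environment; adjust to yours):

```shell
# utlu112i.sql ships with the NEW 11.2.0.2 home; run it against the database
# while the environment still points at the OLD home, as described above
NEW_HOME=/u01/app/oracle/product/11.2.0/dbhome_2
cp $NEW_HOME/rdbms/admin/utlu112i.sql /tmp
cd /tmp
sqlplus / as sysdba <<'EOF'
SPOOL utlu112i.log
@utlu112i.sql
SPOOL OFF
EOF
```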

**********************************************************************
Database:
**********************************************************************
--> name:          ORA10g
--> version:       10.2.0.2.0
--> compatible:    10.2.0.2
--> blocksize:     8192
--> platform:      Linux IA (32-bit)
--> timezone file: V4
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
WARNING: --> SYSTEM tablespace is not large enough for the upgrade.
.... currently allocated size: 560 MB
.... minimum required size: 910 MB
.... increase current size by: 350 MB
.... tablespace is NOT AUTOEXTEND ENABLED.
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 457 MB
.... AUTOEXTEND additional space required: 352 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 617 MB
.... AUTOEXTEND additional space required: 287 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
.... AUTOEXTEND additional space required: 41 MB
--> EXAMPLE tablespace is adequate for the upgrade.
.... minimum required size: 69 MB
.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
WARNING: --> "plsql_compiler_flags" old value was "INTERPRETED";
new name is "plsql_code_type" new value is "INTERPRETED"
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> "max_enabled_roles"
--> "remote_os_authent"
--> "background_dump_dest" replaced by "diagnostic_dest"
--> "user_dump_dest" replaced by "diagnostic_dest"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> Messaging Gateway            [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> Oracle Label Security        [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Data Mining                  [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
--> Oracle Application Express   [upgrade]
--> Oracle OLAP API              [upgrade]  VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 11g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
....   SYS
....   WMSYS
....   CTXSYS
WARNING: --> Database contains INVALID objects prior to upgrade.
.... The list of invalid SYS/SYSTEM objects was written to
.... registry$sys_inv_objs.
.... The list of non-SYS/SYSTEM objects was written to
.... registry$nonsys_inv_objs.
.... Use utluiobj.sql after the upgrade to identify any new invalid
.... objects due to the upgrade.
.... USER PUBLIC has 7 INVALID objects.
.... USER SYS has 1 INVALID objects.
WARNING: --> Database contains schemas with objects dependent on network packages.
.... Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.
.... USER WKSYS has dependent objects.
.... USER SYSMAN has dependent objects.
WARNING:--> A standby database exists.
.... Sync standby database prior to upgrade.
WARNING:--> recycle bin in use.
.... Your recycle bin is turned on and it contains
.... 3 object(s).  It is REQUIRED
.... that the recycle bin is empty prior to upgrading
.... your database.
.... The command:  PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.
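Two of the warnings above (the recycle bin and the stale dictionary statistics) can be cleared directly before starting the upgrade; a sketch, run as SYSDBA on the database being upgraded:

```shell
sqlplus / as sysdba <<'EOF'
-- Empty the recycle bin, as the pre-upgrade tool requires
PURGE DBA_RECYCLEBIN;

-- Refresh the dictionary statistics flagged as stale above
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EOF
```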




Go to the newly installed 11.2.0.2 Oracle home, change to the bin directory, and start DBUA:

            #./dbua
Select the database you need to upgrade



Select the required option as per your need

Select the appropriate disk group for the flash recovery area and define its size




Review the summary and click Finish

Review the progress







Check the final status and click OK

Check the final upgrade summary and click Close




Check the status of the DB

[oracle@drracnode3 upgrade1]$ . oraenv
ORACLE_SID = [oracle] ? wiuauto
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@drracnode3 upgrade1]$ srvctl  status database -d wiuauto
Instance wiuauto2 is running on node drracnode2
Instance wiuauto1 is running on node drracnode3

[oracle@drracnode3 upgrade1]$ . oraenv
ORACLE_SID = [wiuauto1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@drracnode3 upgrade1]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Mon Apr 23 15:33:07 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

Other Known Issues with the 11.2.0.2 Upgrade

Start OUI to Install GI 11.2.0.2

OUI calls CVU to check most components relevant to the clusterware installation. For common 11gR2 GI OUI errors and explanations/solutions, refer to Note 1056713.1; to turn on debugging for OUI, refer to Note 1056322.1.


Below is a list of various OUI and CVU errors/warnings and explanations/solutions.


  • Document 887471.1 - PRVF-4664 PRVF-4657: Found inconsistent name resolution entries for SCAN name
  • Document 1267569.1 - PRVF-5449 : Check of Voting Disk location "ORCL:<diskname>(ORCL:<diskname>)" failed
  • Document 1233505.1 - Checklist for PRVF-10037 : Failed to retrieve storage type for xx on node xx
  • Document 1271996.1 - 11.2.0.2 Grid Install Fails with SEVERE: [FATAL] [INS-13013], and PRVF-5640 or a Warning in "Task resolv.conf Integrity"
  • Document 1051763.1 - INS-20802 PRVF-4172 Reported after Successful Upgrade to 11gR2 Grid Infrastructure
  • Document 1056195.1 - INS-20702 Reported during 11gR2 Installation on getSharedPartitionListCVU
  • Document 970166.1 - INS-20702 "checkFreeDiskSpace" Reported During 11gR2 Installation
  • Document 1056693.1 - How to Configure NTP or Windows Time to Resolve CLUVFY Error PRVF-5436 PRVF-9652
  • Document 974481.1 - INS-20802 PRVF-9802 PRVF-5184 PRVF-5186 Reported after Successful Upgrade to 11gR2 Grid Infrastructure


Patch 11.2.0.2 GI Before Executing rootupgrade.sh

GI bundle patches/PSUs are cumulative and contain fixes for the most critical issues. It is recommended to apply the latest available bundle patch/PSU to avoid: 1) known issues that may prevent the root script from succeeding; 2) known issues that are not particular to the upgrade but could happen while the clusterware is being upgraded. At the time of this writing, PSU 2 (patch 12311357) is the latest one.

The latest PSU is GI PSU 11.2.0.2.5 (Patch 13653086), which includes DB PSU 11.2.0.2.5 (patch 13343424).

To apply patches to the 11.2.0.2 GI home before the rootupgrade.sh script is executed, use only the "opatch napply" command. For example, to apply PSU 2, execute the following on all nodes as the grid user:

$ <11.2.0.2GI_HOME>/OPatch/opatch napply -oh <11.2.0.2GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/12311357
$ <11.2.0.2GI_HOME>/OPatch/opatch napply -oh <11.2.0.2GI_HOME> -local <UNZIPPED_PATCH_LOCATION>/11724916

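After applying, the inventory can be checked with opatch lsinventory (the GI home path below is an example):

```shell
GI_HOME=/u01/app/11.2.0.2/grid   # the new 11.2.0.2 GI home

# Both patch numbers from the napply commands above should be listed
$GI_HOME/OPatch/opatch lsinventory -oh $GI_HOME | grep -E '12311357|11724916'
```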

  • One of the top issues that causes the rootupgrade.sh script to fail in 11.2.0.2 is multicast not working for group 230.0.1.0. Patch:9974223 introduces support for the additional group 224.0.0.251 and has been included in bundle 1 and above; refer to Document 1212703.1 for more details. Patch:9974223 may cause issues on AIX; refer to Document 1329597.1 for solutions.
  • The failure to start HAIP may cause the rootupgrade.sh script to fail. Bug:11077756 allows the script to continue even if HAIP fails to start; the fix has been included in bundle 2 and above. For more issues in the HAIP area, refer to Document 1210883.1.
  • Failure to install the Oracle Kernel Services driver could cause the rootupgrade.sh script to fail; for details, refer to Document 1265276.1 - ACFS-9327 ACFS-9121 ACFS-9310: ADVM/ACFS installation failed
  • ora.crf resource may fail to start on Solaris with "CRS-2674: Start of 'ora.crf' on '<nodename>' failed", refer to Document 1289922.1 for details
  • Watch the root script output for "Configure Oracle Grid Infrastructure for a Cluster ... failed"; if it shows up, the root script has failed and corrective action is needed. Refer to Document 969254.1 for details.

Execute rootupgrade.sh

When switching to the root user to execute rootupgrade.sh, "su -" or "su - root" provides the full root environment, while sudo, pbrun, "su root", "su", or similar facilities do not always do the same. It is recommended to execute rootupgrade.sh with the full root environment to avoid the issues documented in the following notes:


  • Document 1315203.1 - ACFS Drivers Fail To Install During Root.Sh Execution Of 11.2.0.2 GI Standalone On AIX
  • Document 1235944.1 - 11gR2 root.sh Fails as crsd.bin Does not Come up due to Wrong ulimit
  • Document 1259874.1 - root.sh Fails as the ora.asm resource is not ONLINE or PROTL-16 due to Wrong umask
  • Document 1141963.1 - 11gR2 rootupgrade.sh Fails as Environment Variable PATH Points to Wrong crsctl First

If rootupgrade.sh fails, refer to Document 1050908.1 and Document 1053970.1 for troubleshooting steps.
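One way to guarantee the full root environment in a single step (the Grid home path is an example):

```shell
# "su -" gives root's own PATH, ulimit, and umask; plain "su" or sudo may not
su - root -c /u01/app/11.2.0.2/grid/rootupgrade.sh
```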


Patch 11.2.0.2 GI After Executing rootupgrade.sh

Here is a list of known critical issues affecting GI 11.2.0.2; many of them have been fixed by the latest PSU.


  • Bug:11871469 - ORAAGENT CHECK TASK IS TIMING OUT WHICH IS FORCING THE AGENT TO ABORT AND EXIT, fixed in 11.2.0.3, 12.1 and one-off Patch:12347844 exists
  • Bug:10034417 - OHASD.BIN TAKING 95-100% CPU ON AN IDLE SYSTEM, fixed 11.2.0.2 Bundle2, 11.2.0.3
  • Bug:10374874 - RBAL GOT UNRESPONSIVE WAITING FOR A RESPONSE FROM OCSSD, fixed in 11.2.0.3, one-off patches exist
  • Bug:10131381 - PROCESS PERSISTS AFTER INSTANCE SHUTDOWN, fixed in 11.2.0.2 Bundle1, 11.2.0.3
  • Bug:9336825 - Repeated error "CRS-2332:Error pushing GPnP profile to "mdns:service:gpnp._tcp.local.://racnode1:16739/agent=gpnpd,cname=crs,host=racnode1,pid=17182/gpnpd h:racnode1 c:crs"" in clusterware alert<nodename>.log, fixed in 11.2.0.2 bundle2, 11.2.0.3
  • Bug:9897335 - Instance alert.log Flooded With "NOTE: [emcrsp.bin@racnode1 (TNS V1-V3) 3159] opening OCR file", refer to Document 1307063.1 for details.
  • Bug:10056593 - Failed to add (property/value):('OLD_OCR_ID/'-1') for checkpoint:ROOTCRS_OLDHOMEINFO.Error code is 256, fixed in 11.2.0.3, the warning is ignorable
  • Bug:10190153 - ORA.CTSSD AND ORA.CRSD GOES OFFLINE AFTER KILL GIPC ON CRS MASTER, fixed in 11.2.0.2 Bundle3, 11.2.0.3
  • Bug:10371451 - CSSD aborting from thread GMClientListener, refer to Document 1306137.1 - "ocssd.bin Fails to Start: clssgmPeerListener: connected to 1 of 3" for details

For known issues that affect each bundle, refer to Document 1272288.1 - 11.2.0.2.X Grid Infrastructure Bundle Known Issues


Miscellaneous


Bug:10205230 - ORA-600 or Data Corruption possible during shutdown normal/transactional/immediate of RAC instances in a rolling fashion, refer to Document 1318986.1 for details

Bug:10121931 - DBCA CONFIGURE DATABASE OPTION DISABLED IF 11201 DATABASE PRESENT, refer to Document 10121931.8 for more details, one-off patches exist

Bug:11069614 - RDBMS INSTANCE CRASH DUE TO SLOW REAP OF GIPC MESSAGES ON CMT SYSTEMS, refer to Document 1287709.1 - "ocssd.bin High CPU Usage and Instance Crashes With ORA-29770" for details

Refer to Document 1179474.1 for 11.2.0.2 Patch Set Availability and Known Issues.

Refer to Document 948456.1 for Known Issues Between Pre-11.2 Database and 11gR2 Grid Infrastructure.

Refer to Document 810394.1 for RAC and Oracle Clusterware Starter Kit and Best Practices.