Tuesday, September 9, 2014



Convert Single Instance DB to RAC

 Why do we need this?


We regularly get requests from the Apps team to load huge amounts of data from Production into Dev environments. There are several ways to do this, such as Data Pump, passive or active database duplication, or restoring from a Production backup. Each has one or more drawbacks: efficiency, dependency on another team, or the need for configuration changes, and these factors are not always under the DBA's control. Hence we duplicate the database from a backup, which doesn't involve the NetBackup tool.

DB duplication works only for single-instance databases; it does not support RAC databases. So when you have to create a replica of a production RAC configuration, chances are good that you won't be able to use DB duplication directly. To get around this, we first duplicate to a single-instance database and then convert that single instance to RAC.

What are the ways?

There are three ways to do it. Two of them use tools provided by Oracle (easy and safe); the third is a manual conversion.

1.       RCONFIG

2.       DBCA

3.       Manual Conversion


RCONFIG & DBCA are great tools, and are especially useful when your database files reside on normal disks (non-shared storage). When you start the conversion, along with converting the database, these tools also copy the files from non-shared storage to shared storage. That saves the DBA the effort of moving the files around.

Why Manual Conversion?

As stated, RCONFIG & DBCA create an additional copy of the datafiles. When dealing with sizes of a few hundred GB, this copy takes the most time of the whole process. But what if the single-instance database already keeps its files on shared storage? In that scenario the whole copy step is pointless, since we are already on shared storage. To skip it, manual conversion is the preferred method in our case.




Process of Conversion –

The following process was used to convert a database to RAC on the RAC1 cluster. Please substitute the parameter file names and other init parameters as per your environment. There are certain application-specific parameters (copied from Prod) that you may not need, but for example purposes I have kept them intact.



1.       To start the manual conversion, first identify the parameters needed for a cluster database, along with any parameters you need to carry over from the Prod DB.

Connect to the database that you wish to convert and create a pfile, if one is not already there.

SQL> create pfile from spfile;

SQL>  shut immediate;



2.       Open the parameter file and add the following cluster-specific parameters. Here we are converting the database to a 2-instance RAC.

### Cluster specific parameters

*.cluster_database = TRUE

*.cluster_database_instances = 2

*.undo_management=AUTO

HSPRF1.INSTANCE_NUMBER=1

HSPRF2.INSTANCE_NUMBER=2

HSPRF2.instance_name=HSPRF2

HSPRF1.instance_name=HSPRF1

HSPRF1.THREAD=1

HSPRF2.THREAD=2

HSPRF1.UNDO_TABLESPACE=UNDOTBS1

HSPRF2.UNDO_TABLESPACE=UNDOTBS2

*.control_files='+HS_RAC1/HSPRF/controlfile/current.536.855600671'

### Application  specific parameters

*._gc_read_mostly_locking=FALSE

*.cpu_count=4

*.memory_target=4G

*.open_cursors=300

*.parallel_force_local=TRUE

*.parallel_max_servers=8

*.parallel_servers_target=4

*.processes=600

*.recyclebin='OFF'

*.undo_retention=900

*.remote_listener='rac1.xxx.com:1529'

 Save the file as initHSPRF1.ora.

Also note that the control file location needs to be modified as per your environment.
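If you are unsure of the exact controlfile path inside ASM, you can read it from the source database before shutting it down. A quick check along these lines (run while the instance is still open):

```sql
-- Run on the single-instance DB before shutdown to capture the
-- controlfile path for the new pfile
SELECT name FROM v$controlfile;
```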


 3.       Now create the spfile from the pfile.



[oracle@rac1b dbs]$ . oraenv

ORACLE_SID = [HSPRF1] ?

 [oracle@rac1b dbs]$ sqlplus  / as sysdba

Connected to an idle instance.



create spfile='+HS_RAC1/HSPRF/spfileHSPRF.ora' from pfile;

File created.



4.       Now save the existing initHSPRF1.ora file under a different name.



[oracle@rac1b dbs]$  mv initHSPRF1.ora initHSPRF1.ora.bkp

## Add the following line to the new init file.

[oracle@rac1b dbs]$ vi initHSPRF1.ora

spfile='+HS_RAC1/HSPRF/spfileHSPRF.ora'



Save and exit.



5.       Create a password file for the instance.



[oracle@rac1b dbs]$  orapwd file=orapwHSPRF1 password=XXXX





6.       Start the database in mount mode using the parameter file.



SQL> startup mount pfile=initHSPRF1.ora

ORACLE instance started.

…..

Database mounted.



7.       Now recreate the existing redo log groups against thread 1 and add new groups for thread 2. Please make sure the log group you are dropping is not the CURRENT one.



   SQL> ALTER DATABASE DROP LOGFILE GROUP 1;

   SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 1;



   SQL> ALTER DATABASE DROP LOGFILE GROUP 2;

   SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 2;



   SQL> ALTER DATABASE DROP LOGFILE GROUP 3;

   SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3;

   SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4;
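Before running the drops above, it is worth checking which group is CURRENT; attempting to drop the CURRENT group fails with ORA-01623. A quick check:

```sql
-- Check the thread and status of each group; if a group you intend to
-- drop shows CURRENT, switch away from it first (the database must be
-- open to issue ALTER SYSTEM SWITCH LOGFILE)
SELECT group#, thread#, status FROM v$log;
```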



8.       Open the database and enable thread 2.



 SQL> ALTER DATABASE OPEN;

 Database altered.

SQL> ALTER DATABASE ENABLE THREAD 2;
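To confirm the new thread was enabled, you can query v$thread; something like:

```sql
-- After the ALTER, thread 2 should appear with its ENABLED column
-- no longer showing DISABLED
SELECT thread#, status, enabled FROM v$thread;
```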



9.       Create the second UNDO tablespace for instance 2, if it doesn't already exist.


SQL>  CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '+HS_RAC1/HSPRF' SIZE 500M AUTOEXTEND ON NEXT 1m MAXSIZE 30G;
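To check whether a suitable undo tablespace already exists before creating one:

```sql
-- List existing undo tablespaces; the name must match
-- HSPRF2.UNDO_TABLESPACE in the pfile (UNDOTBS2 here)
SELECT tablespace_name, contents FROM dba_tablespaces WHERE contents = 'UNDO';
```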



10.   Add temp files if they are not already there.



SQL> alter tablespace temp  add tempfile '+HS_RAC1' size 50m autoextend on next  1m maxsize 30G;



11.   Run the following script to create the cluster database views, grants and synonyms.



SQL> @?/rdbms/admin/catclust.sql

View created.

Grant succeeded.

……

…..

Synonym created.

PL/SQL procedure successfully completed.
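A simple sanity check after catclust.sql is to query one of the cluster-related views, for example:

```sql
-- Should list the instance(s) currently known to the cluster
SELECT * FROM v$active_instances;
```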



12.   On the second node, set ORACLE_HOME & ORACLE_SID for the second instance and create its init file.



[oracle@rac1c dbs]$ . oraenv

ORACLE_SID = [HSPRF2] ?

[oracle@rac1c dbs]$ vi initHSPRF2.ora

spfile='+HS_RAC1/HSPRF/spfileHSPRF.ora'



Save and exit.



13.   Create a password file for the second instance.



[oracle@rac1c dbs]$  orapwd file=orapwHSPRF2 password=XXXX



Start the second instance.

SQL> startup pfile=initHSPRF2.ora

ORACLE instance started.



Database mounted.

Database opened.



14.   Register the database & instances with OCR



[oracle@rac1c dbs]$  srvctl add database -d HSPRF -o /u01/app/oracle/product/11.2.0/dbhome_1  -r primary -s OPEN -y AUTOMATIC -p '+HS_RAC1/HSPRF/spfileHSPRF.ora'



[oracle@rac1c dbs]$  srvctl add instance -d HSPRF -i HSPRF1 -n rac1b

[oracle@rac1c dbs]$  srvctl add instance -d HSPRF -i HSPRF2 -n rac1c



15.   Shut down both instances via SQL*Plus and start them using srvctl.



[oracle@rac1c dbs]$  srvctl start database  -d HSPRF
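Once started through srvctl, you can confirm both instances are open from either node; a quick check:

```sql
-- Both HSPRF1 and HSPRF2 should report OPEN
SELECT inst_id, instance_name, host_name, status FROM gv$instance;
```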




Errors & Troubleshooting –



While adding the DB and instances to OCR, I came across the following issue. This issue is a registered bug in 11.2.0.2 that was fixed in 11.2.0.3. I'm not sure why we hit it, as we are already on 11.2.0.3.


Error –



[oracle@rac1b dbs]$ srvctl add database -d HSPRF -o /u01/app/oracle/product/11.2.0/dbhome_1 -c RAC -r primary -s OPEN -y AUTOMATIC -p '+HS_RAC1/HSPRF/spfileHSPRF.ora'

PRCS-1007 : Server pool HSPRF already exists

PRCR-1086 : server pool ora.HSPRF is already registered



# Whilst checking if we had any server pool with the same name



[oracle@rac1b dbs]$ srvctl  status srvpool -g HSPRF

PRKO-3160 : Server pool HSPRF is internally managed as part of administrator-managed database configuration and therefore cannot be queried directly via srvpool object.



MOS document - PRCS-1007 PRCR-1086 - Resources in OCR Are not Cleaned up Completely (Doc ID 1108023.1)

As per the document, the workaround was as follows, but it did not work in our case.



 crsctl delete resource ora.HSPRF

OR

crsctl delete serverpool ora.HSPRF -f



[oracle@rac1b bin]$ /u01/app/11.2.0/grid/bin/crsctl delete serverpool HSPRF

CRS-2553: Server pool 'HSPRF' cannot be unregistered as it does not exist

CRS-4000: Command Delete failed, or completed with errors.

[oracle@rac1b bin]$



Fix –



# Upon checking the CRS resources, I found a stale database resource registered under the ora.HSprf.db name and deleted it.

[grid@rac1a ~]$ crs_stat | grep db

NAME=ora.ccdb.db

NAME=ora.ctlmnp.db

NAME=ora.gctest.db

NAME=ora.siprd.db

NAME=ora.test.db

NAME=ora.HSprf.db

[grid@rac1a ~]$ crsctl delete resource ora.HSprf.db

[grid@rac1a ~]$  crsctl delete serverpool ora.HSPRF


# Once done, the server pool was deleted and I was able to add the database back.