Monday, March 31, 2014



RAC Primary to RAC Standby Configuration


Recently we needed to set up a physical standby database for one of our RAC clusters, with the standby site also running RAC. So I decided to put up a note on the topic.

Primary Site SCAN - rac2.xxx.com
Primary DB Service - TROY
Primary Site Nodes - rac2e, rac2f


Standby Site SCAN - devrac2.xxx.com
Standby DB Service - HOFFMAN
Standby Site Nodes - devrac2, devrac3


DB Name - DGTEST
Instance Names - DGTEST1, DGTEST2.

First, one needs to prepare the primary database for the standby configuration. Several parameters need to be set, and none of these changes requires downtime.

On Primary -

alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(TROY,HOFFMAN)' scope=both sid='*';

alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=troy' scope=both sid='*';

alter system set LOG_ARCHIVE_DEST_2='SERVICE=hoffman ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=hoffman' scope=both sid='*';

alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';

alter system set log_archive_max_processes=8 scope=both sid='*';

alter system set fal_server=hoffman scope=both sid='*';

alter system set db_file_name_convert='troy','hoffman' scope=spfile sid='*';

alter system set log_file_name_convert='troy','hoffman' scope=spfile sid='*';

alter system set standby_file_management=AUTO scope=both sid='*';
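As a quick sanity check (this query is a suggestion of mine, not part of the original procedure), you can verify that the Data Guard parameters took effect on all RAC instances:

```sql
-- Verify the Data Guard parameters across the whole cluster;
-- gv$parameter aggregates v$parameter from every RAC instance.
select inst_id, name, value
  from gv$parameter
 where name in ('log_archive_config', 'log_archive_dest_2',
                'fal_server', 'standby_file_management')
 order by inst_id, name;
```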

After these changes, the parameter file will look like the following.
===========================================
*.audit_file_dest='/u01/app/oracle/admin/troy/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+REPORTING_DATA/troy/controlfile/current.279.839829147','+APP_DATA/troy/controlfile/current.13875.839829151'
*.db_block_size=8192
*.db_create_file_dest='+REPORTING_DATA'
*.db_domain=''
*.db_file_name_convert='+DATA/hoffman','+REPORTING_DATA/troy'
*.db_name='DGTEST'
*.db_recovery_file_dest='+APP_DATA'
*.db_recovery_file_dest_size=4558159872
*.db_unique_name='troy'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DGTESTXDB)'
*.fal_server='HOFFMAN'
DGTEST1.instance_number=1
DGTEST2.instance_number=2
*.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=151.149.123.248)(PORT=1525))))'
*.log_archive_config='DG_CONFIG=(troy,hoffman)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=troy'
*.log_archive_dest_2='SERVICE=hoffman ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=hoffman'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=8
*.log_file_name_convert='+DATA/hoffman','+APP_DATA/troy'
*.pga_aggregate_target=393216000
*.processes=150
*.remote_listener='rac2.xxx.com:1525'
*.remote_login_passwordfile='exclusive'
*.sga_target=1179648000
*.standby_file_management='AUTO'
DGTEST2.thread=2
DGTEST1.thread=1
DGTEST2.undo_tablespace='UNDOTBS2'
DGTEST1.undo_tablespace='UNDOTBS1'
===========================================

Add TNS entries for the primary (troy) in tnsnames.ora on all nodes of both the primary and standby sites.

TROY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2.xxx.com)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = troy)
    )
  )
TROY1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2e-vip)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = troy1)
    )
  )

TROY2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac2f-vip)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = troy2)
    )
  )

Next, one needs to add standby redo logs on the primary side (so that switchover is faster during a role reversal).

Standby log location - +app_data/troy/standbylog

alter database add standby logfile thread 1 group 5 '+app_data/troy/standbylog/standby_group_05.log' size 52M;
alter database add standby logfile thread 1 group 6 '+app_data/troy/standbylog/standby_group_06.log' size 52M;
alter database add standby logfile thread 1 group 7 '+app_data/troy/standbylog/standby_group_07.log' size 52M;
alter database add standby logfile thread 2 group 8 '+app_data/troy/standbylog/standby_group_08.log' size 52M;
alter database add standby logfile thread 2 group 9 '+app_data/troy/standbylog/standby_group_09.log' size 52M;
alter database add standby logfile thread 2 group 10 '+app_data/troy/standbylog/standby_group_10.log' size 52M;
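To confirm the standby redo logs were created with the expected thread and size, a query like the following can be used (a supplementary check, not part of the original steps):

```sql
-- Standby redo logs normally show status UNASSIGNED
-- until redo transport starts using them.
select group#, thread#, bytes/1024/1024 as size_mb, status
  from v$standby_log
 order by thread#, group#;
```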


After this, one needs to prepare the standby parameter file. The easiest way is to copy the primary parameter file and then modify it according to your requirements.
Note that here I am also converting the source disk groups to the destination disk group.

Primary DB - 
--db_create_file_dest   -  +REPORTING_DATA
--db_recovery_file_dest -  +APP_DATA

Standby DB -
--db_create_file_dest  -  +DATA
--db_recovery_file_dest-  +DATA

STANDBY PARAMETER FILE -

*.audit_file_dest='/u01/app/oracle/admin/hoffman/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/hoffman/controlfile/control01.ctl','+FRA/hoffman/controlfile/control02.ctl'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_file_name_convert='+REPORTING_DATA/troy','+DATA/hoffman'
*.db_name='DGTEST'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=4558159872
*.db_unique_name='hoffman'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DGTESTXDB)'
*.fal_server='troy'
DGTEST1.instance_number=1
DGTEST2.instance_number=2
*.log_archive_config='DG_CONFIG=(troy,hoffman)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=hoffman'
*.log_archive_dest_2='SERVICE=troy ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=troy'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=8
*.log_file_name_convert='+APP_DATA/troy','+DATA/hoffman'
*.pga_aggregate_target=393216000
*.processes=150
*.remote_listener='devrac.xxxx.com:1521'
*.remote_login_passwordfile='exclusive'
*.sga_target=1179648000
*.standby_file_management='AUTO'
DGTEST2.thread=2
DGTEST1.thread=1
DGTEST2.undo_tablespace='UNDOTBS2'
DGTEST1.undo_tablespace='UNDOTBS1'
local_listener='(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ps1.xxx.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = hoffman1)))'


Start the standby instance in NOMOUNT mode using the above parameter file.
SQL> startup nomount pfile='initstby.ora';

Next, create an spfile in the disk group from the pfile.
SQL> create spfile='+data/hoffman/parameterfile' from pfile;

The above command will add the following entry to the local parameter file.
SPFILE='+DATA/HOFFMAN/PARAMETERFILE/spfile.282.840082373'               # line added by Agent

Check the SPFILE entry to make sure it points to the correct file; otherwise, on the next restart the database will not start using the spfile.

Now one has to insert entries in the tnsnames.ora file on the standby hosts for the standby instances, as well as for the primary instances.
The following standby entries need to go on all standby hosts as well as all primary hosts.

HOFFMAN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = devrac.xxx.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hoffman)
    )
  )

HOFFMAN1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ps1-vip.xxx.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hoffman1)
    )
  )

HOFFMAN2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ps2-vip.xxx.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = hoffman2)
    )
  )

Now one has to copy the primary database to instantiate the standby. There are various ways to do this; the more frequently used ones are as follows.

1. Use an RMAN backup to instantiate the standby (the backup needs to be shipped to the standby host)

rman target / nocatalog
run
{
     sql "alter system switch logfile";
     allocate channel ch1 type disk format '/home/oracle/backup/Primary_bkp_for_stndby_%U';
     backup database;
     backup current controlfile for standby;
     sql "alter system archive log current";
}
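On the standby side, once the backup pieces have been shipped to the same path, a backup-based (targetless) duplication can restore them. The following is only a rough sketch; it assumes the backup location matches the format clause above and that the auxiliary instance is started in NOMOUNT:

```sql
-- Run from the standby host; no connection to the primary is needed
-- for backup-based duplication ("rman auxiliary /").
run
{
     duplicate database for standby backup location '/home/oracle/backup' nofilenamecheck;
}
```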

2. Use active database duplication. Connect RMAN to the primary as the target and to the NOMOUNT standby instance as the auxiliary (for example, rman target sys@troy auxiliary sys@hoffman), then:

run {

duplicate target database for standby from active database;

}

Once the database is restored and recovered as needed, one needs to start the MRP (managed recovery) process on the standby database. Make sure you start MRP on only one node.

SQL> alter database recover managed standby database disconnect from session;
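Since standby redo logs were created earlier, managed recovery could alternatively be started with real-time apply, which applies redo as it arrives in the standby redo logs rather than waiting for each archived log to complete (an optional variant of the command above):

```sql
-- Real-time apply; requires standby redo logs on the standby site
alter database recover managed standby database using current logfile disconnect from session;
```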

Check the status of the process on standby.

SQL> select inst_id, process, status ,  THREAD#, SEQUENCE# from gv$managed_standby;

On Primary - 
To check if there is any error during log shipping.

SQL> select INST_ID, DEST_NAME, STATUS, LOG_SEQUENCE, PROCESS, ERROR from gv$archive_dest where dest_name in ('LOG_ARCHIVE_DEST_1','LOG_ARCHIVE_DEST_2');

On Standby -
SQL> select INST_ID, NAME, THREAD#, SEQUENCE#, STATUS, ARCHIVED, APPLIED from gv$archived_log order by inst_id, THREAD#, SEQUENCE#;

SQL> select database_role, open_mode, SWITCHOVER_STATUS from gv$database;
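Another useful check on the standby is to look for archive log gaps; if this query returns rows, the listed sequences have to be made available to the standby manually (a supplementary check, not in the original write-up):

```sql
-- No rows returned means no gap is currently detected for any thread
select thread#, low_sequence#, high_sequence#
  from gv$archive_gap;
```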


Register standby with OCR
======================
This needs to be done on only one of the nodes.

[oracle@devrac2 dbs]$ srvctl add database -c RAC -d hoffman -n DGTEST -o /u01/app/oracle/product/11.2.0.3/db_1 -p +DATA/HOFFMAN/PARAMETERFILE/spfile.282.840082373 -r physical_standby -a DATA,FRA -s mount

[oracle@devrac2 dbs]$ srvctl add instance -d hoffman -i DGTEST1 -n devrac2
[oracle@devrac2 dbs]$ srvctl add instance -d hoffman -i DGTEST2 -n devrac3

Once registered, check the status and start the database with srvctl -

[oracle@devrac2 dbs]$ srvctl  status  database  -d hoffman
Instance DGTEST1 is not running on node devrac2
Instance DGTEST2 is not running on node devrac3
[oracle@devrac2 dbs]$ srvctl  start  database  -d hoffman
[oracle@devrac2 dbs]$ srvctl  status  database  -d hoffman
Instance DGTEST1 is running on node devrac2
Instance DGTEST2 is running on node devrac3
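One caveat worth noting (my addition, not in the original steps): starting the database through srvctl mounts it as configured, but it does not restart managed recovery unless a broker is managing the configuration. After the restart, confirm whether MRP is running and start it again on one node if needed:

```sql
-- If this returns no rows, restart managed recovery on one node with:
-- alter database recover managed standby database disconnect from session;
select inst_id, process, status
  from gv$managed_standby
 where process like 'MRP%';
```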

You now have your standby database running on both nodes (in mount mode), with MRP running on one node of your choice.