11g to 12c Upgrade/Migration Using Full Transportable Tablespace & Data Pump
With the advent of 12c, and Oracle trying to convince people to migrate to it, we decided to test-migrate one of our OEM repository databases running on 11.2.0.3 to 12c. The approach we chose is the Transportable Tablespace (TTS) + Data Pump feature.
When using conventional methods to move data, Oracle Data Pump uses either external tables or direct path unload to extract data. While the choice between these two access methods is based on the structure and types of the data being unloaded, both methods efficiently extract logical subsets of data from an Oracle database.
In contrast, a transportable move of data and indexes involves the physical movement of one or more tablespace data files. The data segments inside the tablespace data files are not read individually. Instead, the export operation extracts the metadata that describes the objects containing storage within each data file, and each file is moved as a single entity.
Moving large volumes of data using transportable tablespaces can be faster than conventional data movement because there is no need to interpret and extract individual rows of data or index entries. It is possible to move individual tables or partitions in a transportable manner, but the entire tablespace data file is moved in these cases as well. Understanding the difference between conventional and transportable data movement is helpful when considering the distinction between administrative and user tablespaces.
For the purposes of a full transportable export, administrative tablespaces are the tablespaces provided by Oracle, such as SYSTEM, SYSAUX, TEMP, and UNDO. These tablespaces contain the procedures, packages, and seed data for the core Oracle database functionality and Oracle-provided database components such as Oracle Spatial, Oracle Text, OLAP, JAVAVM, and XML Database.
In contrast, user tablespaces are those tablespaces defined by database users or applications. These may store user data, application data, and any other information defined by users of the database.
One consideration specific to full transportable export/import arises when a database object (e.g., a partitioned table) is stored across both user and administrative tablespaces. Storing an object in this way is generally not good practice, but it is possible. If there is an object with storage in both administrative and user tablespaces, then you can either redefine that object before transporting your data, or use conventional Data Pump export/import. The query below shows one way to detect this condition prior to starting a full transportable export.
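As a quick sketch (my own addition, not from the Oracle documentation), the following query flags any segment with storage in both an administrative tablespace (assumed here to be SYSTEM or SYSAUX; adjust the list for your database) and a user tablespace:

-- sketch: list segments stored in both an administrative (assumed
-- SYSTEM/SYSAUX) and a user tablespace; returns no rows when clean
select owner, segment_name
from dba_segments
where owner not in ('SYS', 'SYSTEM')
group by owner, segment_name
having count(distinct case
               when tablespace_name in ('SYSTEM', 'SYSAUX') then 'ADMIN'
               else 'USER'
             end) = 2;

If this returns any rows, redefine those objects (or fall back to conventional export/import) before transporting.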
As a first step, one needs to identify the tablespaces needed for the application data and make sure that the objects belonging to the application schemas are self-contained. If they are not, one can first migrate them to a single tablespace for easier management; however, that's not mandatory.
1. Identify the tablespaces needed for applications.
SQL> select TABLESPACE_NAME,STATUS from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
MGMT_ECM_DEPOT_TS ONLINE
MGMT_TABLESPACE ONLINE
MGMT_AD4J_TS ONLINE
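To double-check which tablespaces actually hold the repository segments, you can run a query like the one below (a sketch; it assumes the OEM repository owner is the default SYSMAN schema):

-- sketch: assumes SYSMAN is the OEM repository schema owner
select distinct tablespace_name
from dba_segments
where owner = 'SYSMAN';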
To determine whether our tablespaces are self-contained, including verification that referential integrity constraints will be valid after the transport, execute the following command on the source database.
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('MGMT_ECM_DEPOT_TS,MGMT_TABLESPACE,MGMT_AD4J_TS', TRUE);
PL/SQL procedure successfully completed.
Note that you must include all user tablespaces in the database when performing this check for a full transportable export/import.
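If the database has many tablespaces, you can build that list instead of typing it out (a sketch; the NOT IN list mirrors the tablespaces this walkthrough excludes from the export parameter file below):

-- sketch: build the comma-separated tablespace list for TRANSPORT_SET_CHECK;
-- the excluded names match the EXCLUDE clause in the export parameter file
select listagg(tablespace_name, ',') within group (order by tablespace_name) as ts_list
from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX','TEMP','UNDOTBS1','USERS','EXAMPLE');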
After invoking this PL/SQL procedure, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected
If the set of tablespaces is self-contained, this view will be empty. If any violations are listed, then you must address these issues before proceeding with the full transportable operation.
2. Put the tablespaces in read-only mode.
SQL> alter tablespace MGMT_ECM_DEPOT_TS read only;
Tablespace altered.
SQL> alter tablespace MGMT_TABLESPACE read only;
Tablespace altered.
SQL> alter tablespace MGMT_AD4J_TS read only;
Tablespace altered.
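Each of the three tablespaces should now report a READ ONLY status; a quick optional check:

-- optional sanity check: STATUS should show READ ONLY for all three
select tablespace_name, status
from dba_tablespaces
where tablespace_name in ('MGMT_ECM_DEPOT_TS','MGMT_TABLESPACE','MGMT_AD4J_TS');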
3. Export the metadata of the needed tablespaces.
expdp parfile=expdp.par
# vi expdp.par
full=Y
transportable=always
version=12
directory=DATA_PUMP_DIR
dumpfile=gpdb.dmp
metrics=y
exclude=statistics
#encryption_password=secret123word456
logfile=full_tts_export.log
EXCLUDE=TABLESPACE:"IN ('USERS', 'UNDOTBS1', 'TEMP', 'EXAMPLE')"
Final output -
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/log/gpdb.dmp
******************************************************************************
Datafiles required for transportable tablespace MGMT_AD4J_TS:
+RAC_DATA/gpdb/datafile/mgmt_ad4j_ts.339.843266579
Datafiles required for transportable tablespace MGMT_ECM_DEPOT_TS:
+RAC_DATA/gpdb/datafile/mgmt_ecm_depot_ts.341.843266557
Datafiles required for transportable tablespace MGMT_TABLESPACE:
+RAC_DATA/gpdb/datafile/mgmt_tablespace.340.843266559
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 10:03:27
4. Now, if you are using ASM to store the files, connect to the ASM instance using asmcmd and copy the files out as follows.
ASMCMD [+rac_data/gpdb/datafile] > ls -ltr
WARNING:option 'r' is deprecated for 'ls'
please use 'reverse'
Type Redund Striped Time Sys Name
DATAFILE UNPROT COARSE MAR 27 04:00:00 Y EXAMPLE.1618.843199351
DATAFILE UNPROT COARSE MAR 27 04:00:00 Y MGMT_ECM_DEPOT_TS.341.843266557
DATAFILE UNPROT COARSE MAR 27 04:00:00 Y MGMT_TABLESPACE.340.843266559
DATAFILE UNPROT COARSE MAR 27 04:00:00 Y SYSAUX.500.843199155
DATAFILE UNPROT COARSE MAR 27 04:00:00 Y UNDOTBS1.338.843265261
DATAFILE UNPROT COARSE MAR 27 04:00:00 Y UNDOTBS1.499.843199157
DATAFILE UNPROT COARSE MAR 27 04:00:00 Y USERS.1617.843199353
DATAFILE UNPROT COARSE MAR 27 09:00:00 Y MGMT_AD4J_TS.339.843266579
DATAFILE UNPROT COARSE MAR 27 10:00:00 Y SYSTEM.1624.843199155
ASMCMD [+rac_data/gpdb/datafile] > cp mgmt_ad4j_ts.339.843266579 /u02/tmp
copying +rac_data/gpdb/datafile/mgmt_ad4j_ts.339.843266579 -> /u02/tmp/mgmt_ad4j_ts.339.843266579
ASMCMD [+rac_data/gpdb/datafile] > cp mgmt_ecm_depot_ts.341.843266557 /u02/tmp
copying +rac_data/gpdb/datafile/mgmt_ecm_depot_ts.341.843266557 -> /u02/tmp/mgmt_ecm_depot_ts.341.843266557
ASMCMD [+rac_data/gpdb/datafile] > cp mgmt_tablespace.340.843266559 /u02/tmp
copying +rac_data/gpdb/datafile/mgmt_tablespace.340.843266559 -> /u02/tmp/mgmt_tablespace.340.843266559
Copy these datafiles to the target server and change the file permissions accordingly. (In this case I have configured an ACFS mount across the cluster, so I don't really need to copy them to the target host, only to the target location.)
5. Copy the dump file to the DATA_PUMP_DIR location on the target server.
DATA_PUMP_DIR - /u01/app/oracle/admin/gpdbupg/dpdump/
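If you are not sure where DATA_PUMP_DIR points on the target, look it up first:

-- resolve the OS path behind the DATA_PUMP_DIR directory object
select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR';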
Now, on the 12c database, import the dump:
impdp full=Y directory=DATA_PUMP_DIR dumpfile=gpdb.dmp logfile=impdpgpdbupg.log transport_datafiles='/u02/app/product/12.1.0/datafiles/gpdbupg/gpdbupg/mgmt_ecm_depot_ts.341.843266557','/u02/app/product/12.1.0/datafiles/gpdbupg/gpdbupg/mgmt_ad4j_ts.339.843266579','/u02/app/product/12.1.0/datafiles/gpdbupg/gpdbupg/mgmt_tablespace.340.843266559'
OR
# impdp parfile=impdp.par
# vi impdp.par
FULL=Y
DIRECTORY=DATA_PUMP_DIR
#ENCRYPTION_PASSWORD=<enc_passwd>
DUMPFILE=gpdb.dmp
LOGFILE=src112fullimp.log
#############################
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_ECM_DEPOT_TS.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_TABLESPACE.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_AD4J_TS.DBF'
#############################
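After the import completes, the plugged-in tablespaces are still in read-only mode, so set them back to read/write before releasing the database (same three tablespaces as above):

-- transported tablespaces come in read-only; open them for writes
alter tablespace MGMT_ECM_DEPOT_TS read write;
alter tablespace MGMT_TABLESPACE read write;
alter tablespace MGMT_AD4J_TS read write;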
You just finished your 11g to 12c migration successfully!! All you need to do now is allow the users to connect to this DB and start rolling!!
There is another approach to full transportable export/import, which employs the network mode of Data Pump to eliminate the need for a dump file. In this case, we will assume that the tablespace data files are in a location, such as a Storage Area Network (SAN) device, which is accessible to both the source and destination systems. This enables you to migrate from a non-CDB into a PDB with one Data Pump command:
1. Create a CDB on the destination system, including a PDB into which you will import the source database.
2. Create a database link in the destination PDB, pointing to the source database.
3. Set user tablespaces in the source database to READ ONLY. Copy the data files for tablespaces to target CDB/PDB.
4. In the Oracle Database 12c environment, import directly from the source database into the destination PDB using full transportable export/import in network mode:
SQL> alter session set container=upgdb;
Session altered.
SQL> sho user
USER is "SYS"
SQL> CREATE DATABASE LINK src112 CONNECT TO system IDENTIFIED BY oracle USING 'TEST';
Database link created.
SQL> select * from sys.link_test@src112;
N
----------
10
# Note: the application tablespaces themselves do not need to be pre-created on the target; the transportable import plugs them in from the copied datafiles.
SQL> show con_name
CON_NAME
------------------------------
UPGDB
#impdp sys/oracle@upgdb parfile=upg.par
#vi upg.par
NETWORK_LINK=src112
VERSION=12
FULL=Y
TRANSPORTABLE=ALWAYS
EXCLUDE=STATISTICS
#ENCRYPTION_PASSWORD=<enc_passwd>
METRICS=Y
LOGFILE=tgtpdb_dir:src112fullimp.log
#############################
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_ECM_DEPOT_TS.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_TABLESPACE.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_AD4J_TS.DBF'
#############################
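Once the network-mode import finishes, a quick check confirms the tablespaces were plugged into the PDB (run inside the UPGDB container):

-- verify the transported tablespaces are plugged in and online
select tablespace_name, status, plugged_in
from dba_tablespaces
where tablespace_name like 'MGMT%';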
Whether you use conventional dump files or network mode, full transportable export/import is a convenient way to upgrade a database to a new version, migrate to a different operating system or hardware platform, migrate into a PDB – or even to perform all three of these upgrades and migrations in a single operation!