MIGRATING ORACLE STREAMS TO ORACLE GOLDENGATE

Steps:

  • Install the Oracle GoldenGate software on the source and target systems
  • Create the Oracle GoldenGate Manager process on the source and target systems
  • Configure supplemental logging
  • Create and start the Oracle GoldenGate Extract process at the source system
  • Create the Oracle GoldenGate Replicat process at the target system
  • Keep the Oracle GoldenGate Replicat process stopped until the switchover

There are multiple ways to perform the transition from Oracle Streams to Oracle GoldenGate. This article looks at one method of performing the conversion, which consists of the following steps:

  • Check for open transactions and Oracle Streams errors
  • Stop the Oracle Streams Apply process at the target system
  • Find the last applied SCN from Oracle Streams replication at the target system
  • Start the Oracle GoldenGate Replicat process using the AFTERCSN option
  • Stop the Oracle Streams Capture and Propagation processes at the source

The final step in the migration process consists of deconfiguring your Oracle Streams environment, as follows:

  • Remove the Oracle Streams configuration at the source and target

Install Oracle GoldenGate Software

Before beginning the migration, you must first install and configure the Oracle GoldenGate software at the source and target locations.

Create Oracle GoldenGate Manager Process

The Oracle GoldenGate Manager process is the main process under which the Extract and Replicat processes exist. At a minimum, the Manager parameter file must specify the port used for communication. For maintenance purposes, it is useful to include additional parameters to address restart, report, or purge options:

-- Base configuration of the Manager process.
-- Use PORT to specify a TCP/IP port number for the Manager process on which to
-- interact with remote processes requesting dynamic services, typically either an
-- initial-load Replicat or the Collector process.
PORT 7809
-- Use LAGREPORTMINUTES or LAGREPORTHOURS to specify the interval at which
-- Manager checks for Extract and Replicat lag.
LAGREPORTMINUTES <n>
-- Use LAGCRITICALSECONDS, LAGCRITICALMINUTES, or LAGCRITICALHOURS to specify a lag
-- threshold that is considered critical and to force a warning message to the error log.
LAGCRITICALMINUTES 5

 

Configure Supplemental Logging and Language Settings

Both the Oracle Streams Capture and Oracle GoldenGate Extract processes require supplemental logging of redo information to ensure they have sufficient information to support chained rows and various storage arrangements. To switch over to GoldenGate, you will need to use the ADD SCHEMATRANDATA command to put the correct supplemental logging in place for GoldenGate. Because of differences in the way Streams and GoldenGate add supplemental logging, using the ADD SCHEMATRANDATA command is a requirement.

To ensure appropriate supplemental logging in your GoldenGate environment, issue the following command from GGSCI:

GGSCI> ADD SCHEMATRANDATA siebel

 

This adds the appropriate logging for all tables in the schema. If you are replicating only a small number of tables, you may prefer to add logging for each table individually using the ADD TRANDATA command:

GGSCI> ADD TRANDATA siebel.s_org_ext

 

Because GoldenGate supports more data types than Streams, you should check your current Streams environment for any unsupported data types and columns; Streams may automatically skip unsupported data types. If you have tables with data types that are unsupported in Streams but supported in GoldenGate, you will need to manually re-instantiate those tables on the target.
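One way to make this check, assuming your release has the standard DBA_STREAMS_UNSUPPORTED dictionary view (the schema name below is taken from this article's examples and is illustrative):

```sql
-- List tables in the replicated schema that Streams cannot capture,
-- together with the reason they are unsupported.
SELECT owner, table_name, reason
FROM   dba_streams_unsupported
WHERE  owner = 'SIEBEL'
ORDER  BY table_name;
```

Any table listed here that GoldenGate does support is a candidate for manual re-instantiation on the target.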

To ensure accurate character representation from one database to another, the following must be true:

The character set of the target database must be a superset of the character set of the source database. If the client applications use different character sets, the database character set must be a superset of the character sets of the client applications. In this configuration, every character is represented when converting from a client character set to the database character set.

Use the following query to determine the SETENV (NLS_LANG) value to place in the GoldenGate parameter files on both the source and target systems:

SELECT 'SETENV (NLS_LANG = "'||LANG.value||'_'||TERR.value||'.'||CHAR1.value||'")' "O.S. Setting"
FROM   NLS_DATABASE_PARAMETERS LANG,
       NLS_DATABASE_PARAMETERS TERR,
       NLS_DATABASE_PARAMETERS CHAR1
WHERE  LANG.parameter  = 'NLS_LANGUAGE'
  AND  TERR.parameter  = 'NLS_TERRITORY'
  AND  CHAR1.parameter = 'NLS_CHARACTERSET';

 

To avoid missing any data during the conversion, it is a best practice to change COMMIT_SERIALIZATION to FULL. This ensures that when Streams is stopped you will have a consistent point at which to start the GoldenGate Replicat process. When you perform the conversion, run the Streams configuration with the Apply process PARALLELISM set to 1 and/or COMMIT_SERIALIZATION set to FULL:

SQL> SELECT apply_name, parameter, value FROM dba_apply_parameters
  2  WHERE apply_name = 'STREAM_DEMO_APPLY'
  3  AND parameter IN ('PARALLELISM', 'COMMIT_SERIALIZATION');

APPLY_NAME           PARAMETER            VALUE
-------------------- -------------------- -------------------------
STREAM_DEMO_APPLY    COMMIT_SERIALIZATION DEPENDENT_TRANSACTIONS
STREAM_DEMO_APPLY    PARALLELISM          4

SQL> BEGIN
  2    DBMS_APPLY_ADM.SET_PARAMETER
  3      (apply_name => 'STREAM_DEMO_APPLY'
  4      ,parameter  => 'COMMIT_SERIALIZATION'
  5      ,value      => 'FULL'
  6      );
  7  END;
  8  /

PL/SQL procedure successfully completed.

SQL> SELECT apply_name, parameter, value FROM dba_apply_parameters
  2  WHERE apply_name = 'STREAM_DEMO_APPLY'
  3  AND parameter IN ('PARALLELISM', 'COMMIT_SERIALIZATION');

APPLY_NAME           PARAMETER            VALUE
-------------------- -------------------- -------------------------
STREAM_DEMO_APPLY    COMMIT_SERIALIZATION FULL

 

Create and Start the Oracle GoldenGate Extract Process at the Source System

The Oracle GoldenGate Extract process parameter file contains process-specific parameters (process name, database connectivity, discard file, trail files) and object-specific parameters.

EXTRACT JOHN_EXT
-- Use SETENV to set a value for any UNIX environment variable, for example
-- setting database environments such as the ORACLE_HOME and ORACLE_SID.
SETENV (NLS_LANG = "UTF8")
-- Use USERID to specify the database login for GoldenGate to use.
USERID ggs_admin, PASSWORD <ggs_pwd>
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_EXT.dsc, APPEND
-- Use EXTTRAIL to specify a trail on the local system. The implementation of this
-- parameter varies slightly, depending on the process.
EXTTRAIL ./dirdat/AA
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started.
REPORTCOUNT EVERY <n> HOURS, RATE
-- Use FETCHOPTIONS to control certain aspects of the way that GoldenGate fetches data.
FETCHOPTIONS, USESNAPSHOT, NOUSELATESTVERSION, MISSINGROW REPORT
-- Use the STATOPTIONS parameter to specify information to be included in statistical
-- displays generated by the STATS EXTRACT or STATS REPLICAT command.
STATOPTIONS REPORTFETCH
-- Use WARNLONGTRANS to specify how long a transaction can remain open before
-- Extract writes a warning to the report file; CHECKINTERVAL controls how often
-- Extract checks for long-running transactions.
WARNLONGTRANS 1H, CHECKINTERVAL 15M
-- Use TABLE to specify the source objects for which Extract captures changes.
TABLE SIEBEL.*;

 

Data Pump Process

The configuration of Oracle GoldenGate Data Pump corresponds to Oracle Streams propagation. To use PASSTHRU mode, the names of the source and target objects must be identical. In PASSTHRU mode, the Extract process does not look up table definitions, either from the database or from a data definitions file.

EXTRACT JOHN_PMP
-- The PASSTHRU parameter enables a data-pump Extract to bypass table
-- definition lookups.
PASSTHRU
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_PMP.dsc, APPEND
-- Use RMTHOST to identify a remote system and the TCP/IP port number on that
-- system where the Manager process is running.
RMTHOST dnvdev1, MGRPORT 7809
-- Use RMTTRAIL to specify a remote trail that was created with the ADD RMTTRAIL
-- command in GGSCI.
RMTTRAIL ./dirdat/AA
-- Use TABLE to specify the objects to pass through to the target.
TABLE siebel.*;

 

 

Adding and Starting Extract and Data Pump

You can now add and start the Extract and Data Pump processes, using the parameter files saved in the ./dirprm directory:

-- Add the primary Extract (capture)
GGSCI> ADD EXTRACT JOHN_EXT, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/AA, EXTRACT JOHN_EXT, MEGABYTES 500
-- Add the Extract data pump
GGSCI> ADD EXTRACT JOHN_PMP, BEGIN NOW, EXTTRAILSOURCE ./dirdat/AA
GGSCI> ADD RMTTRAIL ./dirdat/AA, EXTRACT JOHN_PMP, MEGABYTES 500
-- Start both processes and check their status
GGSCI> START EXTRACT JOHN_EXT
GGSCI> START EXTRACT JOHN_PMP
GGSCI> INFO EXTRACT JOHN_EXT
GGSCI> INFO EXTRACT JOHN_PMP

You can check that the Oracle GoldenGate Extract process is writing the committed transactions to the trail file and that this information is being propagated to the remote trail files by using the STATS command from the GGSCI interface. Alternatively, from within the LOGDUMP interface, you can use the COUNT command:

-- At the source system:
GGSCI> STATS EXTRACT JOHN_EXT
LOGDUMP> COUNT ./dirdat/AA000000
-- At the target system:
LOGDUMP> COUNT ./dirdat/AA000000

 

Create the Oracle GoldenGate Replicat Process at the Target System

The structure of the Oracle GoldenGate Replicat process is similar to the structure of the Extract process. In the parameter file for the Replicat process, the tables being replicated are specified with MAP ... TARGET entries. You should derive these entries from the DBA_STREAMS_RULES view for the streams_type APPLY.
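A query along the following lines can recover those rules from the data dictionary; the view and column names are standard, but the filter is illustrative and your rule set will differ:

```sql
-- Each SCHEMA_NAME/OBJECT_NAME pair returned here corresponds to a
-- MAP ... TARGET entry in the Replicat parameter file.
SELECT streams_name, rule_name, rule_type, schema_name, object_name
FROM   dba_streams_rules
WHERE  streams_type = 'APPLY';
```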

REPLICAT JOHN_REP
-- Use SETENV to set a value for any UNIX environment variable, for example
-- setting database environments such as the ORACLE_HOME and ORACLE_SID.
SETENV (NLS_LANG = "UTF8")
-- Use USERID to specify the database login for GoldenGate to use.
USERID ggs_admin, PASSWORD <ggs_pwd>
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_REP.dsc, APPEND
-- Use ASSUMETARGETDEFS when the source and target tables specified with a MAP
-- statement have the same column structure, such as when synchronizing a hot
-- site. Do not use it with the COLMAP option; use a source-definitions file instead.
ASSUMETARGETDEFS
-- Use MAP to establish a relationship between one or more source and target
-- tables.
MAP siebel.*, TARGET siebel.*;

 

Create the Oracle GoldenGate Replicat process, but do not start it yet:

GGSCI> ADD REPLICAT JOHN_REP, EXTTRAIL ./dirdat/AA
GGSCI> INFO REPLICAT JOHN_REP

 

Transition from Oracle Streams to Oracle GoldenGate 

If you have successfully installed and configured Oracle GoldenGate at both the source and target sites, you are ready to begin transitioning from Oracle Streams to Oracle GoldenGate.

Check for Open Transactions

Before switching over from Streams to GoldenGate, you need to make sure that all open transactions are captured by the GoldenGate Extract process. The best way to do this is to check the ggserr.log file for the timestamp of when the Extract process was added, and compare it against the start times of any open transactions:

SELECT vs.username, vs.program, vs.machine, vt.xidusn, vt.xidslot, vt.xidsqn,
       vt.start_time, TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')
FROM   gv$transaction vt, gv$session vs
WHERE  vt.addr = vs.taddr
ORDER  BY start_time;
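To pull the Extract registration timestamp out of ggserr.log, a simple grep is enough. The sketch below fabricates a two-line sample of the log purely for illustration; the real file lives in the GoldenGate installation directory, and its exact message text varies by version.

```shell
# Fabricated ggserr.log excerpt -- stands in for the real file in the
# GoldenGate installation directory.
cat > /tmp/ggserr_sample.log <<'EOF'
2013-10-01 09:15:02  INFO    OGG-00987  GGSCI command (oracle): ADD EXTRACT JOHN_EXT, TRANLOG, BEGIN NOW.
2013-10-01 09:16:40  INFO    OGG-00987  GGSCI command (oracle): START EXTRACT JOHN_EXT.
EOF

# The timestamp at the start of the matching line is when the Extract group
# was registered.
grep 'ADD EXTRACT JOHN_EXT' /tmp/ggserr_sample.log
```

Any open transaction whose START_TIME predates that timestamp is not covered by Extract and must be allowed to complete before the switchover.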

 

The following query, run against the Streams Apply process on the target, displays the source transaction IDs that are being applied:

SELECT streams_name,
       streams_type,
       TO_CHAR(first_message_time,'YYYY-MM-DD HH24:MI:SS') start_time,
       xidusn,
       xidslt,
       xidsqn,
       transaction_id
FROM   v$streams_transaction
WHERE  streams_type = 'APPLY';

 

Before you stop the Streams Apply process, you need to make sure all of the transactions are covered by either the Streams or the GoldenGate process. Check the V$TRANSACTION, V$STREAMS_APPLY_COORDINATOR, and V$STREAMS_TRANSACTION views for transaction start times.

Check on the apply side for the last transaction applied:

SQL> SELECT lwm_message_number,
  2         TO_CHAR(lwm_message_create_time,'YYYY-MM-DD HH24:MI:SS') message_time
  3  FROM   v$streams_apply_coordinator;

 

Stop the Oracle Streams Apply Process at the Target System

At the target side, you should cleanly stop the Streams Apply process using the STOP_APPLY procedure in the DBMS_APPLY_ADM package. By default, the process will stop once the current transaction has completed.

SQL> BEGIN
  2    DBMS_APPLY_ADM.STOP_APPLY('STREAM_DEMO_APPLY');
  3  END;
  4  /

SQL> SELECT apply_name, status, max_applied_message_number
  2  FROM   dba_apply
  3  WHERE  apply_name = 'STREAM_DEMO_APPLY';

 

You should also verify that there are no Streams Apply errors by checking the DBA_APPLY_ERROR view:

SQL> SELECT COUNT(*) FROM dba_apply_error WHERE apply_name = 'STREAM_DEMO_APPLY';

  COUNT(*)
----------
         0

 

Find the Last Applied System Change Number

Once the Streams Apply process is stopped and there are no errors, find the last applied change (SCN) from the replication system by querying the DBA_APPLY_PROGRESS view at the target system:

SQL> SELECT applied_message_number
  2  FROM   dba_apply_progress
  3  WHERE  apply_name = 'STREAM_DEMO_APPLY';

APPLIED_MESSAGE_NUMBER
----------------------
            2234450777

This value will be used as the AFTERCSN for the GoldenGate Replicat process.

 

Start the Oracle GoldenGate Replicat Process at the Target System

GGSCI> START REPLICAT JOHN_REP, AFTERCSN 2234450777
GGSCI> INFO REPLICAT JOHN_REP
GGSCI> STATS REPLICAT JOHN_REP

 

Stop the Oracle Streams Capture and Propagation Processes at the Source System

SQL> BEGIN
  2    DBMS_CAPTURE_ADM.STOP_CAPTURE('JOHN_CAP');
  3  END;
  4  /

SQL> SELECT capture_name, status FROM dba_capture
  2  WHERE  capture_name = 'JOHN_CAP';

SQL> BEGIN
  2    DBMS_PROPAGATION_ADM.STOP_PROPAGATION('JOHN_PROB');
  3  END;
  4  /

SQL> SELECT propagation_name, status FROM dba_propagation
  2  WHERE  propagation_name = 'JOHN_PROB';

 

Deconfiguring Oracle Streams

At this point, the Oracle GoldenGate processes are running and are responsible for replicating information from the source to the target system. The Streams processes are merely stopped, not removed. It is considered a best practice to remove only the processes that you are replacing with GoldenGate processes, as outlined below. At the very end, you might also remove the Streams administrator user.

On the Source System:

SQL> BEGIN
  2    dbms_capture_adm.drop_capture('JOHN_CAP', drop_unused_rule_sets => true);
  3  END;
  4  /

SQL> BEGIN
  2    dbms_propagation_adm.drop_propagation('JOHN_PROB', drop_unused_rule_sets => true);
  3  END;
  4  /

SQL> BEGIN
  2    dbms_streams_adm.remove_queue('STREAMS_QUEUE_CP', cascade => true, drop_unused_queue_table => true);
  3  END;
  4  /

SQL> DROP USER STRMADMIN CASCADE;

 

On the Target System: 

SQL> BEGIN
  2    dbms_apply_adm.delete_all_errors('STREAM_DEMO_APPLY');
  3  END;
  4  /

SQL> BEGIN
  2    dbms_apply_adm.drop_apply('STREAM_DEMO_APPLY', drop_unused_rule_sets => true);
  3  END;
  4  /

SQL> BEGIN
  2    dbms_streams_adm.remove_queue('STREAMS_QUEUE_AP', cascade => true, drop_unused_queue_table => true);
  3  END;
  4  /

SQL> DROP USER johnstream CASCADE;


Possible Features of the Next-Generation Exadata X4

We got some insider information from Oracle product managers on possible features for the next generation of Exadata X4 systems, which will hopefully be released later in 2013 or in 2014. Please note that this information may change by the time Oracle actually releases the systems.

  • Oracle X4-2 and X4-8, if Oracle keeps the same naming
  • Will now support Oracle Virtual Machine (OVM)
  • X4-8 (4 CPUs only, due to NUMA constraints) and X4-2 (2 CPUs)
  • 10 to 12 cores per CPU (still not confirmed)
  • Up to 1 TB of RAM
  • The Oracle In-Memory Database option for 12c will run on Exadata X4

 


Oracle's Flagship Product: The SuperCluster M6-32

 

Another update from Oracle OpenWorld 2013: Oracle has announced its flagship product, the SuperCluster M6-32. This latest version is Oracle's fastest in-memory system yet on the SPARC chipset. The hardware specs are very impressive, and you can combine both databases and applications on one box, which makes it perfect for consolidation.


 
  • Oracle's fastest and most scalable engineered system
  • SuperCluster on SPARC: the fastest chipset on the market, beating the IBM P Series
  • Ideal for running mission-critical databases and applications in memory and for consolidating the largest workloads
  • Up to 32 TB of memory
  • 32 processors, 12 cores per CPU, 384 cores total
  • Can scale out by adding more storage and servers
  • OLTP and data warehousing
  • Complex applications
  • Supports in-memory databases
  • Applications and databases can run simultaneously on the same system
  • Will support Solaris Containers (Zones)

 


				

Book Title:
Successfully Virtualize Business Critical Oracle Databases


Here's the book description:
Written by VMware vExperts Charles Kim (VCP) of Viscosity North America and George Trujillo (VCI) of Hortonworks, VMware VCI and Learning Specialist Steve Jones, and Darryl Smith, Chief Database Architect in EMC's Global IT organization, this book provides critical instructions for deploying Oracle standalone and Real Application Clusters (RAC) databases on VMware enterprise virtualized platforms. You will learn how to set up an Oracle ecosystem within a virtualized infrastructure for enterprise deployments. We share industry best practices for installing and configuring Linux and for deploying Oracle Grid Infrastructure and databases in a matter of hours. Whether you are deploying a 4-node RAC cluster or a single standalone database, we will lead you to a foundation that allows you to rapidly provision database-as-a-service. We disseminate key details on creating golden image templates, from the virtual machine to the Oracle binaries and databases. You will learn from industry experts how to troubleshoot production Oracle database servers running in VMware virtual infrastructures.

Audience:
Database Admins/Architects, VMware Admins, System Admins/Architects, Project Architects
This book is designed for Oracle DBAs and VMware administrators who need to learn the art of virtualizing Oracle.


Some of you may be old enough to recall the song "Secret Agent Man" by Johnny Rivers:
There’s a man who leads a life of danger.
To everyone he meets he stays a stranger.
With every move he makes another chance he takes.
Odds are he won’t live to see tomorrow.

Well, that's how I felt when I was at a customer site recently (well, maybe not exactly).

They recently had an issue with a node eviction. That in itself deserves a blog post later.
But anyway, the DBA was asking, "What are all these Clusterware processes, and how do you even traverse through all the log files?"
After 15 minutes of discussion, I realized I had thoroughly confused him.
So I suggested we start from the beginning: first try to understand the Oracle Clusterware processes, agents, and relationships, then draw up some pictures. Maybe then we'd have a better feel for the hierarchy.

Let's start with the grand master himself, HAS (or OHASD).

OHASD manages the clusterware daemons, including CRSD. We'll discuss CRSD resources and startup in another blog post. For now, just keep in mind that OHASD starts CRSD (at some point later in the stack); once CRSD is started, it manages the remaining startup of the stack.

The "-init" flag is needed for crsctl to operate on OHASD resources, e.g., crsctl stat res ora.crsd -init.
To list resources started by CRSD, you would issue just "crsctl stat res".

OHASD resource startup order
ora.gipcd
ora.gpnpd -> Starts ora.mdnsd because of dependency
ora.cssd -> Starts ora.diskmon and ora.cssdmonitor because of dependency
ora.ctssd
ora.evmd
ora.crsd
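You can see the dependencies that drive this ordering by dumping a resource's profile; the START_DEPENDENCIES and STOP_DEPENDENCIES attributes list them (the host prompt is illustrative, matching the examples below):

```text
[root@rhel59a bin]# crsctl stat res ora.cssd -init -p | grep DEPENDENCIES
```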

OHASD has agents that work for him. These agents are oraagent, orarootagent, cssdagent, and cssdmonitoragent. Each agent manages and handles very specific OHASD resources, and each agent runs as a specific user (root or the clusterware user).
For example, the ora.cssd resource is started and monitored by the cssdagent (running as root), whereas ora.asm is handled by the oraagent (running as the clusterware user).

All agent log files, as well as other OHASD resource log files, are in CRS $ORACLE_HOME/log/<hostname>/agent/{ohasd|crsd}/<agentname>_<owner>/<agentname>_<owner>.log or in CRS $ORACLE_HOME/log/<hostname>/<resource_name>/<resource_name>.log, respectively.

To find out which agent is associated with a resource issue the following:

[root@rhel59a log]# crsctl stat res ora.cssd -init -p | grep "AGENT_FILENAME"
AGENT_FILENAME=%CRS_HOME%/bin/cssdagent%CRS_EXE_SUFFIX%

For example, for CRSD we find:

[root@rhel59a bin]# crsctl stat res ora.crsd -init -p | grep "AGENT_FILENAME"
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%

Note that an agent log file can contain log messages for more than one resource, since those resources are managed by the same agent.

When I debug a resource, I start by going down the following Clusterware log file tree:
1. Start with the Clusterware alert.log.

2. Depending on the resource (managed by OHASD or CRSD), I look at $ORACLE_HOME/log/<hostname>/ohasd/ohasd.log or $ORACLE_HOME/log/<hostname>/crsd/crsd.log.

3. Then the agent log file, as mentioned above.

4. Then finally the resource's log file itself (it will be listed in the agent log).

Item #2 requires a little more discussion and will be the topic of our next post.
Item #2 requires a little more discussion, and will be the topic of our next discussion