MIGRATING ORACLE STREAMS TO ORACLE GOLDENGATE

Steps:

  • Install the Oracle GoldenGate software on the source and target systems
  • Create the Oracle GoldenGate Manager process on the source and target systems
  • Configure supplemental logging
  • Create and start the Oracle GoldenGate Extract process at the source system
  • Create the Oracle GoldenGate Replicat process at the target system
  • Keep the Oracle GoldenGate Replicat process stopped until the switchover

There are multiple ways to perform the transition from Oracle Streams to Oracle GoldenGate. This article looks at one method of performing the conversion, which consists of the following steps:

  • Check for open transactions and Oracle Streams errors
  • Stop the Oracle Streams Apply process at the target system
  • Find the last applied SCN from Oracle Streams replication at the target system
  • Start the Oracle GoldenGate Replicat process using the AFTERCSN option
  • Stop the Oracle Streams Capture and Propagation processes at the source system

The final step in the migration process consists of deconfiguring your Oracle Streams environment, as follows:

  • Remove the Oracle Streams configuration at the source and target

Install Oracle GoldenGate Software

Before beginning the migration, you must first install and configure the Oracle GoldenGate software at the source and target locations.
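The installation itself is outside the scope of this article, but as a rough sketch, after unpacking the software into its installation directory you create the working sub-directories (dirprm, dirdat, dirrpt, and so on) from GGSCI and verify that nothing is running yet:

```
GGSCI> CREATE SUBDIRS
GGSCI> INFO ALL
```

Repeat this on both the source and target systems before continuing.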

Create Oracle GoldenGate Manager Process

The Oracle GoldenGate Manager process is the main process under which the Extract and Replicat processes exist. At a minimum, the Manager parameter file must specify a port to be used for communication. For maintenance purposes, it is useful to include additional parameters to address restart, report, or purge options.

-- Base configuration of the Manager process
-- Use PORT to specify a TCP/IP port number on which the Manager process
-- interacts with remote processes requesting dynamic services, typically either an
-- initial-load Replicat or the Collector process.
PORT 7809
-- Use LAGREPORTMINUTES or LAGREPORTHOURS to specify the interval at which
-- Manager checks for Extract and Replicat lag.
LAGREPORTMINUTES 5
-- Use LAGCRITICALSECONDS, LAGCRITICALMINUTES, or LAGCRITICALHOURS to specify a lag
-- threshold that is considered critical and to force a warning message to the error log.
LAGCRITICALMINUTES 5

 

Configure Supplemental Logging and Language Settings

Both the Oracle Streams Capture and Oracle GoldenGate Extract processes require supplemental logging of redo information to ensure they have sufficient information to support chained rows and various storage arrangements. To switch over to GoldenGate, you must use the ADD SCHEMATRANDATA command so that GoldenGate has the correct supplemental logging in place. Because of differences in the way Streams and GoldenGate add supplemental logging, using ADD SCHEMATRANDATA is a requirement.

To ensure appropriate supplemental logging in your GoldenGate environment, issue the following command from GGSCI:

GGSCI> ADD SCHEMATRANDATA siebel

 

This adds the appropriate logging for all tables in the schema. If you are replicating only a small number of tables, you may prefer to add logging for each table individually using the command shown below:

GGSCI> ADD TRANDATA siebel.s_org_ext

 

Because GoldenGate supports more data types than Streams, you should check your current Streams environment for any unsupported data types and columns; Streams may silently skip unsupported data types. If you have tables with data types that are unsupported in Streams but supported in GoldenGate, you will need to manually re-instantiate those tables on the target.
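One way to find such tables, assuming the SIEBEL schema used throughout this article, is to query the DBA_STREAMS_UNSUPPORTED view, which lists the tables Streams cannot capture and the reason why:

```sql
-- Tables in the replicated schema that Streams cannot capture, with the reason.
-- Tables listed here that GoldenGate does support must be re-instantiated
-- manually on the target before the switchover.
SELECT owner, table_name, reason
FROM   dba_streams_unsupported
WHERE  owner = 'SIEBEL'
ORDER  BY table_name;
```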

To ensure accurate character representation from one database to another, the following must be true:

The character set of the target database must be a superset of the character set of the source database. If the client applications use different character sets, the database character set must be a superset of the character sets of the client applications. In this configuration, every character is represented when converting from a client character set to the database character set.

Use the following query to generate the SETENV (NLS_LANG=...) value to place in the GoldenGate parameter files on both the source and target systems:

SELECT 'SETENV (NLS_LANG = "'||LANG.value||'_'||TERR.value||'.'||CHAR1.value||'")' "O.S. Setting"
FROM   nls_database_parameters LANG,
       nls_database_parameters TERR,
       nls_database_parameters CHAR1
WHERE  LANG.parameter  = 'NLS_LANGUAGE'
AND    TERR.parameter  = 'NLS_TERRITORY'
AND    CHAR1.parameter = 'NLS_CHARACTERSET';

 

To avoid missing any data during the conversion, it is a best practice to change COMMIT_SERIALIZATION to FULL. This ensures that when Streams is stopped you have a consistent point at which to start the GoldenGate Replicat process. When you perform the conversion, run the Streams configuration with the Apply process PARALLELISM set to 1 and/or COMMIT_SERIALIZATION set to FULL.

SQL> SELECT apply_name, parameter, value FROM dba_apply_parameters
  2  WHERE apply_name = 'STREAM_DEMO_APPLY'
  3  AND parameter IN ('PARALLELISM', 'COMMIT_SERIALIZATION');

APPLY_NAME           PARAMETER            VALUE
-------------------- -------------------- -------------------------
STREAM_DEMO_APPLY    COMMIT_SERIALIZATION DEPENDENT_TRANSACTIONS
STREAM_DEMO_APPLY    PARALLELISM          4

SQL> BEGIN
  2    DBMS_APPLY_ADM.SET_PARAMETER
  3      (apply_name => 'STREAM_DEMO_APPLY'
  4      ,parameter  => 'COMMIT_SERIALIZATION'
  5      ,value      => 'FULL'
  6      );
  7  END;
  8  /

PL/SQL procedure successfully completed.

SQL> SELECT apply_name, parameter, value FROM dba_apply_parameters
  2  WHERE apply_name = 'STREAM_DEMO_APPLY'
  3  AND parameter IN ('PARALLELISM', 'COMMIT_SERIALIZATION');

APPLY_NAME           PARAMETER            VALUE
-------------------- -------------------- -------------------------
STREAM_DEMO_APPLY    COMMIT_SERIALIZATION FULL

 

Create and Start the Oracle GoldenGate Extract Process at the Source System

The Oracle GoldenGate Extract process parameter file holds process-specific parameters (process name, database connectivity, discard file, trail files) and object-specific parameters.

EXTRACT JOHN_EXT
-- Use SETENV to set a value for any UNIX environment variable, for example
-- setting database environments such as ORACLE_HOME and ORACLE_SID.
SETENV (NLS_LANG = "UTF8")
-- Use USERID to specify the database authentication for GoldenGate to use.
USERID ggs_admin, PASSWORD <ggs_pwd>
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_EXT.dsc, APPEND
-- Use EXTTRAIL to specify a trail on the local system. The implementation of this
-- parameter varies slightly, depending on the process.
EXTTRAIL ./dirdat/AA
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started.
REPORTCOUNT EVERY <n> HOURS, RATE
-- Use FETCHOPTIONS to control certain aspects of the way that GoldenGate fetches data.
FETCHOPTIONS, USESNAPSHOT, NOUSELATESTVERSION, MISSINGROW REPORT
-- Use the STATOPTIONS parameter to specify information to be included in statistical
-- displays generated by the STATS EXTRACT or STATS REPLICAT command.
STATOPTIONS REPORTFETCH
-- Use WARNLONGTRANS to specify how long a transaction can remain open before
-- Extract writes a warning to the error log; CHECKINTERVAL sets how often
-- Extract checks for long-running transactions.
WARNLONGTRANS 1H, CHECKINTERVAL 15M
-- Use TABLE to specify the tables to capture. The implementation of this
-- parameter varies depending on the process.
TABLE SIEBEL.*;

 

Data Pump Process

The configuration of Oracle GoldenGate Data Pump corresponds to Oracle Streams propagation. To use PASSTHRU mode, the names of the source and target objects must be identical. In PASSTHRU mode, the Extract process does not look up table definitions, either from the database or from a data definitions file.

EXTRACT JOHN_PMP
-- The PASSTHRU parameter enables a data-pump Extract to bypass table
-- definitions lookups.
PASSTHRU
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_PMP.dsc, APPEND
-- Use RMTHOST to identify a remote system and the TCP/IP port number on that
-- system where the Manager process is running.
RMTHOST dnvdev1, MGRPORT 7809
-- Use RMTTRAIL to specify a remote trail that was created with the ADD RMTTRAIL
-- command in GGSCI.
RMTTRAIL ./dirdat/AA
-- Use TABLE to specify the tables to pass through. The implementation of this
-- parameter varies depending on the process.
TABLE siebel.*;

 

 

Adding and Starting Extract and Data Pump

You can now add and start the Extract and Data Pump processes using the parameter files saved in the ./dirprm directory:

-- Add the Extract (capture)
GGSCI> ADD EXTRACT JOHN_EXT, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/AA, EXTRACT JOHN_EXT, MEGABYTES 500
-- Add the Extract (pump)
GGSCI> ADD EXTRACT JOHN_PMP, BEGIN NOW, EXTTRAILSOURCE ./dirdat/AA
GGSCI> ADD RMTTRAIL ./dirdat/AA, EXTRACT JOHN_PMP, MEGABYTES 500
-- Start the processes and check their status
GGSCI> START EXTRACT JOHN_EXT
GGSCI> START EXTRACT JOHN_PMP
GGSCI> INFO EXTRACT JOHN_EXT
GGSCI> INFO EXTRACT JOHN_PMP

You can check that the Oracle GoldenGate Extract process is writing the committed transactions to the trail file and that this information is being propagated to the remote trail files by using the STATS command from the GGSCI interface. Alternatively, from within the LOGDUMP interface, you can use the COUNT command:

-- At the source system:
GGSCI> STATS EXTRACT JOHN_EXT
LOGDUMP> COUNT ./dirdat/AA000000
-- At the target system:
LOGDUMP> COUNT ./dirdat/AA000000

 

Create the Oracle GoldenGate Replicat Process at the Target System

The structure of the Oracle GoldenGate Replicat process parameter file is similar to that of the Extract process. In the Replicat configuration file, the tables being replicated are specified by MAP ... TARGET statements. You can derive these entries from the DBA_STREAMS_RULES view for streams_type APPLY.
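As a starting point, a query along these lines (a sketch; the available columns may vary slightly by release) pulls the Apply rules from DBA_STREAMS_RULES so you can translate them into MAP statements:

```sql
-- List the schema- and table-level rules used by the Streams Apply process;
-- each SCHEMA_NAME/OBJECT_NAME pair corresponds to a MAP ... TARGET entry.
SELECT streams_name, streams_rule_type, schema_name, object_name
FROM   dba_streams_rules
WHERE  streams_type = 'APPLY'
ORDER  BY schema_name, object_name;
```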

REPLICAT JOHN_REP
-- Use SETENV to set a value for any UNIX environment variable, for example
-- setting database environments such as ORACLE_HOME and ORACLE_SID.
SETENV (NLS_LANG = "UTF8")
-- Use USERID to specify the database authentication for GoldenGate to use.
USERID ggs_admin, PASSWORD <ggs_pwd>
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_REP.dsc, APPEND
-- Use ASSUMETARGETDEFS when the source and target tables specified with a MAP
-- statement have the same column structure, such as when synchronizing a hot
-- site. Do not use ASSUMETARGETDEFS together with COLMAP; use a SOURCEDEFS
-- file in that case.
ASSUMETARGETDEFS
-- Use MAP to establish a relationship between one or more source and target
-- tables.
MAP siebel.*, TARGET siebel.*;

 

Create the Oracle GoldenGate Replicat process, but do not start it yet:

GGSCI> ADD REPLICAT JOHN_REP, EXTTRAIL ./dirdat/AA
GGSCI> INFO REPLICAT JOHN_REP

 

Transition from Oracle Streams to Oracle GoldenGate 

If you have successfully installed and configured Oracle GoldenGate at both the source and target sites, you are ready to begin the transition from Oracle Streams to Oracle GoldenGate.

Check for Open Transactions

Before switching over from Streams to GoldenGate, you need to make sure that all open transactions are captured by the GoldenGate Extract process. The best way to do this is to check the ggserr.log file for the timestamp at which the Extract process was added, then compare it against the start times of any open transactions:

SELECT vs.username, vs.program, vs.machine, vt.xidusn, vt.xidslot, vt.xidsqn,
       vt.start_time, TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')
FROM   gv$transaction vt, gv$session vs
WHERE  vt.addr = vs.taddr
ORDER  BY start_time;
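Since GGSCI commands are echoed to ggserr.log in the GoldenGate installation directory, a quick sketch for finding the Extract's add timestamp (run from the GoldenGate home; the exact log wording varies by version):

```
# Find when the JOHN_EXT Extract group was added; compare this timestamp
# against the START_TIME values of the open transactions from the query above.
grep -i "add extract john_ext" ./ggserr.log
```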

 

The following query against the Streams Apply process on the target displays the source transaction IDs that are being applied.

SELECT streams_name,
       streams_type,
       TO_CHAR(first_message_time,'YYYY-MM-DD HH24:MI:SS') start_time,
       xidusn,
       xidslt,
       xidsqn,
       transaction_id
FROM   v$streams_transaction
WHERE  streams_type = 'APPLY';

 

Before you stop the Streams Apply process, you need to make sure that all transactions are covered by either the Streams or the GoldenGate process. Check the V$TRANSACTION, V$STREAMS_APPLY_COORDINATOR, and V$STREAMS_TRANSACTION views for transaction start times.

Check the apply side for the last transaction applied:

SQL> SELECT lwm_message_number,
            TO_CHAR(lwm_message_create_time,'YYYY-MM-DD HH24:MI:SS') message_time
     FROM v$streams_apply_coordinator;

 

Stop the Oracle Streams Apply Process at the Target System

On the target side, cleanly stop the Streams Apply process using the STOP_APPLY procedure in the DBMS_APPLY_ADM package. By default, the process stops once the current transaction has completed.

SQL> BEGIN
       DBMS_APPLY_ADM.STOP_APPLY('STREAM_DEMO_APPLY');
     END;
     /

SQL> SELECT apply_name, status, max_applied_message_number
     FROM dba_apply WHERE apply_name = 'STREAM_DEMO_APPLY';

 

You should also verify that there are no issues with the Streams Apply process by checking for apply errors in the DBA_APPLY_ERROR view:

SQL> SELECT COUNT(*) FROM dba_apply_error WHERE apply_name = 'STREAM_DEMO_APPLY';

  COUNT(*)
----------
         0

 

Find the Last Applied System Change Number

Once the Streams Apply process is stopped and there are no errors, find the last applied change (SCN) of the replication by querying the DBA_APPLY_PROGRESS view at the target system:

SQL> SELECT applied_message_number FROM dba_apply_progress
     WHERE apply_name = 'STREAM_DEMO_APPLY';

APPLIED_MESSAGE_NUMBER
----------------------
            2234450777

(This value becomes the AFTERCSN for the GoldenGate Replicat process.)

 

Start the Oracle GoldenGate Replicat Process at the Target System

GGSCI> START REPLICAT JOHN_REP AFTERCSN 2234450777
GGSCI> INFO REPLICAT JOHN_REP
GGSCI> STATS REPLICAT JOHN_REP

 

Stop the Oracle Streams Capture Process at the Source System

SQL> BEGIN
       DBMS_CAPTURE_ADM.STOP_CAPTURE('JOHN_CAP');
     END;
     /
SQL> SELECT capture_name, status FROM dba_capture
     WHERE capture_name = 'JOHN_CAP';
SQL> BEGIN
       DBMS_PROPAGATION_ADM.STOP_PROPAGATION('JOHN_PROB');
     END;
     /
SQL> SELECT propagation_name, status FROM dba_propagation
     WHERE propagation_name = 'JOHN_PROB';

 

Deconfiguring Oracle Streams

At this point, the Oracle GoldenGate processes are running and are responsible for replicating information from the source to the target system. The Streams processes are merely stopped, not removed. It is a best practice to remove only the processes that you are replacing with GoldenGate processes, as outlined below. At the very end, you can also drop the Streams administrator user.

On the Source System:

SQL> BEGIN
       dbms_capture_adm.drop_capture('JOHN_CAP', drop_unused_rule_sets => TRUE);
     END;
     /

SQL> BEGIN
       dbms_propagation_adm.drop_propagation('JOHN_PROB', drop_unused_rule_sets => TRUE);
     END;
     /

SQL> BEGIN
       dbms_streams_adm.remove_queue('STREAMS_QUEUE_CP', cascade => TRUE, drop_unused_queue_table => TRUE);
     END;
     /

SQL> DROP USER strmadmin CASCADE;

 

On the Target System: 

SQL> BEGIN
       dbms_apply_adm.delete_all_errors('STREAM_DEMO_APPLY');
     END;
     /

SQL> BEGIN
       dbms_apply_adm.drop_apply('STREAM_DEMO_APPLY', drop_unused_rule_sets => TRUE);
     END;
     /

SQL> BEGIN
       dbms_streams_adm.remove_queue('STREAMS_QUEUE_AP', cascade => TRUE, drop_unused_queue_table => TRUE);
     END;
     /

SQL> DROP USER johnstream CASCADE;


We got some insider information from Oracle product managers on possible features for the next generation of Exadata X4 systems, expected later in 2013 or in 2014. Please note this information may change when Oracle actually releases the product.

  • Oracle X4-2 and X4-8, if Oracle keeps the same naming
  • Will now support Oracle Virtual Machine (OVM)
  • X4-8 (4 CPUs only, due to NUMA constraints) and X4-2 (2 CPUs)
  • 10 to 12 cores per CPU, still not confirmed
  • Up to 1TB of RAM
  • Oracle In-Memory DB option for 12c will run on Exadata X4

 


Oracle's Flagship Product: The SuperCluster M6-32

 

In another update from Oracle OpenWorld 2013, Oracle has announced its flagship product, the SuperCluster M6-32. This latest version is Oracle's fastest in-memory system yet on the SPARC chipset. The hardware specs are very impressive, and you can combine both databases and applications together on one box, which is perfect for consolidation.


 
  • Oracle's fastest and most scalable engineered system
  • SuperCluster on SPARC is the fastest chipset on the market, beating the IBM P Series
  • Ideal for running mission-critical databases and applications in memory and for consolidating the largest workloads
  • Up to 32 TB of memory
  • 32 processors, 12 cores/CPU, 384 cores total
  • Can scale out by adding more storage and servers
  • OLTP and data warehousing
  • Complex applications
  • Supports in-memory databases
  • Applications and databases can run simultaneously together
  • Will support Solaris Containers (Zones)

 



Finally, an in-memory option in the Oracle RDBMS software. At this year's Oracle OpenWorld 2013, Larry Ellison talked about a new upcoming feature for Oracle Database 12c, the In-Memory Database option, which will allow simultaneous row-level storage (just like we've always stored data) and column-level storage (essentially an in-memory object structure), making non-selective, non-PK indexes irrelevant.

This new feature will be quite simple to implement; all we'll need to do is set a new initialization parameter (inmemory_size) to an appropriate size based on the available memory on the server. During Larry's demo, query performance improvement peaked at 1,390x on several billion rows queried! This looks like it may give Exadata some competition, though that remains to be seen.
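Based on what was announced, usage is expected to look something like the sketch below. The final syntax could change before release, and the table name here is just an example:

```sql
-- Reserve an in-memory column store area (takes effect after a restart),
-- then mark a table so its data is also populated into the column store.
ALTER SYSTEM SET inmemory_size = 16G SCOPE=SPFILE;
ALTER TABLE sales INMEMORY;
```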

  • Feature can be used on traditional database servers
  • 100x faster queries: real-time analytics
  • Near-instantaneous query results
  • Works when querying an OLTP database or a data warehouse
  • 2x faster updates and deletes
  • Insert rows 3x to 4x faster
  • Join tables up to 10x faster
  • Data stored in BOTH row and column formats
  • Less tuning and fewer indexes required
  • No SQL or application changes