MIGRATING ORACLE STREAMS TO ORACLE GOLDENGATE

Steps:

  • Install the Oracle GoldenGate software on the source and target systems
  • Create the Oracle GoldenGate Manager process on the source and target systems
  • Configure supplemental logging
  • Create and start the Oracle GoldenGate Extract at the source system
  • Create the Oracle GoldenGate Replicat process at the target system
  • Keep the Oracle GoldenGate Replicat process stopped until the switchover

There are multiple ways to perform the transition from Oracle Streams to Oracle GoldenGate. This article looks at one method of performing the conversion, which consists of the following steps:

  • Check for open transactions and Oracle Streams errors
  • Stop the Oracle Streams Apply process at the target system
  • Find the last applied SCN from Oracle Streams replication at the target system
  • Start the Oracle GoldenGate Replicat process using the AFTERCSN option
  • Stop the Oracle Streams Capture and Propagation processes at the source system

The final step in the migration process consists of deconfiguring your Oracle Streams environment, as follows:

  • Remove the Oracle Streams configuration at the source and target

Install Oracle GoldenGate Software

Before beginning the migration, you must first install and configure the Oracle GoldenGate software at the source and target locations.

Create Oracle GoldenGate Manager Process

The Oracle GoldenGate Manager process is the parent process under which the Extract and Replicat processes run. The Manager parameter file must specify at least the port used for communication. For maintenance purposes, it is useful to include additional parameters to address restart, report, or purge options:

-- Base configuration of the Manager process.
-- Use PORT to specify a TCP/IP port number on which the Manager process
-- interacts with remote processes requesting dynamic services, typically either an
-- initial-load Replicat or the Collector process.
PORT 7809
-- Use LAGREPORTMINUTES or LAGREPORTHOURS to specify the interval at which
-- Manager checks for Extract and Replicat lag (the interval here is an example).
LAGREPORTMINUTES 5
-- Use LAGCRITICALSECONDS, LAGCRITICALMINUTES, or LAGCRITICALHOURS to specify a lag
-- threshold that is considered critical and to force a warning message to the error log.
LAGCRITICALMINUTES 5

 

Configure Supplemental Logging and Language Settings

Both the Oracle Streams Capture and Oracle GoldenGate Extract processes require supplemental logging of redo information to ensure they have sufficient information to support chained rows and various storage arrangements. To switch over to GoldenGate, use the ADD SCHEMATRANDATA command to make sure that GoldenGate has the correct supplemental logging in place. Because of differences in the way Streams and GoldenGate add supplemental logging, using the ADD SCHEMATRANDATA command is a requirement.

To ensure appropriate supplemental logging in your GoldenGate environment, issue the following command from GGSCI:

GGSCI> ADD SCHEMATRANDATA siebel

 

This will add the appropriate logging for all tables in your schema. If you are replicating only a small number of tables, you may prefer to add logging for each table individually using the command shown below:

GGSCI> ADD TRANDATA siebel.s_org_ext
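
Depending on your GoldenGate version, you can verify the supplemental logging you just added from GGSCI; a quick sketch (a DBLOGIN is required first, and the credentials are placeholders):

GGSCI> DBLOGIN USERID ggs_admin, PASSWORD <ggs_pwd>
GGSCI> INFO SCHEMATRANDATA siebel
GGSCI> INFO TRANDATA siebel.s_org_ext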

 

Because GoldenGate supports more data types than Streams, you should check your current Streams environment for unsupported data types and columns; Streams may silently skip unsupported data types. If you have tables with data types that are unsupported in Streams but supported in GoldenGate, you will need to manually re-instantiate those tables on the target.
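
One way to find objects that Streams cannot capture is the DBA_STREAMS_UNSUPPORTED view; a sketch, assuming the SIEBEL schema used in the examples in this article:

SELECT owner, table_name, reason
FROM dba_streams_unsupported
WHERE owner = 'SIEBEL'
ORDER BY table_name;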

To ensure accurate character representation from one database to another, the following must be true:

The character set of the target database must be a superset of the character set of the source database. If the client applications use different character sets, the database character set must be a superset of the character sets of the client applications. In this configuration, every character is represented when converting from a client character set to the database character set.

Use the following query to generate the NLS_LANG value to set in the GoldenGate parameter files on both the source and target systems:

SELECT 'SETENV (NLS_LANG = "'||LANG.value||'_'||TERR.value||'.'||CHAR1.value||'")' "O.S. Setting"
FROM NLS_DATABASE_PARAMETERS LANG,
     NLS_DATABASE_PARAMETERS TERR,
     NLS_DATABASE_PARAMETERS CHAR1
WHERE LANG.parameter = 'NLS_LANGUAGE'
  AND TERR.parameter = 'NLS_TERRITORY'
  AND CHAR1.parameter = 'NLS_CHARACTERSET';
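
For example, on a database running the AL32UTF8 character set with an American/America locale, the query would generate a line like the following (the exact value depends on your database settings), which you can paste into the Extract and Replicat parameter files:

SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")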

 

To avoid missing any data during the conversion, it is a best practice to change COMMIT_SERIALIZATION to FULL. This ensures that when Streams is stopped you have a consistent point at which to start the GoldenGate Replicat process. When you perform the conversion, run the Streams configuration with the Apply process PARALLELISM set to 1 and/or COMMIT_SERIALIZATION set to FULL:

SQL> SELECT apply_name, parameter, value FROM dba_apply_parameters
  2  WHERE apply_name = 'STREAM_DEMO_APPLY'
  3  AND parameter IN ('PARALLELISM', 'COMMIT_SERIALIZATION');

APPLY_NAME           PARAMETER            VALUE
-------------------- -------------------- -------------------------
STREAM_DEMO_APPLY    COMMIT_SERIALIZATION DEPENDENT_TRANSACTIONS
STREAM_DEMO_APPLY    PARALLELISM          4

SQL> BEGIN
  2    DBMS_APPLY_ADM.SET_PARAMETER
  3      (apply_name => 'STREAM_DEMO_APPLY'
  4      ,parameter  => 'COMMIT_SERIALIZATION'
  5      ,value      => 'FULL'
  6      );
  7  END;
  8  /

PL/SQL procedure successfully completed.

SQL> SELECT apply_name, parameter, value FROM dba_apply_parameters
  2  WHERE apply_name = 'STREAM_DEMO_APPLY'
  3  AND parameter IN ('PARALLELISM', 'COMMIT_SERIALIZATION');

APPLY_NAME           PARAMETER            VALUE
-------------------- -------------------- -------------------------
STREAM_DEMO_APPLY    COMMIT_SERIALIZATION FULL
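
If you also choose to run the Apply process with PARALLELISM set to 1, the same procedure applies; a sketch using the apply name from above:

SQL> BEGIN
  2    DBMS_APPLY_ADM.SET_PARAMETER
  3      (apply_name => 'STREAM_DEMO_APPLY'
  4      ,parameter  => 'PARALLELISM'
  5      ,value      => '1'
  6      );
  7  END;
  8  /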

 

Create and Start the Oracle GoldenGate Extract Process at the Source System

The Oracle GoldenGate Extract parameter file holds process-specific parameters (process name, database connectivity, discard file, trail files) and object-specific parameters:

EXTRACT JOHN_EXT
-- Use SETENV to set a value for any UNIX environment variable, for example
-- setting database environments such as ORACLE_HOME and ORACLE_SID.
SETENV (NLS_LANG = "UTF8")
-- Use USERID to specify the type of database authentication for GoldenGate to use.
USERID ggs_admin, PASSWORD <ggs_pwd>
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_EXT.dsc, APPEND
-- Use EXTTRAIL to specify a trail on the local system. The implementation of this
-- parameter varies slightly, depending on the process.
EXTTRAIL ./dirdat/AA
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started.
REPORTCOUNT EVERY <n> HOURS, RATE
-- Use FETCHOPTIONS to control certain aspects of the way that GoldenGate fetches data.
FETCHOPTIONS USESNAPSHOT, NOUSELATESTVERSION, MISSINGROW REPORT
-- Use the STATOPTIONS parameter to specify information to be included in statistical
-- displays generated by the STATS EXTRACT or STATS REPLICAT command.
STATOPTIONS REPORTFETCH
-- Use WARNLONGTRANS to specify how long a transaction can be open before Extract
-- writes a warning message about it; CHECKINTERVAL controls how often Extract
-- checks for long-running transactions.
WARNLONGTRANS 1H, CHECKINTERVAL 15M
-- Use TABLE to specify the objects to capture; the implementation of this
-- parameter varies depending on the process.
TABLE SIEBEL.*;

 

Data Pump Process

The configuration of Oracle GoldenGate Data Pump corresponds to Oracle Streams propagation. To use PASSTHRU mode, the names of the source and target objects must be identical. In PASSTHRU mode, the Extract process does not look up table definitions, either from the database or from a data definitions file.

EXTRACT JOHN_PMP
-- The PASSTHRU parameter enables a data-pump Extract to bypass table
-- definition lookups.
PASSTHRU
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_PMP.dsc, APPEND
-- Use RMTHOST to identify a remote system and the TCP/IP port number on that
-- system where the Manager process is running.
RMTHOST dnvdev1, MGRPORT 7809
-- Use RMTTRAIL to specify a remote trail that was created with the ADD RMTTRAIL
-- command in GGSCI.
RMTTRAIL ./dirdat/AA
-- Use TABLE to specify the objects to pass through; the implementation of this
-- parameter varies depending on the process.
TABLE siebel.*;

 

 

Adding and Starting Extract and Data Pump

You can now add and start the Extract and Data Pump processes, using the parameter files saved in the ./dirprm directory:

-- Add Extract (capture)
GGSCI> ADD EXTRACT JOHN_EXT, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/AA, EXTRACT JOHN_EXT, MEGABYTES 500
-- Add Extract (pump)
GGSCI> ADD EXTRACT JOHN_PMP, BEGIN NOW, EXTTRAILSOURCE ./dirdat/AA
GGSCI> ADD RMTTRAIL ./dirdat/AA, EXTRACT JOHN_PMP, MEGABYTES 500
-- Start the processes and check their status
GGSCI> START EXTRACT JOHN_EXT
GGSCI> START EXTRACT JOHN_PMP
GGSCI> INFO EXTRACT JOHN_EXT
GGSCI> INFO EXTRACT JOHN_PMP

You can check that the Oracle GoldenGate Extract process is writing the committed transactions to the trail file and that this information is being propagated to the remote trail files by using the STATS command from the GGSCI interface. Alternatively, from within the LOGDUMP interface, you can use the COUNT command:

-- At the source system:
GGSCI> STATS EXTRACT JOHN_EXT
LOGDUMP> COUNT ./dirdat/AA000000
-- At the target system:
LOGDUMP> COUNT ./dirdat/AA000000

 

Create the Oracle GoldenGate Replicat Process at the Target System

The structure of the Oracle GoldenGate Replicat parameter file is similar to that of the Extract parameter file. In the Replicat configuration file, the tables to be replicated are specified with the MAP ... TARGET pattern. You should derive these entries from the DBA_STREAMS_RULES view for streams_type APPLY.

REPLICAT JOHN_REP
-- Use SETENV to set a value for any UNIX environment variable, for example
-- setting database environments such as ORACLE_HOME and ORACLE_SID.
SETENV (NLS_LANG = "UTF8")
-- Use USERID to specify the type of database authentication for GoldenGate to use.
USERID ggs_admin, PASSWORD <ggs_pwd>
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/JOHN_REP.dsc, APPEND
-- Use ASSUMETARGETDEFS when the source and target tables specified with a MAP
-- statement have the same column structure, such as when synchronizing a hot site.
-- Do not use it together with the COLMAP option; use a SOURCEDEFS file instead.
ASSUMETARGETDEFS
-- Use MAP to establish a relationship between one or more source and target tables.
MAP siebel.*, TARGET siebel.*;

 

Create the Oracle GoldenGate Replicat process, but do not start it yet:

GGSCI> ADD REPLICAT JOHN_REP, EXTTRAIL ./dirdat/AA
GGSCI> INFO REPLICAT JOHN_REP
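
If you want Replicat to maintain its checkpoints in a database table (generally recommended for recovery), the checkpoint table must exist before you add the Replicat; a sketch, with placeholder credentials and table name:

GGSCI> DBLOGIN USERID ggs_admin, PASSWORD <ggs_pwd>
GGSCI> ADD CHECKPOINTTABLE ggs_admin.ggs_checkpoint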

 

Transition from Oracle Streams to Oracle GoldenGate 

If you have successfully installed and configured Oracle GoldenGate at both the source and target sites, you are ready to begin the transition from Oracle Streams to Oracle GoldenGate.

Check for Open Transactions

Before switching over from Streams to GoldenGate, you need to make sure that all open transactions are captured by the GoldenGate Extract process. The best way to do this is to check the ggserr.log file for the timestamp at which the Extract process was added.
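
A quick way to find that timestamp is to search the error log in the GoldenGate installation directory (the path here is a placeholder):

$ grep -i "ADD EXTRACT JOHN_EXT" /u01/app/ggs/ggserr.log

Then compare that timestamp with the start times of any open transactions returned by the following query: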

SELECT vs.username, vs.program, vs.machine, vt.xidusn, vt.xidslot, vt.xidsqn,
       vt.start_time, TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS')
FROM gv$transaction vt, gv$session vs
WHERE vt.addr = vs.taddr
ORDER BY start_time;

 

The following query, run against the Streams Apply process on the target, displays the source transaction IDs that are being applied:

SELECT streams_name,
       streams_type,
       TO_CHAR(first_message_time,'YYYY-MM-DD HH24:MI:SS') start_time,
       xidusn,
       xidslt,
       xidsqn,
       transaction_id
FROM V$STREAMS_TRANSACTION
WHERE streams_type = 'APPLY';

 

Before you stop the Streams Apply process, you need to make sure all transactions are covered by either the Streams or the GoldenGate process. Check the V$TRANSACTION, V$STREAMS_APPLY_COORDINATOR, and V$STREAMS_TRANSACTION views for transaction start times.

Check on the apply side for the last transaction applied:

SQL> SELECT lwm_message_number,
  2         TO_CHAR(lwm_message_create_time,'YYYY-MM-DD HH24:MI:SS') message_time
  3  FROM V$STREAMS_APPLY_COORDINATOR;

 

Stop the Oracle Streams Apply Process at the Target System

At the target site, cleanly stop the Streams Apply process using the STOP_APPLY procedure in the DBMS_APPLY_ADM package. By default, the process stops once the current transaction has completed:

SQL> BEGIN
  2    DBMS_APPLY_ADM.STOP_APPLY('STREAM_DEMO_APPLY');
  3  END;
  4  /

SQL> SELECT apply_name, status, max_applied_message_number
  2  FROM dba_apply WHERE apply_name = 'STREAM_DEMO_APPLY';

 

You should verify that there are no issues with the Streams Apply process, so also check that there are no Apply errors recorded in the DBA_APPLY_ERROR view:

SQL> SELECT COUNT(*) FROM dba_apply_error WHERE apply_name = 'STREAM_DEMO_APPLY';
COUNT(*)
———-
0

 

Find the Last Applied System Change Number

Once the Streams Apply process is stopped and there are no errors, find the last applied change (SCN) from the replication system by querying the DBA_APPLY_PROGRESS view at the target system:

SQL> SELECT applied_message_number FROM dba_apply_progress
  2  WHERE apply_name = 'STREAM_DEMO_APPLY';

APPLIED_MESSAGE_NUMBER
----------------------
            2234450777

(This value will be used as the AFTERCSN for the GoldenGate Replicat process.)

 

Start the Oracle GoldenGate Replicat Process at the Target System

GGSCI> START REPLICAT JOHN_REP, AFTERCSN 2234450777
GGSCI> INFO REPLICAT JOHN_REP
GGSCI> STATS REPLICAT JOHN_REP

 

Stop the Oracle Streams Capture and Propagation Processes at the Source System

SQL> BEGIN
  2    DBMS_CAPTURE_ADM.STOP_CAPTURE('JOHN_CAP');
  3  END;
  4  /

SQL> SELECT capture_name, status FROM dba_capture
  2  WHERE capture_name = 'JOHN_CAP';

SQL> BEGIN
  2    DBMS_PROPAGATION_ADM.STOP_PROPAGATION('JOHN_PROB');
  3  END;
  4  /

SQL> SELECT propagation_name, status FROM dba_propagation
  2  WHERE propagation_name = 'JOHN_PROB';

 

Deconfiguring Oracle Streams

At this point, the Oracle GoldenGate processes are running and are responsible for replicating information from the source to the target system. The Streams processes are merely stopped, not removed. It is considered a best practice to remove only the processes that you are replacing with GoldenGate processes, as outlined below. At the very end, you might also remove the Streams administrator user.

On the Source System:

SQL> BEGIN
  2    dbms_capture_adm.drop_capture('JOHN_CAP', drop_unused_rule_sets => true);
  3  END;
  4  /

SQL> BEGIN
  2    dbms_propagation_adm.drop_propagation('JOHN_PROB', drop_unused_rule_sets => true);
  3  END;
  4  /

SQL> BEGIN
  2    dbms_streams_adm.remove_queue('STREAMS_QUEUE_CP', cascade => true, drop_unused_queue_table => true);
  3  END;
  4  /

SQL> DROP USER STRMADMIN CASCADE;

 

On the Target System: 

SQL> BEGIN
  2    dbms_apply_adm.delete_all_errors('STREAM_DEMO_APPLY');
  3  END;
  4  /

SQL> BEGIN
  2    dbms_apply_adm.drop_apply('STREAM_DEMO_APPLY', drop_unused_rule_sets => true);
  3  END;
  4  /

SQL> BEGIN
  2    dbms_streams_adm.remove_queue('STREAMS_QUEUE_AP', cascade => true, drop_unused_queue_table => true);
  3  END;
  4  /

SQL> DROP USER johnstream CASCADE;


We received some insider information from Oracle product managers about possible features of the next-generation Exadata X4 systems, expected later in 2013 or in 2014. Please note that this information may change by the time Oracle actually releases the product.

  • Oracle X4-2 and X4-8, if Oracle keeps the same naming
  • Will now support Oracle Virtual Machine (OVM)
  • X4-8 (4 CPUs only, due to NUMA constraints) and X4-2 (2 CPUs)
  • 10 to 12 cores per CPU (still not confirmed)
  • Up to 1 TB of RAM
  • The Oracle In-Memory Database option for 12c will run on Exadata X4

 


Oracle's Flagship Product: The SuperCluster M6-32

 

Another update from Oracle OpenWorld 2013: Oracle has announced its flagship product, the SuperCluster M6-32. This latest version is Oracle's fastest in-memory system yet on the SPARC chipset. The hardware specs are very impressive, and you can combine both databases and applications on one box, making it perfect for consolidation.


 
  • Oracle's fastest and most scalable engineered system
  • SuperCluster on SPARC is the fastest chipset on the market, beating the IBM P Series
  • Ideal for running mission-critical databases and applications in memory and consolidating the largest workloads
  • Up to 32 TB of memory
  • 32 processors, 12 cores per CPU, 384 cores total
  • Can scale out by adding more storage and servers
  • OLTP and data warehousing
  • Complex applications
  • Supports in-memory databases
  • Applications and databases can run simultaneously on the same system
  • Will support Solaris Containers (Zones)

 



Finally, an in-memory option in the Oracle RDBMS software. At this year's Oracle OpenWorld 2013, Larry Ellison talked about a new upcoming feature for Oracle Database 12c, the In-Memory Database option, which will allow simultaneous row-level storage (just like we've always stored data) and column-level storage (essentially an in-memory object structure), eliminating the need for non-selective, non-PK indexes.

This new feature should be quite simple to implement; all we'll need to do is set a new initialization parameter (inmemory_size) to an appropriate value based on the available memory on the server (a speculative sketch follows the feature list below). During Larry's demo, query performance improvements peaked at 1,390x on queries over several billion rows! This looks like it may give Exadata some competition, but that remains to be seen.

  • The feature can be used on traditional database servers
  • 100x faster queries: real-time analytics
  • Near-instantaneous query results
  • Works when querying an OLTP database or a data warehouse
  • 2x faster updates and deletes
  • Insert rows 3x to 4x faster
  • Join tables up to 10x faster
  • Data stored in BOTH row and column format
  • Less tuning and fewer indexes required
  • No SQL or application changes
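
Based on what was shown in the keynote, enabling the option should amount to sizing the new parameter and flagging the objects you want in the column store; a speculative sketch (the exact syntax was not final at the time of writing, and the size and table name are illustrative):

ALTER SYSTEM SET inmemory_size = 16G SCOPE=SPFILE;  -- reserve column-store memory; takes effect after restart
ALTER TABLE sales INMEMORY;                         -- flag a table for the in-memory column store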

If you saw the first FlexASM blog post, you know we installed and configured FlexASM and a CDB plus a couple of PDBs, all policy managed with a cardinality of 2. Now let's see what the configuration looks like; we can break it down using the wonderful crsctl and srvctl tools.

First let's ensure we are really running in FlexASM mode:

[oracle@rac02 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled


[oracle@rac02 ~]$ srvctl status serverpool -serverpool naboo
Server pool name: naboo
Active servers count: 2



[oracle@rac01 trace]$ crsctl get node role status -all
Node 'rac01' active role is 'hub'
Node 'rac03' active role is 'hub'
Node 'rac02' active role is 'hub'
Node 'rac04' active role is 'hub'



[oracle@rac01 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE  


You'll notice that we have 4 ASM listeners, one on each node in the cluster. You'll see the process on each node as follows:


[oracle@rac01 ~]$ ps -ef |grep -i asmnet

oracle    6646     1  0 12:19 ?        00:00:00 /u01/app/12.1.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit



ora.CRSDATA.DATAVOL1.advm
               ONLINE  ONLINE       rac01                    Volume device /dev/a
                                                             sm/datavol1-194 is o
                                                             nline,STABLE
               ONLINE  ONLINE       rac02                    Volume device /dev/a
                                                             sm/datavol1-194 is o
                                                             nline,STABLE
               ONLINE  OFFLINE      rac03                    Unable to connect to
                                                             ASM,STABLE
               ONLINE  ONLINE       rac04                    Volume device /dev/a
                                                             sm/datavol1-194 is o
                                                             nline,STABLE
The datavol1 ADVM resource runs on all the nodes where it should. In this case, we see that RAC03 is having some issues; let's look into that a little later. But I like the fact that crsctl tells us something is amiss here on node 3.
 
ora.CRSDATA.dg
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               OFFLINE OFFLINE      rac04                    STABLE


ora.FRA.dg
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               OFFLINE OFFLINE      rac04                    STABLE


The CRSDATA and FRA disk group resources are started on all nodes except node 4.



ora.LISTENER.lsnr
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE


We all know, as in 11gR2, that this is the Node listener.


ora.PDBDATA.dg
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               OFFLINE OFFLINE      rac04                    STABLE


The PDBDATA disk group resource is started on all nodes except node 4.



ora.crsdata.datavol1.acfs
               ONLINE  ONLINE       rac01                    mounted on /u02/app/
                                                             oracle/acfsmounts,ST
                                                             ABLE
               ONLINE  ONLINE       rac02                    mounted on /u02/app/
                                                             oracle/acfsmounts,ST
                                                             ABLE
               ONLINE  OFFLINE      rac03                    (2) volume /u02/app/
                                                             oracle/acfsmounts of
                                                             fline,STABLE
               ONLINE  ONLINE       rac04                    mounted on /u02/app/
                                                             oracle/acfsmounts,ST
                                                             ABLE


The ACFS file system resource for datavol1 is started on all nodes except node 3. I think the following has something to do with it :-). I'll need to debug this a bit later. I even tried:
[oracle@rac03 ~]$ asmcmd volenable --all
ASMCMD-9470: ASM proxy instance unavailable
ASMCMD-9471: cannot enable or disable volumes



ora.net1.network
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE
ora.ons
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE


The network resource (in my case I have only net1) and ONS are the same as in previous versions.


ora.proxy_advm
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  OFFLINE      rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE


Yep, since proxy_advm is not started on node 3, the file systems won't come online there... but again, I'll look at that later.
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac02                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac03                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac04                    STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac01                    169.254.90.36 172.16
                                                             .11.10,STABLE
ora.asm
      1        ONLINE  ONLINE       rac03                    STABLE
      2        ONLINE  ONLINE       rac01                    STABLE
      3        ONLINE  ONLINE       rac02                    STABLE


Since Flex ASM is running with a cardinality of 3, we have 3 ASM instances active.
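
If you want an ASM instance on every hub node instead, you can raise the Flex ASM cardinality; a sketch using the 12c srvctl syntax:

[oracle@rac01 ~]$ srvctl modify asm -count ALL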


ora.cvu
      1        ONLINE  ONLINE       rac01                    STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac01                    Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac01                    STABLE
ora.rac01.vip
      1        ONLINE  ONLINE       rac01                    STABLE
ora.rac02.vip
      1        ONLINE  ONLINE       rac02                    STABLE
ora.rac03.vip
      1        ONLINE  ONLINE       rac03                    STABLE
ora.rac04.vip
      1        ONLINE  ONLINE       rac04                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac02                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac03                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac04                    STABLE
ora.tatooine.db
      1        ONLINE  ONLINE       rac01                    Open,STABLE
      2        ONLINE  ONLINE       rac02                    Open,STABLE


As stated above, I specified a policy-managed database with a cardinality of 2, so I have 2 database instances running.
--------------------------------------------------------------------------------

Here's some other important supporting info on FlexASM:


[oracle@rac02 ~]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +CRSDATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM


[oracle@rac02 ~]$ srvctl status  filesystem
ACFS file system /u02/app/oracle/acfsmounts is mounted on nodes rac01,rac02,rac04


And here's what the database alert log has to say about FlexASM:

NOTE: ASMB registering with ASM instance as client 0x10001 (reg:1377584805)
NOTE: ASMB connected to ASM instance +ASM1 (Flex mode; client id 0x10001)
NOTE: ASMB rebuilding ASM server state
NOTE: ASMB rebuilt 2 (of 2) groups
SUCCESS: ASMB reconnected & completed ASM server state


Now for the interesting part: notice that ASM is not running on node 4:
[oracle@rac02 ~]$ srvctl status  asm -v

ASM is running on rac01,rac02,rac03
[oracle@rac02 ~]$ srvctl status  asm -detail
ASM is running on rac01,rac02,rac03



So how does a client (ocrdump, RMAN, asmcmd, etc.) connect to ASM if there is no ASM instance on that node? Let's test this using asmcmd on node 4. Notice that a pipe is created, and a connect string is generated and passed to ASMCMD to connect remotely to +ASM2 on node 2!


22-Sep-13 12:54 ASMCMD Foreground (PID = 14106):  Pipe /tmp/pipe_14106 has been found.
22-Sep-13 12:54 ASMCMD Background (PID = 14117):  Successfully opened the pipe /tmp/pipe_14106
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106):  Successfully opened the pipe /tmp/pipe_14106 in read mode
NOTE: Executing kfod /u01/app/12.1.0/grid/bin/kfod op=getclstype..
22-Sep-13 12:54 Printing the connection string
contype = 
driver = 
instanceName = <>
usr = 
ServiceName = <+ASM>
23-Sep-13 16:23 Successfully connected to ASM instance +ASM2
23-Sep-13 16:23 NOTE: Querying ASM instance to get list of disks
22-Sep-13 12:54 Registered Daemon process.
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106):  Closed pipe /tmp/pipe_14106.



Many people have been asking me about methods other than SQL*Plus for provisioning PDBs, such as OEM and DBCA. In this blog entry I'll use DBCA, just because it's simple to show. As I mentioned in my last PDB blog, the installer DBCA (the initial invocation) looks different from subsequent invocations (after the initial database creation).

The main DBCA screen shows the following pages; we will choose Manage Pluggable Databases.


Choose the CDB. Note that you could have many CDBs on the same node or RAC cluster.


We choose the PDB that we created in Part 1 of this blog series.


Ahh... we've got to open the PDB first. As before:

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or:

CDB$ROOT@YODA> alter pluggable database all open;


Now we can add support for and configure Database Vault; Label Security can be configured here as well.
It would have been nice to be able to enable and modify Resource Manager along with other PDB tasks,
but I get that this DBCA workflow is really geared toward the PDB operations (plug, unplug, create, and destroy a PDB).
The bulk of the PDB admin tasks are provided in EM.


Let’s do a new PDB creation for grins 🙂


Specify the PDB name, storage location, and a default tablespace. Again, it would have been nice to be able to specify a TEMP tablespace too, but that was left out.


Progress...


And completion... pretty straightforward.



Once we have installed the 12.1 database software, we can create the container database and the pluggable databases. In my case I did a software-only install and then manually executed DBCA.

In this blog entry I'll show the screens that walk through the configuration of the "first" database. I noticed that once DBCA has been used to create the initial database, the capabilities and options (screens) DBCA offers are different; i.e., it is much more aligned toward creating and managing additional databases. I'll show those screens in Part 3 of the PDB series.

So let's get started by executing:
$ $ORACLE_HOME/bin/dbca


Choose Advanced mode for a policy-managed database, or use "Default Configuration". Being a big promoter of policy-managed databases, and since I have 4 RAC nodes (my best-practice threshold for choosing policy managed), I'll choose that.


I'll pick a global database name and choose the PDB option, including how many PDBs to create (with a name prefix).


Pick a server pool name; I chose a cardinality of 2.


Define the Management Options


Choose the Storage locations


Define the Database Vault owner and also the separate account manager. Note the user name definitions.


And now the finish



Oracle Flashback Guaranteed Restore Point Misunderstanding

Nabil Nawaz, Viscosity NA

 

Recently I had a conversation with a DBA I work with regarding Guaranteed Restore Points (GRPs) in Oracle 11gR2. Their understanding was that when a GRP is created, regardless of whether the database flashback feature is on or off, only the flashback logs necessary to flash back (rewind) the database to the GRP's point in time would be kept. In other words, they believed the flashback logs for a GRP would never grow in size, since Oracle would not need them. This conversation came up when they had to wait several minutes to drop a GRP; they did not expect it to take so long, since there should have been only a limited amount of flashback logs to drop. In reality, that understanding is not correct, and here is why; see the supporting information below.

For example, if you create a new GRP in your database, the view V$RESTORE_POINT will be populated as follows. Note the STORAGE_SIZE column, which I rounded to megabytes (MB); in this case the new GRP is only 6 MB.
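
Output like the table below can be produced with a query along these lines (the rounding and date formatting are mine):

SELECT guarantee_flashback_database,
       ROUND(storage_size/1024/1024) "STORAGE_SIZE(MB)",
       TO_CHAR(time,'MM/DD/YYYY HH24:MI') "TIME",
       name
FROM v$restore_point
WHERE guarantee_flashback_database = 'YES';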

GUARANTEE_FLASHBACK_DATABASE  STORAGE_SIZE(MB)  TIME             NAME
----------------------------  ----------------  ---------------  --------------
YES                           6                 8/16/2013 11:24  BEFORE_RELEASE

 

Let's take a look at another GRP that has existed for some time in another database; note that I rounded the STORAGE_SIZE column to gigabytes (GB), and in this case the GRP is 75 GB in size. This shows that when a GRP is created it is small, but as activity runs on the database it will grow; in fact, ALL flashback logs are retained from the time the GRP is created until it is finally dropped.

 

GUARANTEE_FLASHBACK_DATABASE  STORAGE_SIZE(GB)  TIME            NAME
----------------------------  ----------------  --------------  ------------
YES                           75                7/2/2013 16:43  BEFORE_REL_1
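
To see how much recovery-area space flashback logs consume overall, you can also check the FRA usage view; a sketch (in 11gR2 the view is available as V$FLASH_RECOVERY_AREA_USAGE or V$RECOVERY_AREA_USAGE):

SELECT file_type, percent_space_used, percent_space_reclaimable
FROM v$flash_recovery_area_usage
WHERE file_type = 'FLASHBACK LOG';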

 

When you drop a GRP on a database that has seen a lot of activity since the GRP was created, you will see output like the following in the alert log, with the message "Deleted Oracle managed file" repeated:

Fri Aug 16 11:21:17 2013
Drop guaranteed restore point BEFORE_RELEASE
Deleted Oracle managed file +RECO_EXAD/odssit/flashback/log_1.5780.819737031
Deleted Oracle managed file +RECO_EXAD/odssit/flashback/log_2.42344.819737033
Deleted Oracle managed file +RECO_EXAD/odssit/flashback/log_3.5651.819737037
Deleted Oracle managed file +RECO_EXAD/odssit/flashback/log_4.5322.819737041
Deleted Oracle managed file +RECO_EXAD/odssit/flashback/log_5.51038.819737285

 

The Oracle Support documentation also states that all flashback logs will be kept to satisfy the restore point:

"If you enable Flashback Database and define one or more guaranteed restore points, then the database performs normal flashback logging. In this case, the recovery area retains the flashback logs required to flash back to any arbitrary time between the present and the earliest currently defined guaranteed restore point. Flashback logs are not deleted in response to space pressure if they are required to satisfy the guarantee."


Steps to Enable the BPDU Filter on a Cisco 4948 Switch for Outside Connectivity for Exadata X2

By Nabil Nawaz, Viscosity NA.

We have an Exadata X2 system that we support at a managed hosting datacenter facility. One fine day, the Juniper switch that allows the Exadata system to communicate with the outside world stopped working. Eventually we found out that the hosting facility had enabled the BPDU filter on the Juniper switch, and in turn we needed to apply the same setup on our Cisco switch.

Below is a high-level diagram of our setup in the datacenter.

[Diagram: Exadata X2 to Cisco 4948 to Juniper switch layout]

  • The Exadata X2 Database Machine connects first to the Cisco 4948 switch.
  • The Cisco switch connects to the Juniper switch provided by the hosting facility.
  • The Juniper switch is the gateway to outside internet traffic.

  

What is a BPDU filter?

Bridge Protocol Data Units, also known as BPDUs, play a fundamental part in a spanning-tree topology.

The Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged Ethernet local area network. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. Spanning tree also allows a network design to include spare (redundant) links to provide automatic backup paths if an active link fails, without the danger of bridge loops, or the need for manual enabling/disabling of these backup links.

BPDUs are sent out by a switch to exchange information about bridge IDs and root path costs. Exchanged every 2 seconds by default, BPDUs allow switches to keep track of network changes and to decide when to block or forward ports to ensure a loop-free topology. A BPDU filter disables spanning tree on a port, so the port does not participate in STP and loops may occur.

For more information on Spanning Tree Protocol, please refer to the Wikipedia or Cisco documentation links below.

http://en.wikipedia.org/wiki/Spanning_Tree_Protocol

http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_55_se/configuration/guide/swstpopt.html#wp1002608

 

Commands to enable the BPDU filter

 

  • Telnet to the Cisco switch

$ telnet IPADDRESS

  • Enter privileged (enable) mode on the switch

telnet> enable

 

  • Prepare to configure the switch

ciscoswitch-ip# configure terminal

Enter configuration commands, one per line.  End with CNTL/Z.

ciscoswitch-ip(config)#interface GigabitEthernet1/48

ciscoswitch-ip(config-if)#

  • Enable the BPDU filter

ciscoswitch-ip(config-if)# spanning-tree bpdufilter enable

ciscoswitch-ip(config-if)# end

 

  • Save the configuration to the startup configuration

 

ciscoswitch-ip# copy running-config startup-config

Destination filename [startup-config]?

 

Building configuration…

Compressed configuration from 3889 bytes to 1546 bytes[OK]

ciscoswitch-ip#reload

Proceed with reload? [confirm]

Connection closed by foreign host

 

  • Verify the configuration and that the BPDU filter is enabled

ciscoswitch-ip# show running-config

ciscoswitch-ip# show interfaces status

ciscoswitch-ip# show spanning-tree interface GigabitEthernet1/48 portfast

interface GigabitEthernet1/48

media-type rj45

spanning-tree bpdufilter enable