There are cases where we need to ensure that large-packet addressability exists end to end. This is needed to verify the configuration for non-standard packet sizes, e.g., an MTU of 9000 (jumbo frames), such as when we are deploying a NAS or backup server across the network.

Setting the MTU can be done by editing the configuration script for the relevant interface in /etc/sysconfig/network-scripts/. In our example, we will use the eth1 interface, thus the file to edit would be ifcfg-eth1.

Add a line to specify the MTU, for example:
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.20.2
NETMASK=255.255.255.0
MTU=9000

Assuming the MTU is set in the file, just do an ifdown eth1 followed by an ifup eth1.
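
For example (the ip link form is a runtime-only alternative on systems with iproute2; it does not persist across reboots):

# bounce the interface so the MTU in ifcfg-eth1 takes effect
ifdown eth1 && ifup eth1

# or change it on the fly (not persistent):
ip link set dev eth1 mtu 9000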
An ifconfig eth1 will tell you if it's set correctly:

eth1 Link encap:Ethernet HWaddr 00:0F:EA:94:xx:xx
inet addr:192.168.20.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20f:eaff:fe91:407/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
RX packets:141567 errors:0 dropped:0 overruns:0 frame:0
TX packets:141306 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:101087512 (96.4 MiB) TX bytes:32695783 (31.1 MiB)
Interrupt:18 Base address:0xc000

To validate end-to-end handling of MTU 9000 packets:

Execute the following on Linux systems:

ping -M do -s 8972 [destinationIP]
For example: ping -M do -s 8972 datadomain.viscosityna.com

The reason for 8972 on Linux/Unix systems is that the -s payload size excludes the 28 bytes of headers: the 8-byte ICMP header plus the 20-byte IP header. Therefore, take 9000 and subtract 28 = 8972.

[root@racnode01]# ping -s 8972 -M do datadomain.viscosityna.com
PING datadomain.viscosityna.com. (192.168.20.32) 8972(9000) bytes of data.
8980 bytes from racnode1.viscosityna.com. (192.168.20.2): icmp_seq=0 ttl=64 time=0.914 ms

To illustrate what happens when proper MTU packet addressability is not in place, I can set a larger packet size in the ping (8993). Because the packet would have to be fragmented, you will see
“Frag needed and DF set”. In this example, the ping command uses -s to set the packet size, and -M do sets the Do Not Fragment bit.

[root@racnode01]# ping -s 8993 -M do datadomain.viscosityna.com
PING datadomain.viscosityna.com. (192.168.20.32) 8993(9001) bytes of data.
From racnode1.viscosityna.com. (192.168.20.2) icmp_seq=0 Frag needed and DF set (mtu = 9000)
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.859/0.955/1.167/0.109 ms, pipe 2

By adjusting the packet size, you can figure out what the MTU for the path is. This will be the lowest MTU allowed by any device in the path: the switch, the source or target node, or anything else in between.
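
Here's a quick sketch of that probing, reusing the same destination host; it walks payload sizes down until one passes with DF set:

#!/bin/bash
# path MTU = largest passing ICMP payload + 28 bytes of headers
for size in 8972 8000 4000 2000 1472; do
    if ping -c 1 -W 1 -M do -s "$size" datadomain.viscosityna.com >/dev/null 2>&1; then
        echo "largest unfragmented payload: $size (path MTU = $((size + 28)))"
        break
    fi
done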

Finally, another way to verify that the correct MTU size is in use is the command 'netstat -a -i -n' (the MTU column should show 9000 when you are testing jumbo frames).


We are about to apply 12.1.0.2 PSU1 (19392646) on a Standalone Cluster. opatchauto applies the patch to the RDBMS home as well as the Grid Infrastructure home in a single run.
Here's the output in case anyone wants to compare:

[root@ol64afd OPatch]# ./opatchauto apply -analyze /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646 -ocmrf ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version : 12.1.0.2.0
Running from : /u01/app/oracle/product/12.1.0/grid

opatchauto log file: /u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-03-29_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/oracle/product/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/database

Configuration Validation: Successful

Patch Location: /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646
Grid Infrastructure Patch(es): 19303936 19392590 19392604
RAC Patch(es): 19303936 19392604

Patch Validation: Successful
Command “/u01/app/oracle/product/12.1.0/database/OPatch/opatch version -oh /u01/app/oracle/product/12.1.0/database -invPtrLoc /u01/app/oracle/product/12.1.0/grid/oraInst.loc -v2c 12.1.0.1.5” execution failed:
bash: /u01/app/oracle/product/12.1.0/database/OPatch/opatch: No such file or directory

For more details, please refer to the log file “/u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-03-29_analyze.debug.log”.

Apply Summary:

Following patch(es) failed to be analyzed:
GI Home: /u01/app/oracle/product/12.1.0/grid: 19303936, 19392590, 19392604
RAC Home: /u01/app/oracle/product/12.1.0/database: 19303936, 19392604

opatchauto failed with error code 2.
[root@ol64afd OPatch]#

The analyze failed because the database home is missing OPatch. We need to copy the 12.1.0.2.1 OPatch into Grid_HOME and DB_HOME, then re-run:
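
Roughly, the copy looks like this (a sketch; patch 6880880 is the standard OPatch download, but the exact zip name and staging path here are assumptions):

# as the software owner of each home, swap in the newer OPatch
cd /u01/app/oracle/product/12.1.0/grid
mv OPatch OPatch.orig
unzip -q /tmp/p6880880_121020_Linux-x86-64.zip

cd /u01/app/oracle/product/12.1.0/database
mv OPatch OPatch.orig
unzip -q /tmp/p6880880_121020_Linux-x86-64.zip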

[root@ol64afd OPatch]# ./opatchauto apply -analyze /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646 -ocmrf ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version : 12.1.0.2.0
Running from : /u01/app/oracle/product/12.1.0/grid

opatchauto log file: /u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-05-43_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/oracle/product/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/database

Configuration Validation: Successful

Patch Location: /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646
Grid Infrastructure Patch(es): 19303936 19392590 19392604
RAC Patch(es): 19303936 19392604

Patch Validation: Successful

Analyzing patch(es) on “/u01/app/oracle/product/12.1.0/database” …
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936” successfully analyzed on “/u01/app/oracle/product/12.1.0/database” for apply.
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604” successfully analyzed on “/u01/app/oracle/product/12.1.0/database” for apply.

Analyzing patch(es) on “/u01/app/oracle/product/12.1.0/grid” …
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936” successfully analyzed on “/u01/app/oracle/product/12.1.0/grid” for apply.
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392590” successfully analyzed on “/u01/app/oracle/product/12.1.0/grid” for apply.
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604” successfully analyzed on “/u01/app/oracle/product/12.1.0/grid” for apply.

Apply Summary:
Following patch(es) are successfully analyzed:
GI Home: /u01/app/oracle/product/12.1.0/grid: 19303936, 19392590, 19392604
RAC Home: /u01/app/oracle/product/12.1.0/database: 19303936, 19392604

opatchauto succeeded.
[root@ol64afd OPatch]# ./opatchauto apply /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646 -ocmrf ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version : 12.1.0.2.0
Running from : /u01/app/oracle/product/12.1.0/grid

opatchauto log file: /u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-07-28_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/oracle/product/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/database

Configuration Validation: Successful

Patch Location: /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646
Grid Infrastructure Patch(es): 19303936 19392590 19392604
RAC Patch(es): 19303936 19392604

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0/database) … Successful
Following database(s) and/or service(s) were stopped and will be restarted later during the session: yoda

Applying patch(es) to “/u01/app/oracle/product/12.1.0/database” …
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936” successfully applied to “/u01/app/oracle/product/12.1.0/database”.
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604” successfully applied to “/u01/app/oracle/product/12.1.0/database”.

Stopping CRS … Successful

Applying patch(es) to “/u01/app/oracle/product/12.1.0/grid” …
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936” successfully applied to “/u01/app/oracle/product/12.1.0/grid”.
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392590” successfully applied to “/u01/app/oracle/product/12.1.0/grid”.
Patch “/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604” successfully applied to “/u01/app/oracle/product/12.1.0/grid”.

Starting CRS … Successful

Starting RAC (/u01/app/oracle/product/12.1.0/database) … Successful

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/oracle/product/12.1.0/grid: 19303936, 19392590, 19392604
RAC Home: /u01/app/oracle/product/12.1.0/database: 19303936, 19392604


With busy weeks of IOUG and other conferences coming up, we have little time to blog… So, in the coming weeks, I'm just going to do some “baby” blogs; i.e., some quick tips and new features.

Here's a new 12c feature that simplifies database snapshots:

Snapshot Optimized Recovery

Many of you take snapshot copies of databases, either via server-side snapshot tools or storage-level snapshots. Traditionally this required a cold database or putting the database in hot-backup mode, and there are downsides to both options.

In Oracle 12c, snapshots taken with third-party technologies that meet the following requirements no longer require the database to be placed in backup mode:

  • The database is crash consistent at the point of the snapshot.
  • Write ordering is preserved for each file within a snapshot.
  • The snapshot stores the time at which it was completed.

The new RECOVER ... SNAPSHOT TIME command is introduced to recover a snapshot to a consistent point, without any additional manual procedures for point-in-time recovery needs.
This command performs the recovery in a single step. Recovery can be either to the current time or to a point in time after the snapshot was taken.
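
A minimal sketch of what that looks like from RMAN, with a hypothetical snapshot time (the date literal must match your NLS_DATE_FORMAT; recovering to a later point in time would add an UNTIL TIME clause and an OPEN RESETLOGS):

rman target / <<'EOF'
STARTUP MOUNT;
RECOVER DATABASE SNAPSHOT TIME '2014-10-30 16:00:00';
ALTER DATABASE OPEN;
EOF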

There is a bit of upfront overhead, though; e.g., additional redo logging and a complete database checkpoint.


We got some insider information from Oracle product managers on possible features for the next generation of Exadata, the X4 systems, which will hopefully be released later in 2013 or 2014. Please note this information may change by the time Oracle actually releases the product.

  • Oracle X4-2 and X4-8, if Oracle keeps the same naming
  • Will now support Oracle Virtual Machine (OVM)
  • X4-8 (4 CPUs only, due to NUMA constraints) and X4-2 (2 CPUs)
  • 10 to 12 cores per CPU, still not confirmed
  • Up to 1 TB of RAM
  • Oracle In-Memory Database option for 12c will run on Exadata X4

 


If you saw the first FlexASM blog, you know we installed and configured FlexASM and a CDB plus a couple of PDBs, with the database policy managed at a cardinality of 2. Now let's see what the configuration looks like; we can break it down using the wonderful crsctl and srvctl tools.

First let's ensure we are really running in FlexASM mode:

[oracle@rac02 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
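
Incidentally, if a 12c cluster is still running standard (non-Flex) ASM, it can be converted with asmca. A rough sketch, where the interface/subnet and port values are assumptions for this lab (asmca then generates a script to be run as root on each node):

$ asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.20.0 -asmListenerPort 1521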


[oracle@rac02 ~]$ srvctl status   serverpool -serverpool naboo
Server pool name: naboo
Active servers count: 2



[oracle@rac01 trace]$ crsctl get node role status -all
Node 'rac01' active role is 'hub'
Node 'rac03' active role is 'hub'
Node 'rac02' active role is 'hub'
Node 'rac04' active role is 'hub'



[oracle@rac01 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE  


Notice that we have four ASM listeners, one on each node in the cluster. You'll see a process like the following on each node:


[oracle@rac01 ~]$ ps -ef |grep -i asmnet

oracle    6646     1  0 12:19 ?        00:00:00 /u01/app/12.1.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit



ora.CRSDATA.DATAVOL1.advm
               ONLINE  ONLINE       rac01                    Volume device /dev/a
                                                             sm/datavol1-194 is o
                                                             nline,STABLE
               ONLINE  ONLINE       rac02                    Volume device /dev/a
                                                             sm/datavol1-194 is o
                                                             nline,STABLE
               ONLINE  OFFLINE      rac03                    Unable to connect to
                                                             ASM,STABLE
               ONLINE  ONLINE       rac04                    Volume device /dev/a
                                                             sm/datavol1-194 is o
                                                             nline,STABLE
The datavol1 ADVM resource runs on all the nodes where it should. In this case, though, we see that rac03 is having some issues.
Let's look into that a little later, but I like the fact that crsctl tells us something is amiss here on node 3.
 
ora.CRSDATA.dg
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               OFFLINE OFFLINE      rac04                    STABLE


ora.FRA.dg
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               OFFLINE OFFLINE      rac04                    STABLE


The CRSDATA and FRA disk group resources are started on all nodes except node 4.



ora.LISTENER.lsnr
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE


We all know, as in 11gR2, that this is the node listener.


ora.PDBDATA.dg
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               OFFLINE OFFLINE      rac04                    STABLE


The PDBDATA disk group resource is started on all nodes except node 4.



ora.crsdata.datavol1.acfs
               ONLINE  ONLINE       rac01                    mounted on /u02/app/
                                                             oracle/acfsmounts,ST
                                                             ABLE
               ONLINE  ONLINE       rac02                    mounted on /u02/app/
                                                             oracle/acfsmounts,ST
                                                             ABLE
               ONLINE  OFFLINE      rac03                    (2) volume /u02/app/
                                                             oracle/acfsmounts of
                                                             fline,STABLE
               ONLINE  ONLINE       rac04                    mounted on /u02/app/
                                                             oracle/acfsmounts,ST
                                                             ABLE


The ACFS file system resource for datavol1 is started on all nodes except node 3.
I think the following has something to do with it :-). I'll need to debug this a bit later. I even tried:
[oracle@rac03 ~]$ asmcmd volenable --all
ASMCMD-9470: ASM proxy instance unavailable
ASMCMD-9471: cannot enable or disable volumes



ora.net1.network
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE
ora.ons
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  ONLINE       rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE


The network resource (in my case I have only net1) and ONS are the same as in previous versions.


ora.proxy_advm
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
               ONLINE  OFFLINE      rac03                    STABLE
               ONLINE  ONLINE       rac04                    STABLE


Yep, since proxy_advm is not started on node 3, the file systems won't come online there… but again, I'll look at that later.
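
When I circle back to node 3, the first thing I'll probably try (a sketch; this assumes the underlying ASM connectivity issue is fixed first) is simply restarting the proxy resource:

$ crsctl start res ora.proxy_advm -n rac03
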
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac02                    STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac03                    STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac04                    STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac01                    169.254.90.36 172.16
                                                             .11.10,STABLE
ora.asm
      1        ONLINE  ONLINE       rac03                    STABLE
      2        ONLINE  ONLINE       rac01                    STABLE
      3        ONLINE  ONLINE       rac02                    STABLE


Since we have an ASM cardinality of 3, we have three ASM resources active.
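
Flex ASM defaults to a cardinality of 3; if you wanted an ASM instance on every hub node, you could raise the count. A sketch, run as the grid owner:

$ srvctl modify asm -count ALL     # or a specific number, e.g. 4
$ srvctl status asm -detail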


ora.cvu
      1        ONLINE  ONLINE       rac01                    STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac01                    Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac01                    STABLE
ora.rac01.vip
      1        ONLINE  ONLINE       rac01                    STABLE
ora.rac02.vip
      1        ONLINE  ONLINE       rac02                    STABLE
ora.rac03.vip
      1        ONLINE  ONLINE       rac03                    STABLE
ora.rac04.vip
      1        ONLINE  ONLINE       rac04                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac02                    STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac03                    STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac04                    STABLE
ora.tatooine.db
      1        ONLINE  ONLINE       rac01                    Open,STABLE
      2        ONLINE  ONLINE       rac02                    Open,STABLE


As stated above, I specified a policy-managed database with a cardinality of 2, so I have two database instances running.
--------------------------------------------------------------------------------

Here's some other important supporting info on FlexASM:


[oracle@rac02 ~]$ srvctl config asm -detail
ASM home: /u01/app/12.1.0/grid
Password file: +CRSDATA/orapwASM
ASM listener: LISTENER
ASM is enabled.
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM


[oracle@rac02 ~]$ srvctl status  filesystem
ACFS file system /u02/app/oracle/acfsmounts is mounted on nodes rac01,rac02,rac04


And here's what the database's alert log has to say about FlexASM:

NOTE: ASMB registering with ASM instance as client 0x10001 (reg:1377584805)
NOTE: ASMB connected to ASM instance +ASM1 (Flex mode; client id 0x10001)
NOTE: ASMB rebuilding ASM server state
NOTE: ASMB rebuilt 2 (of 2) groups
SUCCESS: ASMB reconnected & completed ASM server state


Now for the interesting part.
Notice that ASM is not running on node 4:
[oracle@rac02 ~]$ srvctl status  asm -v

ASM is running on rac01,rac02,rac03
[oracle@rac02 ~]$ srvctl status  asm -detail
ASM is running on rac01,rac02,rac03



So, how does a client (ocrdump, RMAN, asmcmd, etc.) connect to ASM if there is no ASM instance on that node? Well, let's test this using asmcmd on node 4. Notice that a pipe is created, and a connect string is generated and passed to ASMCMD to connect remotely to +ASM2 on node 2!


22-Sep-13 12:54 ASMCMD Foreground (PID = 14106):  Pipe /tmp/pipe_14106 has been found.
22-Sep-13 12:54 ASMCMD Background (PID = 14117):  Successfully opened the pipe /tmp/pipe_14106
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106):  Successfully opened the pipe /tmp/pipe_14106 in read mode
NOTE: Executing kfod /u01/app/12.1.0/grid/bin/kfod op=getclstype..
22-Sep-13 12:54 Printing the connection string
contype = 
driver = 
instanceName = <>
usr = 
ServiceName = <+ASM>
23-Sep-13 16:23 Successfully connected to ASM instance +ASM2
23-Sep-13 16:23 NOTE: Querying ASM instance to get list of disks
22-Sep-13 12:54 Registered Daemon process.
22-Sep-13 12:54 ASMCMD Foreground (PID = 14106):  Closed pipe /tmp/pipe_14106.



Many people have asked me about methods besides SQL*Plus for provisioning PDBs, such as OEM, DBCA, etc. In this blog entry I'll use DBCA, just because it's simple to show. As I mentioned in my last PDB blog,
the initial DBCA invocation (at install time) looks different from subsequent invocations (after the initial database creation).

The main DBCA screen shows the following options. We will choose Manage Pluggable Databases.

[screenshot]

Choose the CDB. Note that you could have many CDBs on the same node or RAC cluster.

[screenshot]

We choose the PDB that we created in Part 1 of this blog series.

[screenshot]

Ah… we've got to open the PDB first. As before:

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or CDB$ROOT@YODA> alter pluggable database all open;

[screenshot]

Now we can add support for and configure Database Vault; Label Security can be configured as well.
It would have been nice to be able to enable and modify Resource Manager and other PDB tasks here,
but I get that this DBCA workflow is really geared toward the PDB lifecycle operations (plug, unplug, create, and destroy).
The bulk of the PDB admin tasks are provided in EM.

[screenshot]

Let’s do a new PDB creation for grins 🙂

[screenshot]

Specify the PDB name, storage location, and a default tablespace. Again, it would have been nice to be able to specify a TEMP tablespace too, but that was left out.
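
You can always add a dedicated TEMP tablespace afterwards from SQL*Plus; a minimal sketch, assuming our pdbobi PDB and an arbitrary tempfile size:

sqlplus / as sysdba <<'EOF'
alter session set container=pdbobi;
-- create a new temporary tablespace and make it the PDB default
create temporary tablespace pdbtemp tempfile size 100M;
alter pluggable database default temporary tablespace pdbtemp;
EOF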

[screenshot]

Progress ….

[screenshot]

And completion… pretty straightforward.

[screenshot]


Once we have installed the 12.1 database software, we can create the container database and the pluggable databases. In my case I did a software-only install and then manually executed DBCA.

In this blog entry I'll show the screens that walk through the configuration of the “first” database. I noticed that once DBCA has been used to create the initial database, the capabilities and options (screens) in DBCA are different; i.e., it is much more aligned with creating/managing additional databases. I'll show those screens in Part 3 of this PDB series.

So let’s get started by executing
$ $ORACLE_HOME/bin/dbca

[screenshot]

Choose Advanced mode for a Policy Managed database, or use “Default Configuration”. Being a big promoter of policy-managed databases, and since I have 4 RAC nodes (my best-practice threshold for choosing policy managed), I'll choose that.

[screenshot]

I'll pick a global database name and choose the PDB option, along with how many PDBs to create (with a name prefix).

[screenshot]

Pick a server pool name; I chose a cardinality of 2.

[screenshot]

Define the Management Options

[screenshot]

Choose the Storage locations

[screenshot]

Define the Database Vault owner and also the separate account manager. Note the user name definitions.

[screenshot]

And now the finish

[screenshot]


Steps to enable the bpdufilter on a Cisco 4948 Switch for outside connectivity for Exadata X2

By Nabil Nawaz, Viscosity NA.

We support an Exadata X2 system at a managed hosting datacenter facility. One fine day, the Juniper switch that allows the Exadata system to communicate with the outside world stopped working. Eventually we found out the hosting facility had enabled the BPDU filter on the Juniper switch, and in turn we needed to do the same setup on our Cisco switch.

Below is a diagram of the high-level layout of our setup in the datacenter.

[diagram: Exadata X2 → Cisco 4948 → Juniper switch → internet]

  • The Exadata X2 Database Machine connects first to the Cisco 4948 Switch.
  • The Cisco switch connects to the Juniper Switch provided by the hosting facility.
  • Juniper Switch is the gateway to outside internet traffic.

  

A BPDU filter: what is that?

Bridge Protocol Data Units, also known as BPDUs, play a fundamental part in a spanning-tree topology.

The Spanning Tree Protocol (STP) is a network protocol that ensures a loop-free topology for any bridged Ethernet local area network. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. Spanning tree also allows a network design to include spare (redundant) links to provide automatic backup paths if an active link fails, without the danger of bridge loops, or the need for manual enabling/disabling of these backup links.

BPDUs are sent out by a switch to exchange information about bridge IDs and root path costs. Exchanged every 2 seconds by default, BPDUs allow switches to keep track of network changes and decide when to block or forward ports to ensure a loop-free topology. A BPDU filter disables spanning tree on a port, so the port no longer participates in STP and loops may occur.

For more information on Spanning Tree Protocol, please refer to the Wikipedia or Cisco documentation links below.

http://en.wikipedia.org/wiki/Spanning_Tree_Protocol

http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_55_se/configuration/guide/swstpopt.html#wp1002608

 

Commands to enable the BPDU filter:

 

  • Telnet to the Cisco switch

$ telnet IPADDRESS

  • Enable command line for switch

telnet> enable

 

  • Prepare to configure switch

ciscoswitch-ip# configure terminal

Enter configuration commands, one per line.  End with CNTL/Z.

ciscoswitch-ip(config)#interface GigabitEthernet1/48

ciscoswitch-ip(config-if)#

  • Enable BPDU filter

ciscoswitch-ip(config-if)# spanning-tree bpdufilter enable

ciscoswitch-ip(config-if)# end

 

  • Save the configuration to the startup configuration

 

ciscoswitch-ip# copy running-config startup-config

Destination filename [startup-config]?

 

Building configuration…

Compressed configuration from 3889 bytes to 1546 bytes[OK]

ciscoswitch-ip#reload

Proceed with reload? [confirm]

Connection closed by foreign host

 

  • Verify the configuration and that the BPDU filter is enabled

ciscoswitch-ip# show running-config

ciscoswitch-ip# show interfaces status

ciscoswitch-ip# show spanning-tree interface GigabitEthernet1/48 portfast

interface GigabitEthernet1/48

media-type rj45

spanning-tree bpdufilter enable


Steps to change the password on a Cisco Switch

By Nabil Nawaz, Viscosity NA

These steps were used to change the password on a Cisco Switch on Exadata X2.

  • Telnet to the Cisco switch (IP address of switch)

$ telnet <IPADDRESS>

  • Enable command line for switch

telnet> enable

  • Prepare to configure switch

ciscoswitch-ip# configure terminal

exapsw-ip(config)#line vty 0 15

exapsw-ip(config-line)#login

  • Change the password

exapsw-ip(config-line)#password newpassword

exapsw-ip(config-line)#login

exapsw-ip(config-line)#end

 

  • Save the changes to the switch

exapsw-ip#write memory

Building configuration…

Compressed configuration from 4001 bytes to 1608 bytes[OK]

exapsw-ip#

 

  • Try logging in again to verify the password change

 


Consolidate where possible… isolate where necessary.

In the last blog I mentioned the benefits of schema consolidation and how it dovetails directly into a 12c Oracle Database PDB implementation.
In this part 2 of the PDB blog series, we will get a little more detailed and do a basic walk-through of a PDB, from "cradle to grave". We'll use SQL*Plus as the tool of choice; next time I'll show DBCA.


First, verify that we are truly on a 12c Oracle database:

SQL> select instance_name, version, status, con_id from v$instance;

INSTANCE_NAME	 VERSION	        STATUS	    CON_ID
---------------- ----------------- ------------ ----------
yoda		      12.1.0.1.0	   OPEN 		 0



The v$database view tells us that we are dealing with a CDB-based database:
 
CDB$ROOT@YODA> select cdb, con_id from v$database;

CDB	CON_ID
--- ----------
YES	     0


or a more elegant way:

CDB$ROOT@YODA> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?" , OPEN_MODE, CON_ID from V$DATABASE;

NAME	  Multitenant Option ?	     OPEN_MODE	              CON_ID
--------- -------------------------- -------------------- ----------
YODA	  Multitenant Option enabled READ ONLY	                  0


There are a lot of new views and tables to support PDBs/CDBs, but we'll focus on the V$PDBS and CDB_PDBS views:

CDB$ROOT@YODA> desc v$pdbs
 Name                            
 --------
 CON_ID                             
 DBID                                   
 CON_UID                              
 GUID                                   
 NAME                                   
 OPEN_MODE                             
 RESTRICTED                              
 OPEN_TIME                              
 CREATE_SCN                             
 TOTAL_SIZE     

CDB$ROOT@YODA> desc cdb_pdbs
 Name					  
 --------
 PDB_ID 				    
 PDB_NAME				    
 DBID					    
 CON_UID				 
 GUID						  
 STATUS 					  
 CREATION_SCN		
 CON_ID 				
                        

The SQL*Plus show con_name (container name) command displays the container we are connected to, and show con_id the container ID:

CDB$ROOT@YODA> show con_name


CON_NAME
------------------------------
CDB$ROOT



CDB$ROOT@YODA> show con_id

CON_ID
------------------------------
1


Let's see which PDBs are created in this CDB and their current state:

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;


    CON_ID      DBID NAME                    	     TOTAL_SIZE
---------- ---------- ------------------------------ ----------
      2   4066465523 PDB$SEED                          283115520
      3    483260478 PDBOBI                                    0


CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;


    CON_ID NAME                   OPEN_MODE
---------- --------------------  ----------
      2    PDB$SEED                 READ ONLY
      3    PDBOBI           	    MOUNTED


Recall from Part 1 of the blog series that we created a PDB (PDBOBI) when we specified the pluggable database feature at install time, and that PDB$SEED got created as part of that install process.


Now let's connect to the two different PDBs and see what they've got! You really shouldn't ever connect to PDB$SEED, since it's just used as a template, but we're just curious :-)

CDB$ROOT@YODA> alter session set container=PDB$SEED;
Session altered.


CDB$ROOT@YODA> select name from v$datafile;


NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297


As you can see, PDB$SEED houses the template tablespaces: SYSTEM, SYSAUX, and UNDO.


If we connect back to the root-CDB, we see that it houses essentially the traditional database tablespaces (like in pre-12c days).  

CDB$ROOT@YODA> alter session set container=cdb$root;
Session altered.


CDB$ROOT@YODA> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DATAFILE/system.258.823892109
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DATAFILE/users.259.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813



BTW, the datafiles listed in V$DATAFILE differ from CDB_DATA_FILES. CDB_DATA_FILES only shows datafiles of open PDBs, so just be careful if you're looking for the complete datafile list.

Let's connect to our user PDB (pdbobi) and see what we can see :-)

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.


CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;


    CON_ID NAME                  OPEN_MODE
---------- -----------------   -----------
      3    PDBOBI                 MOUNTED


Place PDBOBI in read-write mode. Note that when you create a PDB, it is initially in MOUNTED mode with a status of NEW.
You can view the open mode of a PDB by querying the OPEN_MODE column of the V$PDBS view, or view its status by querying the STATUS column of the CDB_PDBS or DBA_PDBS view.


CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or CDB$ROOT@YODA> alter pluggable database all open;



And let's create a new tablespace in this PDB


CDB$ROOT@YODA> create tablespace obiwan datafile size 500M;

Tablespace created.


CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813


PDBOBI only has scope for its own PDB files.  We will illustrate this further down below.



Let's create a new clone from an existing PDB, but with a new path

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');
create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA')
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


CDB$ROOT@YODA> show con_name                     


CON_NAME
------------------------------
PDBOBI


Hmm… remember, we were still connected to PDBOBI. You can only create PDBs from the root (and not even from PDB$SEED). So connect to CDB$ROOT:


CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');


Pluggable database created.


CDB$ROOT@YODA> select pdb_name, status from cdb_pdbs;

PDB_NAME   STATUS
---------- -------------
PDBOBI	   NORMAL
PDB$SEED   NORMAL
PDBVADER   NORMAL

And

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

    CON_ID	 DBID     NAME                     TOTAL_SIZE
---------- ---------- -------------          -------------
	 2 4066465523 PDB$SEED                      283115520
	 3  483260478 PDBOBI                        917504000
	 4  994649056 PDBVADER                              0


Hmm… the TOTAL_SIZE column shows 0 bytes. Recall that all new PDBs are created and placed in MOUNTED state.

CDB$ROOT@YODA> alter session set container=pdbvader;

Session altered.

CDB$ROOT@YODA> alter pluggable database open;

Pluggable database altered.



CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769

Voila… the size is now reflected!

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

    CON_ID	 DBID     NAME	             		     TOTAL_SIZE
---------- ---------- ------------------------------ ----------
	    4   994649056 PDBVADER			 		     393216000


Again, the scope of PDBVADER is limited to its own container's files; it can't see PDBOBI's files at all. If we connect back to cdb$root and look at v$datafile, we see that cdb$root has scope over all the datafiles in the CDB database:

Incidentally, that long identifier, "E46B24386A131109E043EDFE10AC6E89", in the OMF name is the GUID, or global identifier, for that PDB. This is not the same as the container unique identifier (CON_UID). The con_uid is a local
identifier, whereas the GUID is universal. Keep in mind that we can unplug a PDB from one CDB and plug it into another, so the GUID provides this uniqueness and streamlines portability.
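
You can see both identifiers side by side from the root with a quick query (a sketch):

sqlplus / as sysdba <<'EOF'
select pdb_name, con_uid, guid from cdb_pdbs;
EOF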

CDB$ROOT@YODA> select name, con_id from v$datafile order by con_id


NAME                                                                                    CON_ID
----------------------------------------------------------------------------------- ----------
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155	                                             1
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063                                                  1
+PDBDATA/YODA/DATAFILE/system.258.823892109                                                  1
+PDBDATA/YODA/DATAFILE/users.259.823892155                                                   1
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297                 2
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297                 2
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813                3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813                  3
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339                 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813                 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813                 3
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769                 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769                  4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769                4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769                 4


Now that we are done testing with the PDBVADER PDB, we can shut it down and drop it:

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;
drop pluggable database pdbvader including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDBVADER is not closed on all instances.


CDB$ROOT@YODA> alter pluggable database pdbvader close;

Pluggable database altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;

Pluggable database dropped.


Just for completeness, I'll illustrate a couple of different ways to create a PDB.

The beauty of PDB is not just mobility (plug and unplug), which we'll show later, but that we can create/clone a new PDB from a "gold-image PDB". That's real agility and a Database as a Service (DBaaS) play.


So let's create a new PDB in a couple of different ways.

Method #1: Create a PDB from SEED
CDB$ROOT@YODA> alter session set container=cdb$root;


Session altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbhansolo admin user hansolo identified by hansolo roles=(dba);

Pluggable database created.


CDB$ROOT@YODA> alter pluggable database pdbhansolo open;

Pluggable database altered.


CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/system.280.824693889
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/sysaux.279.824693893


Notice that it just contains the basic files to enable a PDB. The CDB will copy the SYSTEM and SYSAUX tablespaces from PDB$SEED and instantiate them in the new PDB.




Method #2: Clone from an existing PDB (PDBOBI in our case)

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi close;

Pluggable database altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open read only;

Pluggable database altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbleia from pdbobi;

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database  pdbleia open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/system.281.824694649
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/sysaux.282.824694651
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/users.285.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/example.286.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/obiwan.287.824694669

Notice that the OBIWAN tablespace we created in PDBOBI came over as part of the clone process!


You can also create a PDB as a copy-on-write (COW) snapshot of another PDB. I'll post that test in the next blog entry, but essentially you'll need a NAS appliance, or any technology that provides COW snapshots.
I plan on using ACFS as the storage container and an ACFS read-write snapshot for the snapshot PDB.
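
For reference, the clause looks like the following; a sketch with a hypothetical clone name, assuming the underlying storage supports snapshots (e.g., ACFS) and the source PDB is opened read-only:

sqlplus / as sysdba <<'EOF'
alter pluggable database pdbobi close;
alter pluggable database pdbobi open read only;
-- COW snapshot clone instead of a full file copy
create pluggable database pdbsnap from pdbobi snapshot copy;
alter pluggable database pdbsnap open;
EOF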