
There are cases where we need to ensure that large-packet "addressability" exists, e.g., to verify the configuration for nonstandard packet sizes such as an MTU of 9000. For example, when deploying a NAS or backup server across the network.

Setting the MTU can be done by editing the configuration script for the relevant interface in /etc/sysconfig/network-scripts/. In our example, we will use the eth1 interface, thus the file to edit would be ifcfg-eth1.

Add a line to specify the MTU, for example:
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.20.2
NETMASK=255.255.255.0
MTU=9000

Assuming the MTU is set on the system, just do an ifdown eth1 followed by ifup eth1.
An ifconfig eth1 will tell you if it is set correctly:

eth1      Link encap:Ethernet  HWaddr 00:0F:EA:94:xx:xx
          inet addr:192.168.20.2  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20f:eaff:fe91:407/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:141567 errors:0 dropped:0 overruns:0 frame:0
          TX packets:141306 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:101087512 (96.4 MiB)  TX bytes:32695783 (31.1 MiB)
          Interrupt:18 Base address:0xc000
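
As a side note, on systems with the iproute2 tools, the MTU can also be changed at runtime for a quick test. A minimal sketch (unlike the ifcfg edit above, this change does not persist across reboots):

# Set the MTU at runtime only; does not survive a reboot
ip link set dev eth1 mtu 9000
# Confirm the new value
ip link show eth1 | grep mtu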

To validate end-to-end handling of MTU 9000 packets:

Execute the following on Linux systems:

ping -M do -s 8972 [destinationIP]
For example: ping -M do -s 8972 datadomain.viscosityna.com

The reason for 8972 on Linux/Unix systems: the ping payload size excludes the 28 bytes of packet headers, i.e., the 20-byte IP header plus the 8-byte ICMP header. Therefore, take 9000 and subtract 28 = 8972.

[root@racnode01]# ping -s 8972 -M do datadomain.viscosityna.com
PING datadomain.viscosityna.com. (192.168.20.32) 8972(9000) bytes of data.
8980 bytes from racnode1.viscosityna.com. (192.168.20.2): icmp_seq=0 ttl=64 time=0.914 ms

To illustrate what happens when proper MTU addressability is not in place, I can set a larger packet size in the ping (8993). Since the packet would need to be fragmented, you will see "Frag needed and DF set". In this example, the ping command uses "-s" to set the packet size, and "-M do" to set Do Not Fragment.

[root@racnode01]# ping -s 8993 -M do datadomain.viscosityna.com
PING datadomain.viscosityna.com. (192.168.20.32) 8993(9001) bytes of data.
From racnode1.viscosityna.com. (192.168.20.2) icmp_seq=0 Frag needed and DF set (mtu = 9000)

5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.859/0.955/1.167/0.109 ms, pipe 2

By adjusting the packet size, you can figure out what the MTU for the link is. This represents the lowest MTU allowed by any device in the path, e.g., the source node, the target node, the switch, or anything else in between.
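
If you would rather automate that probing, here is a minimal sketch that binary-searches for the largest payload that passes with Do Not Fragment set, then adds back the 28 bytes of headers. The destination is just the host from the examples above, and the search bounds are assumptions to adjust for your network:

#!/bin/bash
# Binary-search the largest ICMP payload that crosses the path unfragmented,
# then add the 28-byte IP+ICMP overhead to estimate the path MTU.
DEST=${1:-datadomain.viscosityna.com}
lo=1200; hi=9000; best=0
while [ "$lo" -le "$hi" ]; do
  mid=$(( (lo + hi) / 2 ))
  if ping -c 1 -W 1 -M do -s "$mid" "$DEST" >/dev/null 2>&1; then
    best=$mid; lo=$(( mid + 1 ))   # payload fits; try larger
  else
    hi=$(( mid - 1 ))              # would fragment; try smaller
  fi
done
echo "Largest unfragmented payload: $best bytes; path MTU is roughly $(( best + 28 )) bytes"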

Finally, another way to verify the correct usage of the MTU size is the command 'netstat -a -i -n' (the MTU column should show 9000 when you are performing tests on Jumbo Frames).


We are about to apply 12.1.0.2 PSU1 (19392646) on a Standalone Cluster. As part of the opatchauto run, the patch is applied to the RDBMS home as well.
Here’s the output in case anyone wants to compare:

[root@ol64afd OPatch]# ./opatchauto apply -analyze /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646 -ocmrf ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version : 12.1.0.2.0
Running from : /u01/app/oracle/product/12.1.0/grid

opatchauto log file: /u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-03-29_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/oracle/product/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/database

Configuration Validation: Successful

Patch Location: /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646
Grid Infrastructure Patch(es): 19303936 19392590 19392604
RAC Patch(es): 19303936 19392604

Patch Validation: Successful
Command "/u01/app/oracle/product/12.1.0/database/OPatch/opatch version -oh /u01/app/oracle/product/12.1.0/database -invPtrLoc /u01/app/oracle/product/12.1.0/grid/oraInst.loc -v2c 12.1.0.1.5" execution failed:
bash: /u01/app/oracle/product/12.1.0/database/OPatch/opatch: No such file or directory

For more details, please refer to the log file "/u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-03-29_analyze.debug.log".

Apply Summary:

Following patch(es) failed to be analyzed:
GI Home: /u01/app/oracle/product/12.1.0/grid: 19303936, 19392590, 19392604
RAC Home: /u01/app/oracle/product/12.1.0/database: 19303936, 19392604

opatchauto failed with error code 2.
[root@ol64afd OPatch]#

The fix is to copy the 12.1.0.2.1 OPatch into both the Grid home and the DB home. A hedged sketch of that step follows; the staging location and zip name are assumptions for illustration:
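# The OPatch update ships as patch 6880880; adjust the staging path to your environment
unzip -o /mnt/hgfs/12cGridSoftware/p6880880_121010_Linux-x86-64.zip -d /u01/app/oracle/product/12.1.0/grid
unzip -o /mnt/hgfs/12cGridSoftware/p6880880_121010_Linux-x86-64.zip -d /u01/app/oracle/product/12.1.0/database

With the newer OPatch in place, re-run the analyze: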

[root@ol64afd OPatch]# ./opatchauto apply -analyze /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646 -ocmrf ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version : 12.1.0.2.0
Running from : /u01/app/oracle/product/12.1.0/grid

opatchauto log file: /u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-05-43_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/oracle/product/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/database

Configuration Validation: Successful

Patch Location: /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646
Grid Infrastructure Patch(es): 19303936 19392590 19392604
RAC Patch(es): 19303936 19392604

Patch Validation: Successful

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0/database" ...
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936" successfully analyzed on "/u01/app/oracle/product/12.1.0/database" for apply.
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604" successfully analyzed on "/u01/app/oracle/product/12.1.0/database" for apply.

Analyzing patch(es) on "/u01/app/oracle/product/12.1.0/grid" ...
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936" successfully analyzed on "/u01/app/oracle/product/12.1.0/grid" for apply.
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392590" successfully analyzed on "/u01/app/oracle/product/12.1.0/grid" for apply.
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604" successfully analyzed on "/u01/app/oracle/product/12.1.0/grid" for apply.

Apply Summary:
Following patch(es) are successfully analyzed:
GI Home: /u01/app/oracle/product/12.1.0/grid: 19303936, 19392590, 19392604
RAC Home: /u01/app/oracle/product/12.1.0/database: 19303936, 19392604

opatchauto succeeded.
[root@ol64afd OPatch]# ./opatchauto apply /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646 -ocmrf ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.5
OUI version : 12.1.0.2.0
Running from : /u01/app/oracle/product/12.1.0/grid

opatchauto log file: /u01/app/oracle/product/12.1.0/grid/cfgtoollogs/opatchauto/19392646/opatch_gi_2014-10-30_16-07-28_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/oracle/product/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/database

Configuration Validation: Successful

Patch Location: /mnt/hgfs/12cGridSoftware/12102-PSU1/19392646
Grid Infrastructure Patch(es): 19303936 19392590 19392604
RAC Patch(es): 19303936 19392604

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0/database) ... Successful
Following database(s) and/or service(s) were stopped and will be restarted later during the session: yoda

Applying patch(es) to "/u01/app/oracle/product/12.1.0/database" ...
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936" successfully applied to "/u01/app/oracle/product/12.1.0/database".
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604" successfully applied to "/u01/app/oracle/product/12.1.0/database".

Stopping CRS ... Successful

Applying patch(es) to "/u01/app/oracle/product/12.1.0/grid" ...
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19303936" successfully applied to "/u01/app/oracle/product/12.1.0/grid".
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392590" successfully applied to "/u01/app/oracle/product/12.1.0/grid".
Patch "/mnt/hgfs/12cGridSoftware/12102-PSU1/19392646/19392604" successfully applied to "/u01/app/oracle/product/12.1.0/grid".

Starting CRS ... Successful

Starting RAC (/u01/app/oracle/product/12.1.0/database) ... Successful

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/oracle/product/12.1.0/grid: 19303936, 19392590, 19392604
RAC Home: /u01/app/oracle/product/12.1.0/database: 19303936, 19392604


With busy weeks of IOUG and other conferences coming up, we have little time to blog… So, in the coming weeks, I'm just going to do some "baby" blogs; i.e., some quick tips and new features.

Here's a 12c new feature that simplifies snapshotting databases:

Snapshot Optimized Recovery

Many of you take snapshot copies of databases, either via server-side snapshot tools or storage-level snapshots. Usually this requires a cold database or putting the database in hot-backup mode, and there are downsides to both options.

In Oracle 12c, third-party snapshot technologies that meet the following requirements can be used without requiring the database to be placed in backup mode:

Database is crash consistent at the point of the snapshot.
Write ordering is preserved for each file within a snapshot.
Snapshot stores the time at which a snapshot is completed.

The new RECOVER … SNAPSHOT TIME command is introduced to recover a snapshot to a consistent point, without any additional manual procedures for point-in-time recovery needs. It performs the recovery in a single step, and recovery can be either to the current time or to a point in time after the snapshot was taken.
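
A hedged sketch of what that looks like (this is my paraphrase of the 12c RMAN RECOVER clause, and the timestamps are placeholders, so check the documentation for your exact release):

RMAN> RECOVER DATABASE SNAPSHOT TIME '2014-10-30 16:00:00';

RMAN> RECOVER DATABASE UNTIL TIME '2014-10-30 18:00:00' SNAPSHOT TIME '2014-10-30 16:00:00';

The first form recovers the snapshot image to the current time using the available redo; the second recovers it to a chosen point in time after the snapshot was taken.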

There is a bit of upfront overhead, though; e.g., additional redo logging and a complete database checkpoint.


My new favorite 12c Oracle Clusterware command is 'crsctl stat res "resource_name" -dependency'.

What this command does is provide a dependency tree for the resource in question. It displays startup (default) and shutdown dependencies.

From this we can understand the pull-up, pushdown, weak, and hard dependencies between clusterware resources.


[oracle@rac02 ~]$ crsctl stat res ora.dagobah.db -dependency
================================================================================
Resource Start Dependencies
================================================================================
---------------------------------ora.dagobah.db---------------------------------
ora.dagobah.db(ora.database.type)->
| type:ora.listener.type[weak:type]
| | type:ora.cluster_vip_net1.type[hard:type,pullup:type]
| | | ora.net1.network(ora.network.type)[hard,pullup]
| | | ora.gns<Resource not found>[weak:global]
| type:ora.scan_listener.type[weak:type:global]
| | ora.scan1.vip(ora.scan_vip.type)[hard,pullup]
| | | ora.net1.network(ora.network.type)[hard,pullup:global]
| | | ora.gns<Resource not found>[weak:global]
| | | type:ora.scan_vip.type[dispersion:type:active]
| | type:ora.scan_listener.type[dispersion:type:active]
| ora.ons(ora.ons.type)[weak:uniform]
| | ora.net1.network(ora.network.type)[hard,pullup]
| ora.gns<Resource not found>[weak:global]
| ora.PDBDATA.dg(ora.diskgroup.type)[weak:global:uniform]
| | ora.asm(ora.asm.type)[hard,pullup:always]
| | | ora.LISTENER.lsnr(ora.listener.type)[weak]
| | | | type:ora.cluster_vip_net1.type[hard:type,pullup:type]
| | | | | ora.net1.network(ora.network.type)[hard,pullup]
| | | | | ora.gns<Resource not found>[weak:global]
| | | ora.ASMNET1LSNR_ASM.lsnr(ora.asm_listener.type)[hard,pullup]
| | | | ora.gns<Resource not found>[weak:global]
| ora.FRA.dg(ora.diskgroup.type)[hard:global:uniform,pullup:global]
| | ora.asm(ora.asm.type)[hard,pullup:always]
| | | ora.LISTENER.lsnr(ora.listener.type)[weak]
| | | | type:ora.cluster_vip_net1.type[hard:type,pullup:type]
| | | | | ora.net1.network(ora.network.type)[hard,pullup]
| | | | | ora.gns<Resource not found>[weak:global]
| | | ora.ASMNET1LSNR_ASM.lsnr(ora.asm_listener.type)[hard,pullup]
| | | | ora.gns<Resource not found>[weak:global]
--------------------------------------------------------------------------------

Now the same for shutdown (pushdown) dependencies

[oracle@rac02 ~]$ crsctl stat res ora.dagobah.db -dependency -stop
================================================================================
Resource Stop Dependencies
================================================================================
---------------------------------ora.dagobah.db---------------------------------
ora.dagobah.db(ora.database.type)->
| ora.dagobah.hoth.svc(ora.service.type)[hard:intermediate]
| ora.dagobah.r2d2.svc(ora.service.type)[hard:intermediate]
--------------------------------------------------------------------------------

Why is this command and output important? Well, in cases where a particular resource doesn't come up, you may want to understand its relationships with its dependencies and dependents. It matters even more if you are creating your own resource dependencies using the CRS API (formally known as the CLSCRS API).

CLSCRS is a set of C-based APIs for Oracle Clusterware. The CLSCRS APIs enable you to manage the operation of entities that are managed by Oracle Clusterware. These entities include resources, resource types, servers, and server pools. You can use the APIs to register user applications with Oracle Clusterware so that the clusterware can manage them and maintain high availability. Once an application is registered, you can manage, monitor and query the application's status. The APIs allow you to use the callbacks for diagnostic logging.
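
As a quick illustration of where such dependency specifications come from, here is a hedged sketch of registering a custom resource with its own start and stop dependencies via crsctl; the resource name and action script path are made up for the example:

# Hypothetical custom resource that hard-depends on the dagobah database
crsctl add resource myapp.svc -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/app/oracle/admin/scripts/myapp.sh,START_DEPENDENCIES='hard(ora.dagobah.db) pullup(ora.dagobah.db)',STOP_DEPENDENCIES='hard(ora.dagobah.db)'"

# Then inspect the dependency tree clusterware builds for it
crsctl stat res myapp.svc -dependency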

In this blog I want to illustrate the benefits of deploying PDBs with RAC Services. Although the key ingredient is the Service, RAC provides the final mile for scalability and availability. In my mind, I would not implement PDBs without RAC.

Anyways here we go…

The goal is to illustrate that Database [RAC] Services integration with PDBs provides seamless management and availability.

Initially, we have only the PDB$SEED. 

SQL> select * from v$pdbs;


    CON_ID       DBID    CON_UID GUID                             NAME       OPEN_MODE  RES OPEN_TIME                  CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ---------- ---------- --- -------------------------- ---------- ----------
         2 4080865680 4080865680 F13EFFD958E24857E0430B2910ACF6FD PDB$SEED   READ ONLY  NO  17-FEB-14 01.01.13.909 PM     1720768  283115520

Let's create a PDB from the SEED (I have shown this from an earlier Blog post)

SQL> CREATE PLUGGABLE DATABASE pdbhansolo admin user hansolo identified by hansolo roles=(dba);

Pluggable database created.

Now we have the new PDB listed.

SQL> select * from v$pdbs;

    CON_ID       DBID    CON_UID GUID                             NAME       OPEN_MODE  RES OPEN_TIME                  CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ---------- ---------- --- -------------------------- ---------- ----------
         2 4080865680 4080865680 F13EFFD958E24857E0430B2910ACF6FD PDB$SEED   READ ONLY  NO  17-FEB-14 01.01.13.909 PM     1720768  283115520
         3 3403102439 3403102439 F2A023F791663F8DE0430B2910AC37F7 PDBHANSOLO MOUNTED        17-FEB-14 01.27.08.942 PM     1846849          0

But notice that it's in MOUNTED status. Even if I restart the whole CDB, the new PDB will not come up in OPEN READ WRITE mode. If we want the PDB available on startup, here's how we go about resolving it.

When we create or plug in a new PDB, a default Service gets created. As with previous versions, it is highly recommended not to connect through that default Service; Oracle took this one step further and effectively forces users to create a user-generated Service. So let's associate a user Service with that PDB. Notice that there's a "-pdb" flag in the add service command:

$ srvctl add service -d dagobah -s hoth -pdb pdbhansolo
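
The service then needs to be started before it will register with the listener; a sketch, assuming the defaults:

$ srvctl start service -d dagobah -s hoth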


[oracle@rac02 ~]$ srvctl config service -d dagobah -verbose
Service name: Hoth
Service is enabled
Server pool: Dagobah
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
TAF failover retries:
TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
----> Pluggable database name: pdbhansolo
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:
Preferred instances: Dagobah_1
Available instances: 

And the Service is registered with the listener:

[oracle@rac02 ~]$ lsnrctl stat

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 17-FEB-2014 13:34:41

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                17-FEB-2014 12:59:46
Uptime                    0 days 0 hr. 34 min. 54 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/rac02/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.41.11)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.41.21)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=rac02.viscosityna.com)(PORT=5500))(Security=(my_wallet_directory=/u02/app/oracle/product/12.1.0/db/admin/Dagobah/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM3", status READY, has 2 handler(s) for this service...
Service "Dagobah" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
Service "DagobahXDB" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
---->Service "Hoth" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
Service "pdbhansolo" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
Service "r2d2" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
The command completed successfully

Now let's test this. I close the PDB and also stop the CDB (probably not necessary, but what the heck :-)).

SQL> alter session set container=cdb$root;

Session altered.

SQL> alter pluggable database pdbhansolo close;

[oracle@rac02 ~]$ srvctl stop database -d dagobah

[oracle@rac02 ~]$ lsnrctl stat

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 18-FEB-2014 15:36:49

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                18-FEB-2014 12:57:30
Uptime                    0 days 2 hr. 39 min. 19 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/rac02/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.41.11)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.41.21)(PORT=1521)))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX3", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM3", status READY, has 2 handler(s) for this service...
The command completed successfully


[oracle@rac02 ~]$ srvctl start database -d dagobah

[oracle@rac02 ~]$ lsnrctl stat

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 18-FEB-2014 15:37:39

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                18-FEB-2014 12:57:30
Uptime                    0 days 2 hr. 40 min. 9 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/rac02/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.41.11)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.41.21)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=rac02.viscosityna.com)(PORT=5500))(Security=(my_wallet_directory=/u02/app/oracle/product/12.1.0/db/admin/Dagobah/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+APX" has 1 instance(s).
  Instance "+APX3", status READY, has 1 handler(s) for this service...
Service "+ASM" has 1 instance(s).
  Instance "+ASM3", status READY, has 2 handler(s) for this service...
Service "Dagobah" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
Service "DagobahXDB" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
---->Service "Hoth" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
Service "pdbhansolo" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
Service "r2d2" has 1 instance(s).
  Instance "Dagobah_1", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac02 ~]$

SQL> select NAME,OPEN_MODE from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
--> PDBHANSOLO                     READ WRITE

Now I can connect to this PDB using my lovely EZConnect string

sqlplus hansolo/hansolo@rac02/hoth
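
For reference, that is the short EZConnect form; the fully spelled-out form with host, port, and service name (port taken from the listener output above) would be:

sqlplus hansolo/hansolo@//rac02:1521/hoth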

So let's look at the relationship between the service (Hoth) and the Pluggable Database (pdbhansolo):

$ crsctl stat res ora.dagobah.hoth.svc -p

NAME=ora.dagobah.hoth.svc
TYPE=ora.service.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
….
….
DELETE_TIMEOUT=60
DESCRIPTION=Oracle Service resource
GEN_SERVICE_NAME=Hoth
GLOBAL=false
GSM_FLAGS=0
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MANAGEMENT_POLICY=AUTOMATIC
MAX_LAG_TIME=ANY
MODIFY_TIMEOUT=60
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PLUGGABLE_DATABASE=pdbhansolo
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1

The key here is that the non-default RAC Service for PDBHANSOLO becomes an important part of PDB auto-startup: without the non-default service, the PDB does not open READ WRITE automatically. So where does RAC fit in? Well, if I have, say, a 6-node RAC cluster, I can leave some PDB services not started on certain nodes. This effectively prevents access to those PDBs from those nodes, giving me a pre-defined workload distribution. That's a topic for my next blog and Oracle Users Group presentation 🙂