Consolidate where possible … isolate where necessary

In the last blog I mentioned the benefits of schema consolidation and how it dovetails directly into a 12c Oracle Database PDB implementation.
In this part 2 of the PDB blog, we will get a little more detailed and do a basic walk-through, from "cradle to grave," of a PDB.  We'll use SQL*Plus as the tool of choice; next time I'll show the same with DBCA.


First, verify that we are truly on a 12c Oracle Database:

SQL> select instance_name, version, status, con_id from v$instance;

INSTANCE_NAME	 VERSION	        STATUS	    CON_ID
---------------- ----------------- ------------ ----------
yoda		      12.1.0.1.0	   OPEN 		 0



The V$DATABASE view tells us whether we are dealing with a CDB-based database:
 
CDB$ROOT@YODA> select cdb, con_id from v$database;

CDB	CON_ID
--- ----------
YES	     0


or a more elegant way:

CDB$ROOT@YODA> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?" , OPEN_MODE, CON_ID from V$DATABASE;

NAME	  Multitenant Option ?	     OPEN_MODE	              CON_ID
--------- -------------------------- -------------------- ----------
YODA	  Multitenant Option enabled READ ONLY	                  0


There are a lot of new views and tables to support PDB/CDB.  We'll focus on the V$PDBS and CDB_PDBS views.
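If you're curious what else is out there, here's a quick sketch (run from the root as a privileged user) that lists the CDB-level dictionary views; it's just a convenience query, not an official catalog:

-- list the CDB_* dictionary views visible to this user
select table_name from dict where table_name like 'CDB\_%' escape '\' order by 1;

Back to the two views we care about: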

CDB$ROOT@YODA> desc v$pdbs
 Name                            
 --------
 CON_ID                             
 DBID                                   
 CON_UID                              
 GUID                                   
 NAME                                   
 OPEN_MODE                             
 RESTRICTED                              
 OPEN_TIME                              
 CREATE_SCN                             
 TOTAL_SIZE     

CDB$ROOT@YODA> desc cdb_pdbs
 Name					  
 --------
 PDB_ID 				    
 PDB_NAME				    
 DBID					    
 CON_UID				 
 GUID						  
 STATUS 					  
 CREATION_SCN		
 CON_ID 				
                        

The SQL*Plus commands show con_name and show con_id display the container name and the con_id we are connected to:

CDB$ROOT@YODA> show con_name


CON_NAME
------------------------------
CDB$ROOT



CDB$ROOT@YODA> show con_id

CON_ID
------------------------------
1


Let's see which PDBs are created in this CDB and their current state:

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;


    CON_ID      DBID NAME                    	     TOTAL_SIZE
---------- ---------- ------------------------------ ----------
      2   4066465523 PDB$SEED                          283115520
      3    483260478 PDBOBI                                    0


CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;


    CON_ID NAME                   OPEN_MODE
---------- --------------------  ----------
      2    PDB$SEED                 READ ONLY
      3    PDBOBI           	    MOUNTED


Recall from part 1 of the blog series that we created a PDB (PDBOBI) when we specified the Pluggable Database feature on install, and that PDB$SEED got created as part of that install process.


Now let's connect to the two different PDBs and see what they've got!!  You really shouldn't ever connect to PDB$SEED, since it's just used as a template, but we're curious :-)

CDB$ROOT@YODA> alter session set container=PDB$SEED;
Session altered.


CDB$ROOT@YODA> select name from v$datafile;


NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297


As you can see, PDB$SEED houses the template tablespaces: SYSTEM, SYSAUX, and UNDO.


If we connect back to the root CDB, we see that it houses essentially the traditional database tablespaces (like in pre-12c days).

CDB$ROOT@YODA> alter session set container=cdb$root;
Session altered.


CDB$ROOT@YODA> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DATAFILE/system.258.823892109
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297
+PDBDATA/YODA/DATAFILE/users.259.823892155
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813



BTW, the datafiles listed in V$DATAFILE differ from those in CDB_DATA_FILES.  CDB_DATA_FILES only shows datafiles from open PDBs, so be careful if you're looking for the complete datafile list.
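If you want to see the difference for yourself, a rough sketch of the comparison (run from cdb$root) looks like this; the two counts should only line up when every PDB is open:

-- datafiles known to the controlfile, broken down by container
select con_id, count(*) from v$datafile group by con_id order by con_id;

-- datafiles visible through the dictionary-based view (open PDBs only)
select con_id, count(*) from cdb_data_files group by con_id order by con_id;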

Let's connect to our user PDB (pdbobi) and see what we can see :-)

CDB$ROOT@YODA> alter session set container=pdbobi;
Session altered.


CDB$ROOT@YODA> select con_id, name, open_mode from v$pdbs;


    CON_ID NAME                  OPEN_MODE
---------- -----------------   -----------
      3    PDBOBI                 MOUNTED


Place PDBOBI in read-write mode.  Note that when you create a PDB, it is initially in mounted mode with a status of NEW.
You can view the open mode of a PDB by querying the OPEN_MODE column of the V$PDBS view, and view the status of a PDB by querying the STATUS column of the CDB_PDBS or DBA_PDBS view.
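For example, a quick sanity check of both (a sketch run from the root) would be:

-- status from CDB_PDBS alongside the open mode from V$PDBS
select c.pdb_name, c.status, v.open_mode
  from cdb_pdbs c join v$pdbs v on v.con_id = c.con_id;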


CDB$ROOT@YODA> alter pluggable database pdbobi open;

Pluggable database altered.

or CDB$ROOT@YODA> alter pluggable database all open;



And let's create a new tablespace in this PDB


CDB$ROOT@YODA> create tablespace obiwan datafile size 500M;

Tablespace created.


CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813


PDBOBI only has scope for its own PDB files.  We will illustrate this further below.



Let's create a new clone from an existing PDB, but with a new path

CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');
create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA')
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


CDB$ROOT@YODA> show con_name                     


CON_NAME
------------------------------
PDBOBI


Hmm… remember, we were still connected to PDBOBI.  You can only create PDBs from the root (and not even from PDB$SEED).  So connect to CDB$ROOT:


CDB$ROOT@YODA> create pluggable database PDBvader from PDBOBI FILE_NAME_CONVERT=('+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE','+PDBDATA');


Pluggable database created.


CDB$ROOT@YODA> select pdb_name, status from cdb_pdbs;

PDB_NAME   STATUS
---------- -------------
PDBOBI	   NORMAL
PDB$SEED   NORMAL
PDBVADER   NORMAL

And

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

    CON_ID	 DBID     NAME                     TOTAL_SIZE
---------- ---------- -------------          -------------
	 2 4066465523 PDB$SEED                      283115520
	 3  483260478 PDBOBI                        917504000
	 4  994649056 PDBVADER                              0


Hmm… the TOTAL_SIZE column shows 0 bytes.  Recall that all new PDBs are created and placed in the MOUNTED state.

CDB$ROOT@YODA> alter session set container=pdbvader;

Session altered.

CDB$ROOT@YODA> alter pluggable database open;

Pluggable database altered.



CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769

Voila… the size is now reflected!!

CDB$ROOT@YODA> select CON_ID,DBID,NAME,TOTAL_SIZE from v$pdbs;

    CON_ID	 DBID     NAME	             		     TOTAL_SIZE
---------- ---------- ------------------------------ ----------
	    4   994649056 PDBVADER			 		     393216000


Again, the scope of PDBVADER is its own container files; it can't see PDBOBI's files at all.  If we connect back to cdb$root and look at V$DATAFILE, we see that cdb$root has scope for all the datafiles in the CDB database.

Incidentally, that long identifier, "E46B24386A131109E043EDFE10AC6E89", in the OMF name is the GUID, or global identifier, for that PDB.  This is not the same as the container unique identifier (CON_UID).  The CON_UID is a local identifier, whereas the GUID is universal.  Keep in mind that we can unplug a PDB from one CDB and plug it into another CDB, so the GUID provides this uniqueness and streamlines portability.
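You can see both identifiers side by side straight from V$PDBS, for example:

-- compare the local CON_UID with the universal GUID for each PDB
select name, con_id, con_uid, guid from v$pdbs;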

CDB$ROOT@YODA> select name, con_id from v$datafile order by con_id


NAME                                                                                    CON_ID
----------------------------------------------------------------------------------- ----------
+PDBDATA/YODA/DATAFILE/undotbs1.260.823892155	                                             1
+PDBDATA/YODA/DATAFILE/sysaux.257.823892063                                                  1
+PDBDATA/YODA/DATAFILE/system.258.823892109                                                  1
+PDBDATA/YODA/DATAFILE/users.259.823892155                                                   1
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297                 2
+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297                 2
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813                3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813                  3
+PDBDATA/YODA/E456D87DF75E6553E043EDFE10AC71EA/DATAFILE/obiwan.284.824683339                 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813                 3
+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813                 3
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/sysaux.279.823980769                 4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/users.281.823980769                  4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/example.282.823980769                4
+PDBDATA/YODA/E46B24386A131109E043EDFE10AC6E89/DATAFILE/system.280.823980769                 4


Now that we are done testing with the PDBVADER PDB, we can shut down and drop this PDB:

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;
drop pluggable database pdbvader including datafiles
*
ERROR at line 1:
ORA-65025: Pluggable database PDBVADER is not closed on all instances.


CDB$ROOT@YODA> alter pluggable database pdbvader close;

Pluggable database altered.

CDB$ROOT@YODA> drop pluggable database pdbvader including datafiles;

Pluggable database dropped.


Just for completeness, I'll illustrate a couple of different ways to create a PDB.

The beauty of PDB is not just mobility (plug and unplug), which we'll show later, but that we can create/clone a new PDB from a "gold-image PDB".  That's real agility and a Database as a Service (DBaaS) play.


So let's create a new PDB in a couple of different ways.

Method #1: Create a PDB from SEED
CDB$ROOT@YODA> alter session set container=cdb$root;


Session altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbhansolo admin user hansolo identified by hansolo roles=(dba);

Pluggable database created.


CDB$ROOT@YODA> alter pluggable database pdbhansolo open;

Pluggable database altered.


CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/system.280.824693889
+PDBDATA/YODA/E51109E2AF22127AE043EDFE10AC1DD9/DATAFILE/sysaux.279.824693893


Notice that it just contains the basic files to enable a PDB.  The CDB copies the SYSTEM and SYSAUX tablespaces from PDB$SEED and instantiates them in the new PDB.




Method #2: Clone from an existing PDB (PDBOBI in our case)

CDB$ROOT@YODA> alter session set container=cdb$root;

Session altered.

CDB$ROOT@YODA> alter pluggable database pdbobi close;

Pluggable database altered.

CDB$ROOT@YODA> alter pluggable database pdbobi open read only;

Pluggable database altered.

CDB$ROOT@YODA> CREATE PLUGGABLE DATABASE pdbleia from pdbobi;

Pluggable database created.

CDB$ROOT@YODA> alter pluggable database  pdbleia open;

Pluggable database altered.

CDB$ROOT@YODA> select file_name from cdb_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/system.281.824694649
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/sysaux.282.824694651
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/users.285.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/example.286.824694661
+PDBDATA/YODA/E51109E2AF23127AE043EDFE10AC1DD9/DATAFILE/obiwan.287.824694669

Notice that the OBIWAN tablespace we created in PDBOBI came over as part of this clone process!!
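If you'd rather confirm it from the dictionary than from the OMF file names, a quick sketch while connected to the clone would be:

alter session set container=pdbleia;
-- OBIWAN should show up alongside the tablespaces inherited from PDBOBI
select tablespace_name from dba_tablespaces;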


You can also create a PDB as a copy-on-write (COW) snapshot of another PDB.  I'll post that test in the next blog entry, but essentially you'll need a NAS appliance, or any technology that provides COW snapshots.
I plan on using ACFS as the storage container and an ACFS read-write snapshot for the snapshot PDB.
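For reference, the statement is just the clone syntax with the SNAPSHOT COPY clause tacked on.  The sketch below uses a made-up name (pdbsnap) and, as noted above, only works when the underlying storage (or CLONEDB) supports copy-on-write:

-- source PDB must be open read-only; storage must support snapshot/copy-on-write
create pluggable database pdbsnap from pdbobi snapshot copy;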




Book Title:
Successfully Virtualize Business Critical Oracle Databases

VMware iBook Cover

Here’s the book description:
Written by VMware vExperts (Charles Kim (VCP), Viscosity North America, and George Trujillo (VCI), HortonWorks) and leading experts within VMware VCI and Learning Specialist (Steve Jones) and Chief Database Architect in EMC’s Global IT organization (Darryl Smith), this book will provide critical instructions for deploying Oracle Standalone and Real Application Cluster (RAC) on VMware enterprise virtualized platforms. You will learn how to setup an Oracle ecosystem within a virtualized infrastructure for enterprise deployments. We will share industry best practices to install and configure Linux, and to deploy Oracle Grid Infrastructure and Databases in a matter of hours. Whether you are deploying a 4 node RAC cluster or deploying a single standalone database, we will lead you to the foundation which will allow you to rapidly provision database-as-a-service. We will disseminate key details on creating golden image templates from the virtual machine to the Oracle binaries and databases. You will learn from industry experts how to troubleshoot production Oracle database servers running in VMware virtual infrastructures.

Audience:
Database Admins/Architects, VMware Admins, System Admins/Architects, Project Architects
This book is designed for Oracle DBAs and VMware administrators needing to learn the art of virtualizing Oracle.


Many of you have probably heard me speak over the years (at OOW, local user groups, and at the local bars) about the virtues of simplification, rationalization, and consolidation. I mentioned the different database consolidation and multi-tenancy models: virtualization-based, database instance, and schema consolidation.

The following paper I wrote [when I was at Oracle] touches on this topic in detail –
http://www.oracle.com/technetwork/database/database-cloud/database-cons-best-practices-1561461.pdf

And here’s a more current version of that paper, updated for 12c and PDB.
http://www.oracle.com/us/products/database/database-private-cloud-wp-360048.pdf

Those who have done consolidation via virtualization platforms such as VMware or OVM know it's fairly straightforward; it's a simple "drag and drop," as I say. Similarly, consolidation of many databases as separate database instances on a platform is also fairly straightforward. It's the consolidation of many disparate schemas into a common database that makes things interesting. A couple of key points on "why schema consolidation" from the paper:

  • The schema consolidation model has consistently provided the most opportunities for reducing operating expenses, since you only have a single big database to maintain, monitor, and manage.
  • Though schema consolidation allows the best ROI (with respect to CapEx/OpEx), you are sacrificing flexibility for compaction. As I've stated in my presentations and papers, "…consolidation and isolation move in opposite directions." The more you consolidate, the less capability you'll have for isolation; in contrast, the more you try to isolate, the more you sacrifice the benefits of consolidation.
  • Custom (home-grown) apps have been the best-fit use cases for schema consolidation, since application owners and developers have more control over how the application and schema are built.

Well, with the 12c Oracle Database feature Pluggable Database (PDB), you now have more incentive to lean towards schema consolidation. PDB "begins" to eliminate the typical issues that come with schema consolidation, such as namespace collisions, security, and granularity of recovery.

In this first part of the three-part series on PDB, I'll illustrate the installation of the 12c Database with the Pluggable Database feature. The upcoming parts of the series will cover management and user isolation (security) with PDB.

But first a very, very high-level primer on terminology:

  • Root Container Database – The root CDB (cdb$root) is the real database (if you will), and the name you give it will be the name of the instance. The CDB owns the SGA and running processes. You can have many CDBs on the same database server (each with its own PDBs), and the cool thing is that having more than one CDB allows DBAs to couple a database instance consolidation model with schema consolidation. For best scalability, mix in RAC and leverage all the benefits of RAC services, QoS, and workload distribution. The seed PDB (PDB$SEED) is an Oracle-supplied system template that the CDB can use to create new PDBs; you cannot add or modify objects in PDB$SEED.
  • Pluggable Database – The PDBs are sub-containers that are serviced by CDB resources. The true beauty of the PDB is its mobility; i.e., I can unplug and plug 12c databases into and out of CDBs, and I can "create like" new PDBs from an existing PDB, like full snapshots. (A quick sketch of poking around the containers follows this list.)
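To make the containers concrete, here's a minimal sketch of looking around once the CDB is up (run as a privileged common user from the root):

-- which container is my session in?
show con_name

-- which PDBs does this CDB hold, and in what state?
select con_id, name, open_mode from v$pdbs;

-- hop the session into a PDB (PDBOBI is the PDB created during this install)
alter session set container=PDBOBI;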

So, now I’ll illustrate the important/interesting and new screens of 12c Database Installer:

PDB12c 2013 08 19 17 22 42

We chose Server Class

PDB12c 2013 08 19 17 23 09

It will be single instance… for now 🙂

PDB12c 2013 08 19 17 23 37

Choose Advanced Install

PDB12c 2013 08 19 17 24 07

And now for the fun step. We choose Enterprise Edition, as the Pluggable Database feature is only available in EE.

PDB12c 2013 08 19 17 24 47

The next couple of screens ask about the Oracle Home and Oracle Base locations, nothing new, but look at the screen for Step 11. This is where the fun is. We specify the database name, but we also specify whether we want to create a Container Database. If we check it, it allows us to create our first PDB in the Container Database (CDB). In my example I specified Yoda as my CDB name and (in keeping with the Star Wars theme) named the PDB PDBOBI.

PDB12c 2013 08 19 17 27 19

We obviously choose ASM as the storage location

PDB12c 2013 08 19 17 28 18

And we have the opportunity to register this new target database with EM Cloud Control.

PDB12c 2013 08 20 17 46 20

The rest of the steps/screens are standard stuff, so I won't bore you with them. But here's an excerpt from the database alert log that shows the magic underneath:

create pluggable database PDB$SEED as clone  using '/u02/app/oracle/product/12.1.0/dbhome_1/assistants/dbca/templates//pdbseed.xml'  source_file_name_convert = ('/ade/b/3593327372/oracle/oradata/seeddata/pdbseed/temp01.dbf','+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/pdbseed_temp01.dbf',
'/ade/b/3593327372/oracle/oradata/seeddata/pdbseed/system01.dbf','+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.271.823892297',
'/ade/b/3593327372/oracle/oradata/seeddata/pdbseed/sysaux01.dbf','+PDBDATA/YODA/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.270.823892297') file_name_convert=NONE  NOCOPY
Mon Aug 19 18:58:59 2013
….
…. 
Post plug operations are now complete.
Pluggable database PDB$SEED with pdb id - 2 is now marked as NEW.


create pluggable database pdbobi as clone  using '/u02/app/oracle/product/12.1.0/dbhome_1/assistants/dbca/templates//sampleschema.xml'  source_file_name_convert = ('/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/temp01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/pdbobi_temp01.dbf',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/example01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/system01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/SAMPLE_SCHEMA_users01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/sysaux01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813') file_name_convert=NONE  NOCOPY
Mon Aug 19 19:07:42 2013
….
….
****************************************************************
Post plug operations are now complete.
Pluggable database PDBOBI with pdb id - 3 is now marked as NEW.
****************************************************************
Completed: create pluggable database pdbobi as clone  using '/u02/app/oracle/product/12.1.0/dbhome_1/assistants/dbca/templates//sampleschema.xml'  source_file_name_convert = ('/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/temp01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/pdbobi_temp01.dbf',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/example01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/example.275.823892813',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/system01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/system.276.823892813',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/SAMPLE_SCHEMA_users01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/users.277.823892813',
'/ade/b/3593327372/oracle/oradata/seeddata/SAMPLE_SCHEMA/sysaux01.dbf','+PDBDATA/YODA/DD7D8C1D4C234B38E04325AAE80AF577/DATAFILE/sysaux.274.823892813') file_name_convert=NONE  NOCOPY
alter pluggable database pdbobi open restricted
Pluggable database PDBOBI dictionary check beginning
Pluggable Database PDBOBI Dictionary check complete
Database Characterset is US7ASCII
….
….

XDB installed.

XDB initialized.
Mon Aug 19 19:08:01 2013
Pluggable database PDBOBI opened read write
Completed: alter pluggable database pdbobi open restricted

I will cover more of PDB creation and management in the next blog. But I’ll leave you with this teaser of DBCA screen:

PDB12c 2013 08 20 17 46 20


I generally don't get time to play with single-instance ASM since I live in the RAC world so much. But I needed to quickly create a 12c PDB configuration over ASM.
If you recall from 11gR2, the install and configuration of Grid Infrastructure is straightforward. However, there are cases where the install/config doesn't go smoothly. That's exactly what happened in my case. I'm not sure if it was user error, just a 12c bug (the code is barely in the field), or a combination of the two.

In any case, what this blog is going to touch on is how you recover, or rather escape, from a mal-configured/messed-up 12c Grid Infrastructure for Standalone Server install.

Since everybody loves logs and traces, I'll walk through some of the issues. First off, the installation of the software went without a hitch; it's the scary, tentative root.sh that went wacko.

Here’s the error message from the GI alert log:

OHASD starting
Timed out waiting for init.ohasd script to start; posting an alert
OHASD exiting; Could not init OLR
OHASD stderr redirected to ohasdOUT.log

Here’s the trace info from ohasd.log:

2013-08-16 13:10:47.347: [ default][357553728] OHASD Daemon Starting. Command string :reboot
2013-08-16 13:10:47.347: [ default][357553728] OHASD params []
2013-08-16 13:10:47.662: [ default][357553728]
2013-08-16 13:10:47.662: [ default][357553728] Initializing OLR
2013-08-16 13:10:47.662: [ default][357553728]proa_init: OLR Abstraction layer initialization. Bootlevel:[1]
2013-08-16 13:10:47.670: [  OCRAPI][357553728]a_init: Successfully initialized the patch management context.
2013-08-16 13:10:47.670: [  OCRAPI][357553728]a_init: Successfully initialized the OLR specific states.
2013-08-16 13:10:47.670: [  OCRAPI][357553728]a_init:13: Clusterware init successful
2013-08-16 13:10:47.670: [  OCRAPI][357553728]a_init:15: Successfully initialized the Cache layer.
2013-08-16 13:10:47.670: [  OCRRAW][357553728]proprioo: opening OCR device(s)
2013-08-16 13:10:47.670: [  OCRRAW][357553728]proprioo: Successfully opened the non-ASM locations if configured.
2013-08-16 13:10:47.670: [  OCRRAW][357553728]proprioo: for disk 0 (/u01/app/oracle/product/12.1.0/grid/cdata/localhost/pdb12c.olr), id match (1), total id sets, (1) need recover (0), my votes (0), total votes (0), commit_lsn (1), lsn (1)
2013-08-16 13:10:47.670: [  OCRRAW][357553728]proprioo: my id set: (799232119, 1028247821, 0, 0, 0)
2013-08-16 13:10:47.671: [  OCRRAW][357553728]proprioo: 1st set: (799232119, 1028247821, 0, 0, 0)
2013-08-16 13:10:47.671: [  OCRRAW][357553728]proprioo: 2nd set: (0, 0, 0, 0, 0)
2013-08-16 13:10:47.671: [  OCRRAW][357553728]proprinit: Successfully initialized the I/O module (proprioini).
2013-08-16 13:10:47.671: [  OCRRAW][357553728]proprinit: Successfully initialized the backend handle (propribctx).
2013-08-16 13:10:47.671: [  OCRAPI][357553728]proa_init: Successfully initialized the Storage Layer.
2013-08-16 13:10:47.674: [  OCRAPI][357553728]proa_init: Successfully initlaized the Messaging Layer.

<---- everything okay up to this point

2013-08-16 13:10:47.698: [  OCRAPI][357553728]a_init:18!: Thread init unsuccessful : [24]
2013-08-16 13:10:47.742: [  CRSOCR][357553728] OCR context init failure.  Error: PROCL-24: Error in the messaging layer Messaging error [gipcretFail] [1]
2013-08-16 13:10:47.743: [ default][357553728] Created alert : (:OHAS00106:) :  OLR initialization failed, error: PROCL-24: Error in the messaging layer Messaging error [gipcretFail] [1]
2013-08-16 13:10:47.743: [ default][357553728][PANIC] OHASD exiting; Could not init OLR
2013-08-16 13:10:47.743: [ default][357553728] Done.

2013-08-16 13:27:35.715: [ default][2626647616] Created alert : (:OHAS00117:) :  TIMED OUT WAITING FOR OHASD MONITOR
2013-08-16 13:27:35.716: [ default][2626647616] OHASD Daemon Starting. Command string :reboot
2013-08-16 13:27:35.716: [ default][2626647616] OHASD params []
2013-08-16 13:27:35.717: [ default][2626647616]
2013-08-16 13:27:35.717: [ default][2626647616] Initializing OLR
2013-08-16 13:27:35.717: [ default][2626647616]proa_init: OLR Abstraction layer initialization. Bootlevel:[1]
2013-08-16 13:27:35.724: [  OCRAPI][2626647616]a_init: Successfully initialized the patch management context.

2013-08-16 13:27:35.724: [  OCRAPI][2626647616]a_init: Successfully initialized the OLR specific states.
2013-08-16 13:27:35.724: [  OCRAPI][2626647616]a_init:13: Clusterware init successful
2013-08-16 13:27:35.724: [  OCRAPI][2626647616]a_init:15: Successfully initialized the Cache layer.
2013-08-16 13:27:35.724: [  OCRRAW][2626647616]proprioo: opening OCR device(s)
2013-08-16 13:27:35.724: [  OCRRAW][2626647616]proprioo: Successfully opened the non-ASM locations if configured.
2013-08-16 13:27:35.725: [  OCRRAW][2626647616]proprioo: for disk 0 (/u01/app/oracle/product/12.1.0/grid/cdata/localhost/pdb12c.olr), id match (1), total id sets, (1) need recover (0), my votes (0), total votes (0), commit_lsn (1), lsn (1)
2013-08-16 13:27:35.725: [  OCRRAW][2626647616]proprioo: my id set: (799232119, 1028247821, 0, 0, 0)
2013-08-16 13:27:35.725: [  OCRRAW][2626647616]proprioo: 1st set: (799232119, 1028247821, 0, 0, 0)
2013-08-16 13:27:35.725: [  OCRRAW][2626647616]proprioo: 2nd set: (0, 0, 0, 0, 0)
2013-08-16 13:27:35.725: [  OCRRAW][2626647616]proprinit: Successfully initialized the I/O module (proprioini).
2013-08-16 13:27:35.725: [  OCRRAW][2626647616]proprinit: Successfully initialized the backend handle (propribctx).
2013-08-16 13:27:35.725: [  OCRAPI][2626647616]proa_init: Successfully initialized the Storage Layer.
2013-08-16 13:27:35.726: [  OCRAPI][2626647616]proa_init: Successfully initlaized the Messaging Layer.
2013-08-16 13:27:35.731: [  OCRMSG][2608776960]prom_listen: Failed to listen at endpoint [1]
2013-08-16 13:27:35.732: [  OCRMSG][2608776960]GIPC error [1] msg [gipcretFail]
2013-08-16 13:27:35.732: [  OCRSRV][2608776960]th_listen: prom_listen failed retval= 24, addr= [(ADDRESS=(PROTOCOL=ipc)(KEY=procr_local_conn_0_PROL))]
2013-08-16 13:27:35.732: [  OCRSRV][2626647616]th_init: Local listener did not reach valid state

			<---- This can mean some issue with network socket file location or permission.  

2013-08-16 13:27:35.732: [  OCRAPI][2626647616]a_init:18!: Thread init unsuccessful : [24]
2013-08-16 13:27:35.776: [  CRSOCR][2626647616] OCR context init failure.  Error: PROCL-24: Error in the messaging layer Messaging error [gipcretFail] [1]
2013-08-16 13:27:35.776: [ default][2626647616] Created alert : (:OHAS00106:) :  OLR initialization failed, error: PROCL-24: Error in the messaging layer Messaging error [gipcretFail] [1]
2013-08-16 13:27:35.776: [ default][2626647616][PANIC] OHASD exiting; Could not init OLR
2013-08-16 13:27:35.777: [ default][2626647616] Done.

After triaging a bit, I think we had some wrong permissions on the directory structures, plus some hostname stuff wasn't accurate. Hopefully that was it. Now let's redo the following to recover and move forward:

I first tried to stop HAS to ensure it's not active:

[root@pdb12c grid]# crsctl stop  has -f 
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.

[root@pdb12c grid]# cd /u01/app/oracle/product/12.1.0/grid/crs/install

Let's try to execute deconfig to fix the broken configuration:

[root@pdb12c install]# ./roothas.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
2013/08/17 15:35:04 CLSRSC-357: Failed to stop current Oracle Clusterware stack during upgrade
2013/08/17 15:35:05 CLSRSC-180: An error occurred while executing the command '/etc/init.d/ohasd deinstall' (error code -1)
Failure in execution (rc=-1, 0, Inappropriate ioctl for device) for command /etc/init.d/ohasd deinstall
2013/08/17 15:35:05 CLSRSC-337: Successfully deconfigured Oracle Restart stack

Hopefully the deconfig and stack stop worked:

[root@pdb12c install]# ps -ef|grep has
root      5679  2968  0 15:35 pts/0    00:00:00 grep has

The stack is down, so let's re-run root.sh:

[root@pdb12c grid]# ./root.sh
Performing root user operation for Oracle 12c 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE 
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node pdb12c successfully pinned.
2013/08/17 15:35:41 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
pdb12c     2013/08/17 15:35:58     /u01/app/oracle/product/12.1.0/grid/cdata/pdb12c/backup_20130817_153558.olr
2013/08/17 15:37:50 CLSRSC-327: Successfully configured Oracle Grid Infrastructure for a Standalone Server

Let’s verify this successful OLR initialization by looking at ohasd.log:

2013-08-17 15:35:46.993: [ default][2945685056] OHASD Daemon Starting. Command string :reboot
2013-08-17 15:35:46.993: [ default][2945685056] OHASD params []
2013-08-17 15:35:46.994: [ default][2945685056]
2013-08-17 15:35:46.994: [ default][2945685056] Initializing OLR
2013-08-17 15:35:46.994: [ default][2945685056]proa_init: OLR Abstraction layer initialization. Bootlevel:[1]
2013-08-17 15:35:46.998: [  OCRAPI][2945685056]a_init: Successfully initialized the patch management context.
2013-08-17 15:35:46.998: [  OCRAPI][2945685056]a_init: Successfully initialized the OLR specific states.
2013-08-17 15:35:46.998: [  OCRAPI][2945685056]a_init:13: Clusterware init successful
2013-08-17 15:35:46.998: [  OCRAPI][2945685056]a_init:15: Successfully initialized the Cache layer.
2013-08-17 15:35:46.998: [  OCRRAW][2945685056]proprioo: opening OCR device(s)
2013-08-17 15:35:46.998: [  OCRRAW][2945685056]proprioo: Successfully opened the non-ASM locations if configured.
2013-08-17 15:35:46.998: [  OCRRAW][2945685056]proprioo: for disk 0 (/u01/app/oracle/product/12.1.0/grid/cdata/localhost/pdb12c.olr), id match (1), total id sets, (1) need recover (0), my votes (0), total votes (0), commit_lsn (1), lsn (1)
2013-08-17 15:35:46.998: [  OCRRAW][2945685056]proprioo: my id set: (799232119, 1028247821, 0, 0, 0)
2013-08-17 15:35:46.998: [  OCRRAW][2945685056]proprioo: 1st set: (799232119, 1028247821, 0, 0, 0)
2013-08-17 15:35:46.998: [  OCRRAW][2945685056]proprioo: 2nd set: (0, 0, 0, 0, 0)
2013-08-17 15:35:46.999: [  OCRRAW][2945685056]proprinit: Successfully initialized the I/O module (proprioini).
2013-08-17 15:35:46.999: [  OCRRAW][2945685056]proprinit: Successfully initialized the backend handle (propribctx).
2013-08-17 15:35:46.999: [  OCRAPI][2945685056]proa_init: Successfully initialized the Storage Layer.
2013-08-17 15:35:47.000: [  OCRAPI][2945685056]proa_init: Successfully initlaized the Messaging Layer.
2013-08-17 15:35:47.003: [  OCRAPI][2945685056]a_init:18: Thread init successful
2013-08-17 15:35:47.003: [  OCRAPI][2945685056]a_init:19: Client init successful
2013-08-17 15:35:47.003: [  OCRAPI][2945685056]a_init:21: OLR init successful. Init Level [1]

			<--- this is a good sign, but we still need to ensure OHASD starts up and initializes the CRS Policy Engine:
 
2013-08-17 15:35:47.003: [ default][2945685056] Checking version compatibility...
2013-08-17 15:35:47.003: [ default][2945685056]clsvactversion:4: Retrieving Active Version from local storage.
2013-08-17 15:35:47.004: [ default][2945685056] Version compatibility check passed:  Software Version: 12.1.0.1.0 Release Version: 12.1.0.1.0 Active Version: 12.1.0.1.0
2013-08-17 15:35:47.004: [ default][2945685056] Running mode check...
2013-08-17 15:35:47.004: [ default][2945685056] OHASD running as the Non-Privileged user

			<--- this is also a good sign, getting there...

2013-08-17 15:35:47.190: [   CRSPE][2889111296] {0:0:2} PE Role|State Update: old role [MASTER] new [MASTER]; old state [Starting] new [Running]
			<--- PE is running, getting there some more...

2013-08-17 15:35:47.190: [   CRSPE][2889111296] {0:0:2} Processing pending join requests: 1
2013-08-17 15:35:47.190: [UiServer][2413815552] UI comms listening for GIPC events.
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} Special Value map for : pdb12c
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} CRS_CSS_NODENAME=pdb12c
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} CRS_CSS_NODENUMBER=0
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} CRS_CSS_NODENUMBER_PLUS1=1
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} CRS_HOME=/u01/app/oracle/product/12.1.0/grid
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} Server Attributes for : pdb12c
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} ACTIVE_CSS_ROLE=UNAVAILABLE
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} CONFIGURED_CSS_ROLE=
2013-08-17 15:35:47.191: [   CRSPE][2889111296] {0:0:2} Server [pdb12c] has been registered with the PE data model
2013-08-17 15:35:47.191: [    AGFW][2899617536] {0:0:2} Agfw Proxy Server received the message: PE_HANDHSAKE[Proxy] ID 20487:33
2013-08-17 15:35:47.191: [    AGFW][2899617536] {0:0:2} Received handshake message from PE.

		<--- PE is initialized, getting even closer...

2013-08-17 15:35:47.192: [    AGFW][2899617536] {0:0:2} Added resource type: application
2013-08-17 15:35:47.192: [    AGFW][2899617536] {0:0:2} Added resource type: cluster_resource
2013-08-17 15:35:47.192: [    AGFW][2899617536] {0:0:2} Added resource type: generic_application
2013-08-17 15:35:47.192: [    AGFW][2899617536] {0:0:2} Added resource type: local_resource

I think the basic GI stack is there, let's verify:

[root@pdb12c ohasd]# ps -ef|grep oracle
root      3241  3232  0 14:23 pts/2    00:00:00 su - oracle
oracle    3242  3241  0 14:23 pts/2    00:00:00 -bash
oracle    5959     1  0 15:35 ?        00:00:02 /u01/app/oracle/product/12.1.0/grid/bin/ohasd.bin reboot
oracle    6067     1  0 15:35 ?        00:00:00 /u01/app/oracle/product/12.1.0/grid/bin/oraagent.bin
oracle    6080     1  0 15:35 ?        00:00:00 /u01/app/oracle/product/12.1.0/grid/bin/evmd.bin
oracle    6154  6080  0 15:35 ?        00:00:00 /u01/app/oracle/product/12.1.0/grid/bin/evmlogger.bin -o /u01/app/oracle/product/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.info -l /u01/app/oracle/product/12.1.0/grid/log/[HOSTNAME]/evmd/evmlogger.log


Check crsctl stat res

[root@pdb12c ohasd]# crsctl stat res -init -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ons
               OFFLINE OFFLINE      pdb12c                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       pdb12c                   STABLE
--------------------------------------------------------------------------------
Odd that CSS hasn't started yet, and even odder that ASM is not instantiated.  UGH!!
Now we have to do some stitch work.  Again, all of this is unnecessary; it's only because the installer didn't finish its work that we have to do this.
So let's add the local listener and ASM, in this order!!

[oracle@pdb12c grid]$ srvctl add listener

[oracle@pdb12c grid]$ srvctl config ons
ONS exists: Local port 6100, remote port 6200, EM port 2016
[oracle@pdb12c grid]$ srvctl config listener
Name: LISTENER
Home: /u01/app/oracle/product/12.1.0/grid

[oracle@pdb12c grid]$ srvctl add asm


[oracle@pdb12c grid]$ srvctl config asm
ASM home: /u01/app/oracle/product/12.1.0/grid
Password file: 
ASM listener: LISTENER
Spfile: 
ASM diskgroup discovery string: ++no-value-at-resource-creation--never-updated-through-ASM++

<--- Notice that ASM has no spfile associated with it yet, but we can still start it with default parameters

[oracle@pdb12c grid]$ srvctl start asm

[oracle@pdb12c asmca]$ ps -ef|grep asm
root     51260     2  0 15:37 ?        00:00:00 [asmWorkerThread]
root     51261     2  0 15:37 ?        00:00:00 [asmWorkerThread]
root     51262     2  0 15:37 ?        00:00:00 [asmWorkerThread]
root     51263     2  0 15:37 ?        00:00:00 [asmWorkerThread]
root     51264     2  0 15:37 ?        00:00:00 [asmWorkerThread]
oracle   53092     1  0 16:26 ?        00:00:00 asm_pmon_+ASM
oracle   53094     1  0 16:26 ?        00:00:00 asm_psp0_+ASM
oracle   53096     1  3 16:26 ?        00:00:01 asm_vktm_+ASM
oracle   53100     1  0 16:26 ?        00:00:00 asm_gen0_+ASM
oracle   53102     1  0 16:26 ?        00:00:00 asm_mman_+ASM
oracle   53106     1  0 16:26 ?        00:00:00 asm_diag_+ASM
oracle   53108     1  0 16:26 ?        00:00:00 asm_dia0_+ASM
oracle   53110     1  0 16:26 ?        00:00:00 asm_dbw0_+ASM
oracle   53112     1  0 16:26 ?        00:00:00 asm_lgwr_+ASM
oracle   53115     1  0 16:26 ?        00:00:00 asm_ckpt_+ASM
oracle   53117     1  0 16:26 ?        00:00:00 asm_smon_+ASM
oracle   53119     1  0 16:26 ?        00:00:00 asm_lreg_+ASM
oracle   53121     1  0 16:26 ?        00:00:00 asm_rbal_+ASM
oracle   53123     1  0 16:26 ?        00:00:00 asm_gmon_+ASM
oracle   53125     1  0 16:26 ?        00:00:00 asm_mmon_+ASM
oracle   53127     1  0 16:26 ?        00:00:00 asm_mmnl_+ASM


Now run ASMCA to create the disk group:

PDB12c 2013 08 17 16 34 15


Let's check crsctl stat res again

[oracle@pdb12c asmca]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       pdb12c                   STABLE
ora.PDBDATA.dg
               ONLINE  ONLINE       pdb12c                   STABLE
ora.asm
               ONLINE  ONLINE       pdb12c                   Started,STABLE
ora.ons
               OFFLINE OFFLINE      pdb12c                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       pdb12c                   STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       pdb12c                   STABLE
--------------------------------------------------------------------------------

Cool, we now have the majority of the GI stack started!!  Also notice that the PDBDATA disk group resource got created automagically when the disk group was created and mounted (this was the same case in 11gR2).
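Just to sanity-check from the ASM side as well, here's a quick sketch run from a SQL*Plus session on the +ASM instance:

-- confirm the new disk group is mounted and see how much space it has
select name, state, type, total_mb, free_mb from v$asm_diskgroup;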

But I wonder how cssd got placed in the ONLINE state; we didn't change that state directly.
The answer has to do with the startup dependencies of the ASM resource.  In this case there is a "hard" pull-up dependency on cssd, and ASM also has a "weak" dependency on the listener, so that got onlined too.  We can see that from crsctl stat res ora.asm -p:

[oracle@pdb12c asmca]$ crsctl stat res ora.asm -p
NAME=ora.asm
TYPE=ora.asm.type
….
….
START_DEPENDENCIES=hard(ora.cssd) weak(ora.LISTENER.lsnr)

….
….

STOP_DEPENDENCIES=hard(ora.cssd)



Note that ASM is using a basic/generic init.ora file.  So let's create a real, usable one:

cat $HOME/init+ASM.ora
sga_target=1536M
asm_diskgroups='PDBDATA'
asm_diskstring='/dev/sd*'
instance_type='asm'
remote_login_passwordfile='EXCLUSIVE'

SQL> create spfile='+PDBDATA' from pfile='$HOME/init+ASM.ora'
  2  ;

File created.
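
On the next ASM restart it should pick up this spfile; a quick, no-frills way to confirm which parameter file the instance is actually using is a sketch like:

-- from a SQL*Plus session on the ASM instance
show parameter spfile
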
To really validate that the GI stack is up and running and that ASM is cool, just for fun let's create an ACFS filesystem.  This validates the communication between the layers of the HAS/CRS stack, ASM, as well as the Policy Engine:

PDB12c 2013 08 17 16 37 48

...

PDB12c 2013 08 17 16 39 49

[root@pdb12c ohasd]# mkdir -p /u01/app/oracle/acfsmounts/pdbdata_pdbvol1
[root@pdb12c ohasd]# 
[root@pdb12c ohasd]# /bin/mount -t acfs /dev/asm/pdbvol1-339 /u01/app/oracle/acfsmounts/pdbdata_pdbvol1

[root@pdb12c ohasd]# df -ha
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_12crac1-lv_root
                       19G   12G  5.8G  67% /
proc                     0     0     0   -  /proc
sysfs                    0     0     0   -  /sys
devpts                   0     0     0   -  /dev/pts
tmpfs                 1.4G  637M  792M  45% /dev/shm
/dev/sda1             485M   55M  405M  12% /boot
/dev/asm/pdbvol1-339  1.0G   41M  984M   4% /u01/app/oracle/acfsmounts/pdbdata_pdbvol1

And there you have it!  A re-stitched GI stack.  Again, I hope nobody has to go through that, but now at least you know!!

Now the next step is to create the PDB database over ASM!!


Let’s start the 12c Flex Cluster install with the execution of the traditional runInstaller script.

Rac12c1 2013 07 30 22 55 32

Let’s choose Install 12c Flex Cluster

Rac12c1 2013 07 30 22 56 04

And yes, we’re picking English… since we live in English-land.

Rac12c1 2013 07 30 22 56 29

Now let’s specify the SCAN information. And yes, we’ll need to define GNS, and since we’re using GNS, we’ll need to get DNS domain delegation set up. In our case we have us.viscosityna-test.com as the sub-domain.

Rac12c1 2013 07 30 22 57 07

This is the new stuff!! We define which nodes in the cluster will be Hub nodes and which will be Leaf nodes. Note, you’ll occasionally hear the terms Hub and RIM used interchangeably. It’s just historical!

Rac12c1 2013 07 30 22 57 42

Let’s specify the interfaces. You’ve all seen this screen before, but it now has a small twist to it: you can specify a separate “ASM & Private” network.

Rac12c1 2013 07 30 22 58 32

Now the validation!

Rac12c1 2013 07 30 23 01 02

This step is new too. You have the option to configure the Grid Infrastructure Management Repository, which is used for storing Cluster Health Monitor (CHM) data. In 11gR2 this was stored in a Berkeley DB database and was created by default. Now this option allows users to specify an Oracle Database to store the CHM data. This database is a single-instance database named MGMTDB by default. It is an internal CRS resource, which has HA-failover capabilities. I’ll cover this topic in more detail later, but I should mention that this is the only opportunity to create this repository; i.e., you have to uninstall/reinstall to get this GI repository option.

Rac12c1 2013 07 30 23 00 38

Now the fun stuff! Let’s create the ASM disk group. Note that if you are configuring a GI repository, you’ll need a minimum of 5GB of disk (for testers and laptop folks).

Rac12c1 2013 07 31 17 47 57

Now define passwords

Rac12c1 2013 07 31 17 48 32

Verify… and yes, we’re cool with the passwords

Rac12c1 2013 07 31 17 49 00

No IPMI

Rac12c1 2013 07 31 17 49 32

Now define the group definitions

Rac12c1 2013 07 31 17 50 06

Where we’re gonna put the Oracle Home and Oracle Base

Rac12c1 2013 07 31 17 52 41

Now this is really cool. I can specify the root password/credentials for downstream root-required actions.

Rac12c1 2013 07 31 18 02 05

Gotta fix some things; run the fixup.sh script.

Rac12c1 2013 07 31 21 43 21

Now off to the races !!!

Rac12c1 2013 07 31 23 33 54