Some of you may be old enough to recall the song “Secret Agent Man” by Johnny Rivers:
There’s a man who leads a life of danger.
To everyone he meets he stays a stranger.
With every move he makes another chance he takes.
Odds are he won’t live to see tomorrow.

Well, that’s how I felt at a customer site recently (well, maybe not exactly).

They recently had an issue with a node eviction. That in itself deserves a blog post later.
But anyway, the DBA there was asking, “What are all these Clusterware processes, and how do you even traverse all the log files?”
After 15 minutes of discussion, I realized I had thoroughly confused him.
So I suggested we start from the beginning: first try to understand Oracle Clusterware processes, agents, and their relationships, then draw up some pictures. Maybe then we’d have a better feel for the hierarchy.

Let’s start with the grand master himself, HAS (or OHASD, the Oracle High Availability Services daemon).

OHASD manages the Clusterware daemons, including CRSD. We’ll discuss CRSD resources and startup in another blog post. For now, just keep in mind that OHASD starts up CRSD (at some point later in the stack); once CRSD is started, it manages the remaining startup of the stack.

The -init flag is needed for crsctl to operate on OHASD resources, e.g., crsctl stat res ora.crsd -init.
To list resources started by CRSD, you would issue just crsctl stat res.
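
A quick way to see the difference side by side (the -t flag simply formats the output as a table):

crsctl stat res -init -t     # resources managed by OHASD (lower stack)
crsctl stat res -t           # resources managed by CRSD (upper stack)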

OHASD resource startup order
ora.gipcd
ora.gpnpd -> Starts ora.mdnsd because of dependency
ora.cssd -> Starts ora.diskmon and ora.cssdmonitor because of dependency
ora.ctssd
ora.evmd
ora.crsd
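
You can verify these relationships yourself by dumping a resource’s profile and looking at its dependency attributes. For example, to see what ora.cssd depends on at startup and shutdown:

crsctl stat res ora.cssd -init -p | grep -E "START_DEPENDENCIES|STOP_DEPENDENCIES"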

OHASD has agents that work for him: oraagent, orarootagent, cssdagent, and cssdmonitoragent. Each agent manages and handles very specific OHASD resources, and each agent runs as a specific user (root or the clusterware user).
For example, the ora.cssd resource is started and monitored by cssdagent (running as root), whereas ora.asm is handled by oraagent (running as the clusterware user).

All agent log files, as well as other OHASD resource log files, live under the CRS home: $ORACLE_HOME/log/<hostname>/agent/{ohasd|crsd}/<agentname>_<owner>/<agentname>_<owner>.log for the agents, and $ORACLE_HOME/log/<hostname>/<resource_name>/<resource_name>.log for the resources, respectively.
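
For example, on a host named rhel59a (and assuming the clusterware owner is a user named grid, which is just an illustrative name), you would find logs such as:

$ORACLE_HOME/log/rhel59a/agent/ohasd/oraagent_grid/oraagent_grid.log            # OHASD's oraagent, running as grid
$ORACLE_HOME/log/rhel59a/agent/crsd/orarootagent_root/orarootagent_root.log     # CRSD's orarootagent, running as root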

To find out which agent is associated with a resource, issue the following:

[root@rhel59a log]# crsctl stat res ora.cssd -init -p | grep "AGENT_FILENAME"
AGENT_FILENAME=%CRS_HOME%/bin/cssdagent%CRS_EXE_SUFFIX%

For example, for CRSD we find:

[root@rhel59a bin]# crsctl stat res ora.crsd -init -p | grep "AGENT_FILENAME"
AGENT_FILENAME=%CRS_HOME%/bin/orarootagent%CRS_EXE_SUFFIX%

Note that an agent log file can have log messages for more than one resource, since those resources are managed by the same agent.

When I debug a resource, I start by going down the following Clusterware log file tree:
1. Start with Clusterware alert.log

2. Depending on the resource (managed by OHASD or CRSD), I look at $ORACLE_HOME/log/<hostname>/ohasd/ohasd.log or $ORACLE_HOME/log/<hostname>/crsd/crsd.log

3. Then the agent log file, as I mentioned above

4. Then finally the resource’s log file itself (it’ll be listed in the agent log)
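
Putting that together on host rhel59a, a minimal walk down the tree for an OHASD-managed resource might look like the following (the grid owner name is an assumption for illustration):

tail -100 $ORACLE_HOME/log/rhel59a/alertrhel59a.log                              # 1. Clusterware alert log
tail -100 $ORACLE_HOME/log/rhel59a/ohasd/ohasd.log                               # 2. daemon log
tail -100 $ORACLE_HOME/log/rhel59a/agent/ohasd/oraagent_grid/oraagent_grid.log   # 3. agent log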

Item #2 requires a little more discussion, and it will be the topic of our next post.


Posted by Charles Kim, Oracle ACE Director, VMware vExpert

We will review the basics of installing Red Hat Enterprise Linux 6 Update 4 for the Intel 64-bit platform in a virtualized infrastructure, to prepare an environment for installing an Oracle cluster and database(s). To simplify the process and to demonstrate package installation procedures, we will select the Basic Server installation option and the packages required to create a local yum repository from the installation media, so we can install packages with their dependencies.

If you have a Red Hat subscription, you can download RHEL 6 ISO image files from Red Hat’s Software & Download Center customer portal. If you do not already have a subscription, you can obtain a free 30-day evaluation subscription from https://access.redhat.com/downloads. Each of the DVD ISO images is about 3-4 GB in size. After you download the ISO image, create a bootable DVD or USB drive and reboot the system to start the installation.
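If you go the USB route, one simple way to write the image from an existing Linux machine is with dd (the ISO file name and target device below are placeholders; double-check the device name, since dd will overwrite whatever it points at):

dd if=rhel-server-6.4-x86_64-dvd1.iso of=/dev/sdX bs=4M
sync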

[Screenshot: Boot Menu]
In the Boot Menu, if there is no response within 60 seconds, the default option to Install or upgrade an existing system using the GUI will be executed.

[Screenshot: Media check]
You will be given the option to perform a disk check on the installation media. Click on Skip.

[Screenshot: Welcome screen]
The Welcome screen does not require any input. Click on Next to continue.

[Screenshot: Language selection]
Select the language to be used for the installation. Choose English and click on Next.

[Screenshot: Keyboard selection]
Please select the default U.S. English and click on Next

[Screenshot: Basic storage devices]
Select the Basic Storage type and click on Next

[Screenshot: Storage device warning]

Since this is a fresh install, click on the Yes, discard any data button

[Screenshot: Hostname entry]

Add hostname for the node
Click on Configure Network
Click on Edit

[Screenshot: Editing System eth0]

Change the Method to Manual
Supply IP and Netmask
Click on Apply
Then click on Close
Then click on Next

[Screenshot: Time zone selection]

Select your timezone
Click on Next

[Screenshot: Root password]

Enter the password for root
Click on Next

[Screenshot: Weak password warning]

If this is a non-development environment, you will want to choose a more secure password. Since this is my lab, I will click on Use Anyway and continue.

[Screenshot: Installation type]

Click on Review and Modify partition layout
Click on Next

[Screenshot: Select a device]

Click on Create

[Screenshot: Add Partition]

Select /tmp for Mount Point
Enter 4096 for Size (MB)
Click on OK

[Screenshot: Format warnings]

Click on Format

[Screenshot: Storage configuration warning]

Click on Write changes to disk

[Screenshot: Boot loader]

Click Next from the Boot loader list screen

[Screenshot: Basic Server selection]

Select Basic Server and click on Next
The installer will perform a dependency check and begin the installation

[Screenshot: Packages completed]

[Screenshot: Congratulations, installation is complete]

Let’s remount our DVD so that we can copy all the RPMs from the DVD to a centralized location on the file system:
[Screenshot: Mounting the DVD]
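
If you prefer the command line, the mount step is roughly the following (the mount point is my choice; adjust the device name for your system):

mkdir -p /media/dvd
mount -o ro /dev/cdrom /media/dvd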

In order to set up a local yum repository, we need to install the createrepo package. The createrepo package has dependencies on two additional packages: deltarpm and python-deltarpm. To successfully install the createrepo package, we will invoke the rpm command with the -ihv option and provide the names of all three packages:
[Screenshot: Installing the packages with rpm]
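
The commands look roughly like this (the exact package file names depend on your media, hence the wildcards; run this from the Packages directory on the mounted DVD):

cd /media/dvd/Packages
rpm -ihv deltarpm-*.rpm python-deltarpm-*.rpm createrepo-*.rpm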

We have successfully installed the createrepo package. The next step will be to copy all the RPMs from the DVD to an area on the local file system. In my example, I am copying the files to the /tmp file system, but you will want to select a more permanent location. After the files are copied, we will invoke the createrepo command and provide the location of the directory the RPMs were copied to:
[Screenshot: Running createrepo]
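
Here is a sketch of those two steps, using the /tmp location from my example (a permanent path such as /var/repos would be a better choice):

mkdir -p /tmp/Packages
cp /media/dvd/Packages/*.rpm /tmp/Packages/
createrepo /tmp/Packages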

We have successfully created our local yum repository. Now we are ready to install and update packages with yum.
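
To point yum at the new repository, drop a small .repo file into /etc/yum.repos.d; the repository id, name, and baseurl below are my choices, so adjust them to match where you copied the RPMs:

cat > /etc/yum.repos.d/local.repo <<'EOF'
[local-dvd]
name=Local RHEL 6.4 DVD repository
baseurl=file:///tmp/Packages
enabled=1
gpgcheck=0
EOF

yum clean all
yum repolist
yum install screen     # any package from the media works as a test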