RAC Grid Infrastructure Startup Sequence and Important RAC Log File Locations
The following is the RAC Grid Infrastructure startup sequence.
On boot, init spawns init.ohasd (with respawn), which in turn starts the
OHASD process (Oracle High Availability Services Daemon). This daemon then
spawns four agent processes.
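On older releases this respawn hook lives in /etc/inittab; the sketch below shows what that entry looks like. The entry id, runlevels, and the systemd replacement vary by release and platform, so treat the specifics as assumptions:

```shell
# 11.2-era /etc/inittab entry that makes init respawn OHASD (illustrative;
# on systemd hosts an oracle-ohasd service unit plays this role instead):
INITTAB_ENTRY='h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null'
echo "$INITTAB_ENTRY"

# On a live cluster node you could confirm the daemon chain with:
#   ps -ef | grep -v grep | grep -E 'init\.ohasd|ohasd\.bin'
```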
Level 1: OHASD Spawns:
cssdagent - Agent responsible for spawning CSSD.
orarootagent - Agent responsible for managing all root owned ohasd
resources.
oraagent - Agent responsible for managing all oracle owned ohasd
resources.
cssdmonitor - Monitors CSSD and node health (along with the
cssdagent).
Level 2: OHASD rootagent spawns:
CRSD - Primary daemon responsible for managing cluster resources.
CTSSD - Cluster Time Synchronization Services Daemon
Diskmon
ACFS (ASM Cluster File System) Drivers
Level 2: OHASD oraagent spawns:
MDNSD - Multicast DNS daemon, used for DNS lookups within the cluster
GIPCD - Used for inter-process and inter-node communication
GPNPD - Grid Plug & Play Profile Daemon
EVMD - Event Monitor Daemon
ASM - Resource for monitoring ASM instances
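The Level 1 and Level 2 daemons above are registered with OHASD itself rather than with CRSD, so they are listed with crsctl's -init flag. A sketch, assuming a conventional GRID_HOME path (substitute your own install location):

```shell
# GRID_HOME below is an assumed install path; substitute your own.
GRID_HOME=${GRID_HOME:-/u01/app/12.1.0/grid}
CRSCTL="$GRID_HOME/bin/crsctl"

# Command to list OHASD-managed (lower-stack) resources; run it on a node:
echo "$CRSCTL stat res -init -t"
# Typical entries: ora.cssd, ora.cssdmonitor, ora.ctssd, ora.crsd,
# ora.evmd, ora.gipcd, ora.gpnpd, ora.mdnsd, ora.asm, ora.diskmon
```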
Level 3: CRSD spawns:
orarootagent - Agent responsible for managing all root owned crsd
resources.
oraagent - Agent responsible for managing all oracle owned crsd
resources.
Level 4: CRSD rootagent spawns:
Network resource - To monitor the public network
SCAN VIP(s) - Single Client Access Name Virtual IPs
Node VIPs - One per node
ACFS Registry - For mounting the ASM Cluster File System
GNS VIP (optional) - VIP for GNS
Level 4: CRSD oraagent spawns:
ASM Resource - ASM instance(s) resource
Diskgroup - Used for managing/monitoring ASM diskgroups.
DB Resource - Used for monitoring and managing the DB and instances
SCAN Listener - Listener for single client access name, listening on
SCAN VIP
Listener - Node listener listening on the Node VIP
Services - Used for monitoring and managing services
ONS - Oracle Notification Service
eONS - Enhanced Oracle Notification Service
GSD - For 9i backward compatibility
GNS (optional) - Grid Naming Service - Performs name resolution
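The Level 4 resources above are the CRSD-managed cluster resources. A sketch of viewing them, again assuming a GRID_HOME path (crsctl stat res -t is the standard listing command, but verify the output format on your release):

```shell
GRID_HOME=${GRID_HOME:-/u01/app/12.1.0/grid}   # assumed install path
CMD="$GRID_HOME/bin/crsctl stat res -t"

# Run on a cluster node to see VIPs, listeners, diskgroups, databases, etc.:
echo "$CMD"
# Filter for one resource type, e.g. the SCAN listeners:
echo "$CMD | grep -i scan"
```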
Important Log Locations
Clusterware daemon logs are all under <GRID_HOME>/log/<nodename>.
Structure under <GRID_HOME>/log/<nodename>:
alert<NODENAME>.log - look here first for most clusterware issues
./admin:
./agent:
./agent/crsd:
./agent/crsd/oraagent_oracle:
./agent/crsd/ora_oc4j_type_oracle:
./agent/crsd/orarootagent_root:
./agent/ohasd:
./agent/ohasd/oraagent_oracle:
./agent/ohasd/oracssdagent_root:
./agent/ohasd/oracssdmonitor_root:
./agent/ohasd/orarootagent_root:
./client:
./crsd:
./cssd:
./ctssd:
./diskmon:
./evmd:
./gipcd:
./gnsd:
./gpnpd:
./mdnsd:
./ohasd:
./racg:
./racg/racgeut:
./racg/racgevtf:
./racg/racgmain:
./srvm:
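When chasing a clusterware problem, the alert log at the top of this tree is the usual starting point. A sketch of building and tailing that path; the GRID_HOME default and hostname-derived node name are assumptions, so confirm the actual directory on your node:

```shell
GRID_HOME=${GRID_HOME:-/u01/app/12.1.0/grid}   # assumed install path
NODENAME=${NODENAME:-$(hostname -s)}
ALERT_LOG="$GRID_HOME/log/$NODENAME/alert$NODENAME.log"
echo "$ALERT_LOG"

# On a live node:
#   tail -100f "$ALERT_LOG"
# then drill into the per-daemon directories (crsd/, cssd/, agent/...)
# that the alert entries point you at.
```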
The cfgtoollogs directories under <GRID_HOME> and $ORACLE_BASE contain other
important log files, specifically those for rootcrs.pl and for configuration
assistants such as ASMCA.
ASM logs live under $ORACLE_BASE/diag/asm/+asm/<ASM Instance
Name>/trace
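A sketch of composing that ASM trace path; the ORACLE_BASE default and the instance name +ASM1 are assumptions (on node 2 the instance would typically be +ASM2):

```shell
ORACLE_BASE=${ORACLE_BASE:-/u01/app/grid}   # assumed Grid owner ORACLE_BASE
ASM_SID=${ASM_SID:-+ASM1}                   # assumed ASM instance name
ASM_TRACE="$ORACLE_BASE/diag/asm/+asm/$ASM_SID/trace"
echo "$ASM_TRACE"

# The ASM alert log inside it follows the alert_<SID>.log convention:
echo "$ASM_TRACE/alert_$ASM_SID.log"
```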
The diagcollection.pl script under <GRID_HOME>/bin can be used to
automatically collect the important files for Oracle Support. Run it as the
root user.
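A sketch of invoking it; --collect is the commonly documented option, but treat the exact flags as assumptions and check the script's help output on your release:

```shell
GRID_HOME=${GRID_HOME:-/u01/app/12.1.0/grid}   # assumed install path

# Run as root on the affected node; archives are written to the current
# directory, so cd somewhere with free space first:
CMD="cd /tmp && $GRID_HOME/bin/diagcollection.pl --collect"
echo "$CMD"
```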
For 12.1.0.2, the Grid Infrastructure log file location changed from
<GRID_HOME>/log/$HOSTNAME to $GRID_BASE/diag/crs/$HOSTNAME/crs/trace.
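A sketch of the 12.1.0.2+ location; GRID_BASE here means the Grid Infrastructure owner's ORACLE_BASE, and its default path below is an assumption:

```shell
GRID_BASE=${GRID_BASE:-/u01/app/grid}   # assumed Grid owner's ORACLE_BASE
HOSTN=${HOSTN:-$(hostname -s)}
TRACE_DIR="$GRID_BASE/diag/crs/$HOSTN/crs/trace"
echo "$TRACE_DIR"

# On a 12.1.0.2+ node, list the most recently written trace files:
#   ls -lt "$TRACE_DIR" | head
```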