Wednesday, January 24, 2018

RAC (Real Application Clusters) Commands

SRVCTL command

It can be divided into two categories
      Database configuration tasks
      Database instance control tasks

display the registered databases
srvctl config database
srvctl status database -d <database>
srvctl status instance -d <database> -i <instance>
srvctl status nodeapps -n <node>
srvctl status service -d <database>
srvctl status asm -n <node>
srvctl stop database -d <database>
srvctl stop instance -d <database> -i <instance>,<instance>
srvctl stop service -d <database> [-s <service>,<service>] [-i <instance>,<instance>]
srvctl stop nodeapps -n <node>
srvctl stop asm -n <node>

srvctl start database -d <database>
srvctl start instance -d <database> -i <instance>,<instance>
srvctl start service -d <database> -s <service>,<service> -i <instance>,<instance>
srvctl start nodeapps -n <node>
srvctl start asm -n <node>
srvctl add database -d <database> -o <oracle_home>
srvctl add instance -d <database> -i <instance> -n <node>
srvctl add service -d <database> -s <service> -r <preferred_list>
srvctl add nodeapps -n <node> -o <oracle_home> -A <name|ip>/network
srvctl add asm -n <node> -i <asm_instance> -o <oracle_home>
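The add commands above lend themselves to scripting when registering a database across several nodes. A minimal sketch, assuming a hypothetical database ORCL with instances on node1/node2 and an example ORACLE_HOME path; it only prints the srvctl commands it would run, so you can review them before piping the output to sh:

```shell
#!/bin/sh
# Sketch: print the srvctl commands that would register a database
# and its instances. Names (ORCL, node1/node2, the ORACLE_HOME path)
# are placeholders - substitute your own, then pipe the output to sh.
register_db() {
    db=$1; oh=$2; shift 2
    echo "srvctl add database -d $db -o $oh"
    i=1
    for node in "$@"; do
        # instance names follow the usual <db><n> convention
        echo "srvctl add instance -d $db -i ${db}${i} -n $node"
        i=$((i + 1))
    done
}

register_db ORCL /u01/app/oracle/product/10.2.0/db_1 node1 node2
```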

srvctl remove database -d <database> -o <oracle_home>
srvctl remove instance -d <database> -i <instance> -n <node>
srvctl remove service -d <database> -s <service> -r <preferred_list>
srvctl remove nodeapps -n <node> -o <oracle_home> -A <name|ip>/network
srvctl remove asm -n <node>

Services are used to manage the workload in Oracle RAC. The important features of services are
      used to distribute the workload
      can be configured to provide high availability
      provide a transparent way to direct workload
The view v$services contains information about the services that have been started on an instance. Its key attributes are described below
      Goal - allows you to define a service goal using service time, throughput or none
      Connect Time Load Balancing Goal - listeners and mid-tier servers contain current information about service performance
      Distributed Transaction Processing - used for distributed transactions
      AQ_HA_Notifications - information about nodes being up or down will be sent to mid-tier servers via the advanced queuing (AQ) mechanism
      Preferred and Available Instances - the preferred instances for a service, available ones are the backup instances
You can administer services using the following tools
      EM (Enterprise Manager)
      Server Control (srvctl)
Two internal services are created when the database is first created; these services are running all the time and cannot be disabled.
      sys$background - used by an instance's background processes only
      sys$users - when users connect to the database without specifying a service they use this service
srvctl add service -d D01 -s BATCH_SERVICE -r node1,node2 -a node3

Note: the options are described below

-d - the database
-s - the service
-r - the service will run on these nodes (preferred instances)
-a - if the nodes in the -r list are not available, the service runs on this node instead (available instance)
srvctl remove service -d D01 -s BATCH_SERVICE
srvctl start service -d D01 -s BATCH_SERVICE
srvctl stop service -d D01 -s BATCH_SERVICE
srvctl status service -d D01 -s BATCH_SERVICE

Cluster Ready Services (CRS)
CRS is Oracle's own clusterware software; you can run it alongside third-party clusterware, but third-party clusterware is not required (except on HP Tru64, where it is).
CRS starts automatically when the server starts; you should only stop this service in the following situations
      Applying a patch set to $ORA_CRS_HOME
      O/S maintenance
      Debugging CRS problems
CRS Administration
## Starting CRS using Oracle 10g R1
not possible
## Starting CRS using Oracle 10g R2
$ORA_CRS_HOME/bin/crsctl start crs
## Stopping CRS using Oracle 10g R1
srvctl stop database -d <database>
srvctl stop asm -n <node>
srvctl stop nodeapps -n <node>
/etc/init.d/init.crs stop

## Stopping CRS using Oracle 10g R2
$ORA_CRS_HOME/bin/crsctl stop crs
## prevent CRS from restarting after a reboot (the setting persists across reboots)

## Oracle 10g R1
/etc/init.d/init.crs [disable|enable]

## Oracle 10g R2
$ORA_CRS_HOME/bin/crsctl [disable|enable] crs
$ORA_CRS_HOME/bin/crsctl check crs
$ORA_CRS_HOME/bin/crsctl check evmd
$ORA_CRS_HOME/bin/crsctl check cssd
$ORA_CRS_HOME/bin/crsctl check crsd
$ORA_CRS_HOME/bin/crsctl check install -wait 600
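The individual check commands above can be combined into a single health pass. A minimal sketch; the CRSCTL variable is overridable so the function can be exercised without a live cluster, and it defaults to the usual binary location:

```shell
#!/bin/sh
# Sketch: run the crsctl health checks in one pass and report failures.
# CRSCTL is overridable (e.g. for testing); defaults to the real binary.
CRSCTL=${CRSCTL:-$ORA_CRS_HOME/bin/crsctl}

check_stack() {
    rc=0
    for daemon in crs cssd evmd crsd; do
        if $CRSCTL check $daemon >/dev/null 2>&1; then
            echo "$daemon: OK"
        else
            echo "$daemon: FAILED"
            rc=1
        fi
    done
    return $rc
}

check_stack
```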
Resource Applications (CRS Utilities)
$ORA_CRS_HOME/bin/crs_stat -t
$ORA_CRS_HOME/bin/crs_stat -ls
$ORA_CRS_HOME/bin/crs_stat -p

-t more readable display
-ls permission listing
-p parameters
create a profile
$ORA_CRS_HOME/bin/crs_profile
register/unregister an application
$ORA_CRS_HOME/bin/crs_register <resource>
$ORA_CRS_HOME/bin/crs_unregister <resource>
start/stop an application
$ORA_CRS_HOME/bin/crs_start <resource>
$ORA_CRS_HOME/bin/crs_stop <resource>
resource permissions
$ORA_CRS_HOME/bin/crs_getperm <resource>
$ORA_CRS_HOME/bin/crs_setperm <resource>
relocate a resource
$ORA_CRS_HOME/bin/crs_relocate <resource>
member number/name
olsnodes -n

Note: the olsnodes command is located in $ORA_CRS_HOME/bin
local node name
olsnodes -l
activates logging
olsnodes -g
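Because `olsnodes -n` prints one "name number" pair per line, its output is easy to post-process in scripts. A minimal sketch that pulls out just the node names, using inlined sample output since olsnodes needs a running cluster:

```shell
#!/bin/sh
# Sketch: extract node names from `olsnodes -n` output.
# sample_output stands in for a real `olsnodes -n` run, which
# prints one "name number" pair per line.
node_names() {
    awk '{ print $1 }'
}

sample_output="node1 1
node2 2"

echo "$sample_output" | node_names
```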
Oracle Interfaces
oifcfg getif
oifcfg delif -global
oifcfg setif -global <interface name>/<subnet>:public
oifcfg setif -global <interface name>/<subnet>:cluster_interconnect
Global Services Daemon Control
gsdctl start
gsdctl stop
gsdctl status
Cluster Configuration (clscfg is used during installation)
create a new configuration
clscfg -install

Note: the clscfg command is located in $ORA_CRS_HOME/bin
upgrade or downgrade an existing configuration
clscfg -upgrade
clscfg -downgrade
add or delete a node from the configuration
clscfg -add
clscfg -delete
create a special single-node configuration for ASM
clscfg -local
brief listing of the terminology used by the other modes
clscfg -concepts
used for tracing
clscfg -trace
clscfg -h
Cluster Name Check
print cluster name
cemutlo -n

Note: in Oracle 9i the utility was called "cemutls"; the command is located in $ORA_CRS_HOME/bin
print the clusterware version
cemutlo -w

Note: in Oracle 9i the utility was called "cemutls"
Node Scripts
Add Node

Note: see the section on adding and deleting nodes
Delete Node

Note: see the section on adding and deleting nodes

Oracle Cluster Registry (OCR)
OCR is the registry that contains information
      Node list
      Node membership mapping
      Database instance, node and other mapping information
      Characteristics of any third-party applications controlled by CRS
The file location is specified during the installation; the pointer indicating the OCR device location is the ocr.loc file, which can be found in either of the following locations
      linux - /etc/oracle
      solaris - /var/opt/oracle
The contents of this file simply point at the OCR (and any mirror) devices.
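A typical ocr.loc looks something like this (the device paths below are illustrative only, not from any particular system):

```
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw2
local_only=FALSE
```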
OCR is important to the RAC environment and any problems must be actioned immediately; the commands below are located in $ORA_CRS_HOME/bin
OCR Utilities
check the OCR (version, space, integrity)
ocrcheck

Note: returns the OCR version, total space allocated, space used, free space, the location of each device and the result of the integrity check
dump the contents
ocrdump

Note: by default it dumps the contents into a file named OCRDUMPFILE in the current directory
# export the OCR contents (logical backup)
ocrconfig -export <file>

# restore from an export
ocrconfig -import <file>
# show backups
ocrconfig -showbackup

# to change the location of the backup, you can even specify a ASM disk
ocrconfig -backuploc <path|+asm>

# perform a backup, will use the location specified by the -backuploc location
ocrconfig -manualbackup

# perform a restore
ocrconfig -restore <file>

# delete a backup
ocrconfig -delete <file>

Note: there are many more options, see the ocrconfig man page
## add/relocate the ocrmirror file to the specified location
ocrconfig -replace ocrmirror '/ocfs2/ocr2.dbf'

## relocate an existing OCR file
ocrconfig -replace ocr '/ocfs1/ocr_new.dbf'

## remove the OCR or OCRMirror file
ocrconfig -replace ocr
ocrconfig -replace ocrmirror
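A date-stamped wrapper around -export makes regular logical backups easier to keep apart. A minimal sketch; the OCRCONFIG variable is overridable, and /backup/ocr is only an example directory:

```shell
#!/bin/sh
# Sketch: take a date-stamped logical OCR export.
# OCRCONFIG is overridable (e.g. for testing); /backup/ocr is an
# example directory - substitute your own backup location.
OCRCONFIG=${OCRCONFIG:-$ORA_CRS_HOME/bin/ocrconfig}

export_ocr() {
    dir=$1
    file="$dir/ocr_export_$(date +%Y%m%d).dmp"
    echo "exporting OCR to $file"
    $OCRCONFIG -export "$file"
}

export_ocr /backup/ocr
```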

Voting Disk
The voting disk is used to resolve membership issues in the event of a partitioned cluster; it protects data integrity.
crsctl query css votedisk
crsctl add css votedisk <file>
crsctl delete css votedisk <file>

Backup and Recovery of the Voting Disk and OCR:
1. Voting Disk
# list the voting disks
crsctl query css votedisk
# back up a voting disk with dd
dd if=/ustrac12/crsdata/votedisk1 of=/u02/env/votedisk1
# remove the voting disks
rm -rf /ustrac12/crsdata/votedisk*
# restore from the dd backup
dd if=/u02/env/votedisk1 of=/ustrac12/crsdata/votedisk1
crsctl start crs
# add a voting disk
crsctl add css votedisk <new voting disk path>
# remove a voting disk
crsctl delete css votedisk <old voting disk path>
# add a voting disk (force)
crsctl add css votedisk <new voting disk path> -force
# remove a voting disk (force)
crsctl delete css votedisk <old voting disk path> -force
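The dd backup above can be applied to each voting disk in turn. A minimal sketch that only prints the dd commands for review (the paths are examples); remove the echo to actually copy:

```shell
#!/bin/sh
# Sketch: print dd commands to back up each voting disk to a target
# directory. Paths are examples; remove the echo to actually copy.
backup_votedisks() {
    dest=$1; shift
    for vd in "$@"; do
        echo "dd if=$vd of=$dest/$(basename $vd).bak"
    done
}

backup_votedisks /u02/env /ustrac12/crsdata/votedisk1
```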
2. OCR Files
# list the automatic backups
ocrconfig -showbackup
# change the backup location
ocrconfig -backuploc <new location>
# take a logical backup (export)
ocrconfig -export <new location>

2.1 Recover using physical backups
# locate a backup
ocrconfig -showbackup
# stop CRS on all nodes
crsctl stop crs
# restore the physical backup
ocrconfig -restore <backup file>
# restart CRS on all nodes, then check CRS integrity
cluvfy comp ocr -n all

2.2 Recover using logical backups
# locate the logical backup (export file), then stop CRS on all nodes
crsctl stop crs
# restore the OCR from the export
ocrconfig -import /shared/export/ocrback.dmp
# restart CRS on all nodes, then check CRS integrity
cluvfy comp ocr -n all

2.3 Replace a mirror
ocrcheck
ocrconfig -replace ocrmirror /oradata/OCR2

Change Public/Interconnect IP Subnet Configuration:
$ <CRS HOME>/bin/oifcfg getif
          eth0 global public
          eth1 global cluster_interconnect
$ oifcfg delif -global eth0
$ oifcfg setif -global eth0/<subnet>:public
$ oifcfg delif -global eth1
$ oifcfg setif -global eth1/<subnet>:cluster_interconnect
$ oifcfg getif
          eth0 global public
          eth1 global cluster_interconnect

Diagnostic Collection
# export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
# export ORA_CRS_HOME=/u01/crs1020
# export ORACLE_BASE=/u01/app/oracle
# cd $ORA_CRS_HOME/bin
# ./diagcollection.pl --collect
