Tuesday, January 30, 2018

Manage the archivelogs using RMAN


You are told to back up your Oracle database and its archived redo logs with RMAN. If we back up archived redo logs with RMAN, we should ideally manage them with RMAN as well.
Here are some useful RMAN commands for archive log management.  

With this command we list all the archive logs currently known in the control file.

RMAN> list archivelog all;

With this command we back up all the archive logs currently known in the control file (and the control file is updated).

RMAN> backup archivelog all;

With this command we back up all the archive logs currently known in the control file that haven't been backed up at least once yet (and the control file is updated).

RMAN> backup archivelog all not backed up 1 times;

With this command we back up all the archive logs currently known in the control file and, once they are backed up, delete them (and the control file is updated).

RMAN> backup archivelog all delete input;

With this command we list all archive logs currently known in the control file and backed up at least once to disk.

RMAN> list archivelog all backed up 1 times to DISK;

List of Archived Log Copies for database with db_unique_name SILVER
=====================================================================
Key Thrd Seq S Low Time 
------- ---- ------- - ---------
1903 1 222 A 15-OCT-13
Name: /u02/fast_recovery_area/SILVER/archivelog/2013_10_22/o1_mf_1_222_96dzdz10_.arc
1904 1 223 A 22-OCT-13
Name: /u02/fast_recovery_area/SILVER/archivelog/2013_10_25/o1_mf_1_223_96njbc7c_.arc

With this command we delete all the archive logs currently known in the control file and backed up at least once to disk (and the control file is updated).

RMAN> delete archivelog all backed up 1 times to DISK;

With this command we delete all the archive logs currently known in the control file and backed up at least once to tape (and the control file is updated).

RMAN> delete archivelog all backed up 1 times to SBT_TAPE;

With this command we delete all the archive logs currently known in the control file and backed up at least once to disk, except those completed within the last 4 hours (and the control file is updated).

RMAN> delete archivelog all backed up 1 times to disk completed before 'sysdate - 4/24';

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=423 device type=DISK
List of Archived Log Copies for database with db_unique_name SILVER
=====================================================================
Key Thrd Seq S Low Time 
------- ---- ------- - ---------
1903 1 222 A 15-OCT-13
Name: /u02/fast_recovery_area/SILVER/archivelog/2013_10_22/o1_mf_1_222_96dzdz10_.arc

Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/u02/fast_recovery_area/SILVER/archivelog/2013_10_22/o1_mf_1_222_96dzdz10_.arc RECID=1903 STAMP=829495746
Deleted 1 objects

With this command we verify whether the archive logs currently known in the control file physically exist; archive logs that aren't found are flagged as EXPIRED.

RMAN> crosscheck archivelog all;

With this command we update the control file: archive logs that are physically gone and flagged as EXPIRED (see above) are removed from the control file repository.

RMAN> delete expired archivelog all;

With this command we verify whether the archive log for a specific redo log sequence physically exists (is available on disk).

RMAN> crosscheck archivelog sequence 87174;

With this command we verify whether there is a backup of the archive log for a specific redo log sequence.

RMAN> crosscheck backup of archivelog sequence 87174;
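
Putting these commands together, a typical nightly archive log housekeeping job could look like the sketch below (the 2-day on-disk retention window is only an example; choose a window that matches your own recovery requirements):

RMAN> run {
        backup database;
        backup archivelog all not backed up 1 times;
        delete noprompt archivelog all backed up 1 times to disk completed before 'sysdate - 2';
      }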

Wednesday, January 24, 2018

RAC Simplify Docs

RAC (Real Application Clusters) Commands

SRVCTL command

It can be divided into two categories:
      Database configuration tasks
      Database instance control tasks

display the registered databases
srvctl config database
status
srvctl status database -d <database>
srvctl status instance -d <database> -i <instance>
srvctl status nodeapps -n <node>
srvctl status service -d <database>
srvctl status asm -n <node>
stopping/starting
srvctl stop database -d <database>
srvctl stop instance -d <database> -i <instance>,<instance>
srvctl stop service -d <database> [-s <service>,<service>] [-i <instance>,<instance>]
srvctl stop nodeapps -n <node>
srvctl stop asm -n <node>

srvctl start database -d <database>
srvctl start instance -d <database> -i <instance>,<instance>
srvctl start service -d <database> -s <service>,<service> -i <instance>,<instance>
srvctl start nodeapps -n <node>
srvctl start asm -n <node>
adding/removing
srvctl add database -d <database> -o <oracle_home>
srvctl add instance -d <database> -i <instance> -n <node>
srvctl add service -d <database> -s <service> -r <preferred_list>
srvctl add nodeapps -n <node> -o <oracle_home> -A <name|ip>/netmask
srvctl add asm -n <node> -i <asm_instance> -o <oracle_home>

srvctl remove database -d <database> -o <oracle_home>
srvctl remove instance -d <database> -i <instance> -n <node>
srvctl remove service -d <database> -s <service> -r <preferred_list>
srvctl remove nodeapps -n <node> -o <oracle_home> -A <name|ip>/netmask
srvctl remove asm -n <node>





SERVICES
Services are used to manage the workload in Oracle RAC; the important features of services are:
      used to distribute the workload
      can be configured to provide high availability
      provide a transparent way to direct workload
The view v$services contains information about services that have been started on that instance (a simple example query is shown after the list below).
The main attributes of a service are described below:
      Goal - allows you to define a service goal using service time, throughput or none
      Connect Time Load Balancing Goal - listeners and mid-tier servers contain current information about service performance
      Distributed Transaction Processing - used for distributed transactions
      AQ_HA_Notifications - information about nodes being up or down will be sent to mid-tier servers via the Advanced Queuing mechanism
      Preferred and Available Instances - the preferred instances for a service, available ones are the backup instances
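As a quick sanity check from SQL*Plus, these attributes can be inspected directly in v$services (a minimal sketch; the exact column list varies between releases, so describe the view first if in doubt):

SQL> select name, network_name, goal, clb_goal
     from v$services
     order by name;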
You can administer services using the following tools
      DBCA
      EM (Enterprise Manager)
      DBMS_SERVICE (PL/SQL package)
      Server Control (srvctl)
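For the DBMS_SERVICE route, a minimal PL/SQL sketch is shown below (REPORTING_SVC is a made-up service name used only for illustration; on RAC the srvctl commands further down remain the usual way to place services on instances):

begin
  -- create a service and start it on the current instance
  dbms_service.create_service(service_name => 'REPORTING_SVC',
                              network_name => 'REPORTING_SVC');
  dbms_service.start_service(service_name => 'REPORTING_SVC');
end;
/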
Two services are created when the database is first installed; these services run all the time and cannot be disabled.
      sys$background - used by an instance's background processes only
      sys$users - when users connect to the database without specifying a service they use this service
add
srvctl add service -d D01 -s BATCH_SERVICE -r node1,node2 -a node3

Note: the options are described below

-d - the database
-s - the service
-r - the service will run on these nodes (preferred)
-a - if the nodes in the -r list are not available, run on this node
remove
srvctl remove service -d D01 -s BATCH_SERVICE
start
srvctl start service -d D01 -s BATCH_SERVICE
stop
srvctl stop service -d D01 -s BATCH_SERVICE
status
srvctl status service -d D01 -s BATCH_SERVICE

Cluster Ready Services (CRS)
CRS is Oracle's clusterware software; you can use it alongside third-party clusterware, although the third-party software is not required (apart from on HP Tru64).
CRS starts automatically when the server starts; you should only stop it in the following situations:
      Applying a patch set to $ORA_CRS_HOME
      O/S maintenance
      Debugging CRS problems
CRS Administration
starting
## Starting CRS using Oracle 10g R1
not possible
## Starting CRS using Oracle 10g R2
$ORA_CRS_HOME/bin/crsctl start crs
stopping
## Stopping CRS using Oracle 10g R1
srvctl stop database -d <database>
srvctl stop asm -n <node>
srvctl stop nodeapps -n <node>
/etc/init.d/init.crs stop

## Stopping CRS using Oracle 10g R2
$ORA_CRS_HOME/bin/crsctl stop crs
disabling/enabling
## prevent or re-enable CRS restarting after a reboot (the setting persists across reboots)

## Oracle 10g R1
/etc/init.d/init.crs [disable|enable]

## Oracle 10g R2
$ORA_CRS_HOME/bin/crsctl [disable|enable] crs
checking
$ORA_CRS_HOME/bin/crsctl check crs
$ORA_CRS_HOME/bin/crsctl check evmd
$ORA_CRS_HOME/bin/crsctl check cssd
$ORA_CRS_HOME/bin/crsctl check crsd
$ORA_CRS_HOME/bin/crsctl check install -wait 600
Resource Applications (CRS Utilities)
status
$ORA_CRS_HOME/bin/crs_stat
$ORA_CRS_HOME/bin/crs_stat -t
$ORA_CRS_HOME/bin/crs_stat -ls
$ORA_CRS_HOME/bin/crs_stat -p

Note:
-t more readable display
-ls permission listing
-p parameters
create profile
$ORA_CRS_HOME/bin/crs_profile
register/unregister application
$ORA_CRS_HOME/bin/crs_register
$ORA_CRS_HOME/bin/crs_unregister
Start/Stop an application
$ORA_CRS_HOME/bin/crs_start
$ORA_CRS_HOME/bin/crs_stop
Resource permissions
$ORA_CRS_HOME/bin/crs_getparam
$ORA_CRS_HOME/bin/crs_setparam
Relocate a resource
$ORA_CRS_HOME/bin/crs_relocate
Nodes
member number/name
olsnodes -n

Note: the olsnodes command is located in $ORA_CRS_HOME/bin
local node name
olsnodes -l
activates logging
olsnodes -g
Oracle Interfaces
display
oifcfg getif
delete
oifcfg delif -global
set
oifcfg setif -global <interface name>/<subnet>:public
oifcfg setif -global <interface name>/<subnet>:cluster_interconnect
Global Services Daemon Control
starting
gsdctl start
stopping
gsdctl stop
status
gsdctl status
Cluster Configuration (clscfg is used during installation)
create a new configuration
clscfg -install

Note: the clscfg command is located in $ORA_CRS_HOME/bin
upgrade or downgrade an existing configuration
clscfg -upgrade
clscfg -downgrade
add or delete a node from the configuration
clscfg -add
clscfg -delete
create a special single-node configuration for ASM
clscfg -local
brief listing of terminology used in the other modes
clscfg -concepts
used for tracing
clscfg -trace
help
clscfg -h
Cluster Name Check
print cluster name
cemutlo -n

Note: in Oracle 9i the utility was called "cemutls"; the command is located in $ORA_CRS_HOME/bin
print the clusterware version
cemutlo -w

Note: in Oracle 9i the utility was called "cemutls"
Node Scripts
Add Node
addnode.sh

Note: see adding and deleting nodes
Delete Node
deletenode.sh

Note: see adding and deleting nodes


Oracle Cluster Registry (OCR)
OCR is the registry that contains information
      Node list
      Node membership mapping
      Database instance, node and other mapping information
      Characteristics of any third-party applications controlled by CRS
The file location is specified during installation; the file pointer indicating the OCR device location is ocr.loc, which can be found in one of the following locations
      linux - /etc/oracle
      solaris - /var/opt/oracle
The file contents look something like the example below, taken from my installation
ocr.loc
ocrconfig_loc=/u02/oradata/racdb/OCRFile
ocrmirrorconfig_loc=/u02/oradata/racdb/OCRFile_mirror
local_only=FALSE
OCR is important to the RAC environment and any problems must be actioned immediately; the commands below are located in $ORA_CRS_HOME/bin
OCR Utilities
log file
$ORA_HOME/log/<hostname>/client/ocrconfig_<pid>.log
checking
ocrcheck

Note: will return the OCR version, total space allocated, space used, free space, location of each device and the result of the integrity check
dump contents
ocrdump

Note: by default it dumps the contents into a file named OCRDUMPFILE in the current directory
export/import
ocrconfig -export <file>

ocrconfig -import <file>
backup/restore
# show backups
ocrconfig -showbackup

# to change the location of the backup, you can even specify an ASM disk
ocrconfig -backuploc <path|+asm>

# perform a backup, will use the location specified by the -backuploc location
ocrconfig -manualbackup

# perform a restore
ocrconfig -restore <file>

# delete a backup
ocrconfig -delete <file>

Note: there are many more options; see the ocrconfig help (ocrconfig -help) for the full list
add/remove/replace
## add/relocate the ocrmirror file to the specified location
ocrconfig -replace ocrmirror '/ocfs2/ocr2.dbf'

## relocate an existing OCR file
ocrconfig -replace ocr '/ocfs1/ocr_new.dbf'

## remove the OCR or OCRMirror file
ocrconfig -replace ocr
ocrconfig -replace ocrmirror


Voting Disk
The voting disk is used to resolve membership issues in the event of a partitioned cluster; it protects data integrity.
querying
crsctl query css votedisk
adding
crsctl add css votedisk <file>
deleting
crsctl delete css votedisk <file>




Backup and Recovery of Votedisk and OCR:
1. Vote Disk
$ crsctl query css votedisk
$ dd if=/ustrac12/crsdata/votedisk1 of=/u02/env/votedisk1

Remove Votedisks
$ rm -rf /ustrac12/crsdata/votedisk*

Restore
$ dd if=/u02/env/votedisk1 of=/ustrac12/crsdata/votedisk1
# crsctl start crs
Add Voting Disk
# crsctl add css votedisk <new voting disk path>
Remove Voting Disk
# crsctl delete css votedisk <old voting disk path>
Add Voting Disk (force)
# crsctl add css votedisk <new voting disk path> -force
Remove Voting Disk (force)
# crsctl delete css votedisk <old voting disk path> -force
2. OCR Files
$ ocrconfig -showbackup
$ ocrconfig -backuploc <newlocation>

Logical Backup
$ ocrconfig -export <newlocation>

2.1 Recover using Physical Backups
Locate the physical backup and, optionally, review its contents
$ ocrconfig -showbackup
$ ocrdump -backupfile <backup file>
Stop CRS on all nodes
# crsctl stop crs
Restore the physical backup
# ocrconfig -restore <backup file>   (e.g. day.ocr)
Restart CRS on all nodes
# crsctl start crs
Check CRS integrity
$ cluvfy comp ocr -n all
2.2 Recover using Logical Backups
1. Locate the logical backup (the file created with ocrconfig -export)
2. Stop CRS on all nodes
   # crsctl stop crs
3. Restore the OCR from the export
   # ocrconfig -import /shared/export/ocrback.dmp
4. Restart CRS, then check CRS integrity
   $ cluvfy comp ocr -n all
2.3 Replace the OCR Mirror
# ocrcheck
# ocrconfig -replace ocrmirror /oradata/OCR2

Change Public/Interconnect IP Subnet Configuration:
$ <CRS HOME>/bin/oifcfg getif
eth0 139.2.156.0 global public
eth1 192.168.0.0 global cluster_interconnect

$ oifcfg delif -global eth0
$ oifcfg setif -global eth0/139.2.166.0:public
$ oifcfg delif -global eth1
$ oifcfg setif -global eth1/192.168.1.0:cluster_interconnect

$ oifcfg getif
eth0 139.2.166.0 global public
eth1 192.168.1.0 global cluster_interconnect

Diagnostic Collection
# export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
# export ORA_CRS_HOME=/u01/crs1020
# export ORACLE_BASE=/u01/app/oracle
# cd $ORA_CRS_HOME/bin
# ./diagcollection.pl -collect