Wednesday, January 25, 2017

Prerequisite check “CheckActiveFilesAndExecutables” failed (files are active) in Patching activity.



While applying a patch with the opatch command on Windows, you may get the following error even after a clean shutdown of the Oracle services:
Prerequisite check “CheckActiveFilesAndExecutables” failed
Following files are active :
d:\oracle\SID\11203\bin\oracommon11.dll
d:\oracle\SID\11203\bin\oraclient11.dll
d:\oracle\SID\11203\dbhome_1\bin\orageneric11.dll
d:\oracle\SID\11203\bin\orapls11.dll
d:\oracle\SID\11203\bin\oran11.dll
d:\oracle\SID\11203\bin\oraxml11.dll
d:\oracle\SID\11203\bin\oci.dll
d:\oracle\SID\11203\bin\orannzsbb11.dll
d:\oracle\SID\11203\bin\oraasmclnt11.dll
INFO:Prerequisite check “CheckActiveFilesAndExecutables” failed.
OPatch failed with error code = 74/44
This is a common error when applying patches on a Windows system. To find out which process is holding a lock on these files, use the tasklist command:
Tasklist /m ora*
Run it in a cmd window; it will list every process that has an ora* module (DLL) loaded.
Note:
1. The tasklist command relies on the Winmgmt (Windows Management Instrumentation) service. If you disabled this service earlier (as suggested in some patch readme files), you need to enable and start it before tasklist will work.
2. Always run cmd.exe as administrator so that you can also see tasks opened by other users.
If you find that the ora* files are held by some application process such as java.exe, you can kill that process:
taskkill /pid processid_number
Note: Before firing the kill command, check what the effect of killing that process will be. Killing a Windows system process can bring the server down, so always check the impact first.

Source: Sandeep Singh
https://smarttechways.com/2015/03/09/prerequisite-check-checkactivefilesandexecutables-failed-files-are-active-in-patching-activity/comment-page-1/#comment-531

Thursday, January 19, 2017

Step To Generate AWR Report

Step 1: Go to $ORACLE_HOME/rdbms/admin

Step 2: Run command ls -lrt awr*

Step 3: Connect to sqlplus

Step 4: From the SQL prompt, run awrrpt.sql (or awrrpti.sql to report on a specific instance)

             @awrrpt.sql (if you are in $ORACLE_HOME/rdbms/admin)

             or, if you are at any other location:

              SQL>   @$ORACLE_HOME/rdbms/admin/awrrpt.sql

Step 5: It will now ask whether you want the report as 'HTML' or 'TEXT' (choose one).

             SQL> @awrrpt.sql

            Current Instance
            ~~~~~~~~~~~~~~~~

            DB Id        DB Name    Inst Num   Instance
            -----------  ---------  ---------  ----------
            2735184012   ABC        1          ABC


            Specify the Report Type
             ~~~~~~~~~~~~~~~~~~~~~~~
           Would you like an HTML report, or a plain text report?
           Enter 'html' for an HTML report, or 'text' for plain text
           Defaults to 'html'
           Enter value for report_type: html   (I am selecting HTML)


Step 6: Select the number of days you want to go back, or just hit Enter to list all completed snapshots.

           For Example

          Specify the number of days of snapshots to choose from
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
          Entering the number of days (n) will result in the most recent
          (n) days of snapshots being listed.  Pressing <return> without
          specifying a number lists all completed snapshots.


           Enter value for num_days:

          (just Hit enter to get all snapshot details)


Step 7: Then specify Begin and End snapshot Ids.
 
            For E.g

            Specify the Begin and End Snapshot Ids
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Enter value for begin_snap: 20429
            Begin Snapshot Id specified: 20429

            Enter value for end_snap: 20431
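
            If you are not sure which snapshot IDs cover your problem window, you can list them from DBA_HIST_SNAPSHOT before answering the prompts; a minimal query sketch (assuming you have access to the DBA_HIST views):

            SQL> select snap_id,
                        to_char(begin_interval_time,'DD-MON-YY HH24:MI') begin_time,
                        to_char(end_interval_time,'DD-MON-YY HH24:MI') end_time
                 from dba_hist_snapshot
                 where begin_interval_time > sysdate - 2   -- last two days, adjust as needed
                 order by snap_id;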


Step 8: Here you specify the name of the report or select the default name assigned.

Step 9: The report gets generated.

Step 10: Exit SQLPLUS.

Step 11: Run ls -ltr <report_name> to see the new report file created in your current directory.

ORA:04031 ( Out Of Memory)


Hi, this is a very frequent issue in databases. Here I am sharing some options to analyse the root cause.


* First of all, check the trace file and the alert log; the trace file may point you to the root cause.

* This error can also come from a swap memory issue, so consider that as well.

* The issue can be due to shared pool fragmentation, which may re-occur as the load increases, sometimes within a day or two.

* A primary cause of ORA-04031 is hard parsing. Generate an AWR report for the problem time and check the Load Profile section for Hard Parses/Sec.

* As a further check, query the v$memory_resize_ops view for details of memory resize operations (look at COMPONENT, INITIAL_SIZE, TARGET_SIZE, START_TIME, END_TIME, ...). Memory resizing occurs when there is memory pressure.

SQL> desc v$memory_resize_ops;

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COMPONENT                                          VARCHAR2(64)
 OPER_TYPE                                          VARCHAR2(13)
 OPER_MODE                                          VARCHAR2(9)
 PARAMETER                                          VARCHAR2(80)
 INITIAL_SIZE                                       NUMBER
 TARGET_SIZE                                        NUMBER
 FINAL_SIZE                                         NUMBER
 STATUS                                             VARCHAR2(9)
 START_TIME                                         DATE
 END_TIME                                           DATE
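
For example, a minimal query sketch to look at the most recent resize operations (using the columns described above):

SQL> select component, oper_type, status,
            initial_size/1024/1024 as initial_mb,
            target_size/1024/1024 as target_mb,
            start_time
     from v$memory_resize_ops
     order by start_time desc;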






Saturday, January 7, 2017

ORA-01031: insufficient privileges and 'INSERT INTO SAP_SDBAH (BEG, FUNCT, SYSID, OBJ, RC, ENDE, ACTID, LINE) VALUES ...

Problem:
-------------

BR0051I BRBACKUP 7.20 (23)
BR0169W Value 'ibs=' of parameter/option 'dd_in_flags' ignored for 'Windows' - 'bs=' assumed
BR0055I Start of database backup: beizigjr.and 2012-07-14 22.00.27
BR0484I BRBACKUP log file: C:\oracle\ORCL\sapbackup\beizigjr.and
BR0280I BRBACKUP time stamp: 2012-07-14 22.00.27
BR0301E SQL error -1031 at location BrInitOraCreate-2, SQL statement:
'CONNECT sapsr3/****** AT PROF_CONN IN SYSOPER MODE'
ORA-01031: insufficient privileges
BR0303E Determination of Oracle version failed
BR0280I BRBACKUP time stamp: 2012-07-14 22.00.27
BR0301W SQL error -980 at location BrbDbLogOpen-5, SQL statement:
'INSERT INTO SAP_SDBAH (BEG, FUNCT, SYSID, OBJ, RC, ENDE, ACTID, LINE) VALUES ('20120714220027', 'and', 'ORCL', ' ', '9999', ' ', 'beizigjr', '7.20 (23)')'
ORA-00980: synonym translation is no longer valid
BR0324W Insertion of database log header failed


Solution:
--------------
1) The backup details are stored in the tables SDBAH and SDBAD, so create these tables first.

create table sapsr3.sdbah (beg varchar2(14) default ' ' not null, funct varchar2(3) default ' ' not null,sysid varchar2(8) default ' ' not null,obj varchar2(16) default ' ' not null,rc varchar2(4) default '0000' not null,ende varchar2(14) default ' ' not null,actid varchar2(16) default ' ' not null,line varchar2(254) default ' ' not null) tablespace psapdba storage (initial 64K next 64K pctincrease 0 minextents 1 maxextents 100);

create table sapsr3.sdbad (beg varchar2(14) default ' ' not null, funct varchar2(3) default ' ' not null,sysid varchar2(8) default ' ' not null,pos varchar2(4) default '0000' not null,line varchar2(254) default ' ' not null) tablespace psapdba storage (initial 256K next 256K pctincrease 0 minextents 1 maxextents 100);

create unique index sapsr3.sdbah__0 on sapsr3.sdbah (beg, funct) tablespace psapdba storage (initial 16K next 16K pctincrease 0 minextents 1 maxextents 100);

create unique index sapsr3.sdbad__0 on sapsr3.sdbad (beg, funct, pos) tablespace psapdba storage (initial 64K next 64K pctincrease 0 minextents 1 maxextents 100);

2 ) 'CONNECT sapsr3/****** AT PROF_CONN IN SYSOPER MODE' ORA-01031: insufficient privileges

This error occurs because the user SAPSR3 does not have the SYSOPER privilege.

SQL> connect sapsr3/abc@1234 as sysdba;
Connected.
SQL> grant sysoper to sapsr3;
Grant succeeded.
SQL> connect sapsr3/satish as sysoper;
Connected.
SQL>
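
As a quick check (assuming a password file is in use), you can confirm the grant via V$PWFILE_USERS:

SQL> select username, sysdba, sysoper from v$pwfile_users;

SAPSR3 should now be listed with SYSOPER = TRUE.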

3) Run the sapconn_role.sql and sapdba_role.sql

SQL>@sapconn_role.sql
SQL>@sapdba_role.sql <SAPSchemaID>
For SAPSR3 the SAPSchemaID is SR3; for SAPDEV it is DEV.

These scripts create two roles, sapconn and sapdba. Assign both roles to sapsr3:
SQL> grant sapconn to sapsr3;
SQL> grant sapdba to sapsr3;

4) Run the profile script sapuprof_profile.sql (by default passwords expire after 180 days, so this profile is needed):
SQL> @sapuprof_profile.sql
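
Afterwards you can confirm the password-expiry setting from DBA_PROFILES; a minimal query sketch:

SQL> select profile, resource_name, limit
     from dba_profiles
     where resource_name = 'PASSWORD_LIFE_TIME';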

And the above error is resolved.



Please follow the below notes for some better understanding.

91216 - BRBACKUP-SAPDBA ORA-01031 Insufficient privileges
126248 - SAPDBA Check causes ora-01031
761745 - Oracle 9 ORA-01031 and ORA-06512 with DBMS_STATS
900525 - BRCONNECT fails with ORA-01031 at location BrDdartRead-1
1028220 - ORA-01031 Insufficient privileges despite SAPCONN role
834917 - Oracle Database 10g New database role SAPCONN
134592 - Importing the SAPDBA role (sapdba_role.sql)
320457 - Installing BR tools on a non-ABAP database

If the statistics collection fails after a kernel upgrade, we can also try this:

Go to /usr/sap/<SID>/SYS/exe/run and run:
sqlplus /nolog @sapdba_role.sql SR3     <-- for the ABAP schema
sqlplus /nolog @sapdba_role.sql SR3DB   <-- for the Java schema
Check the sapdba_role.log output file.

Hope it helps

How to determine an index needs to be rebuilt


Summary
          An Oracle server index is a schema object that can speed up the retrieval of rows by using a pointer.
You can create indexes on one or more columns of a table to speed SQL statement execution on that table. If you do not have an index on the column, then a full table scan occurs.
You can reduce disk I/O by using a rapid path access method to locate data quickly. By default, Oracle creates B-tree indexes.
After a table experiences a large number of inserts, updates, and deletes, the index can become unbalanced and fragmented and can hinder query performance.
Knowing when to Rebuild Indexes
We must first get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE command.
The VALIDATE STRUCTURE command can be safely executed without affecting the optimizer. The VALIDATE STRUCTURE command populates the SYS.INDEX_STATS table only. The SYS.INDEX_STATS table can be accessed with the public synonym INDEX_STATS. The INDEX_STATS table will only hold validation information for one index at a time. You will need to query this table before validating the structure of the next index.
Below is a sample output from INDEX_STATS Table.
SQL> ANALYZE INDEX IDX_GAM_ACCT VALIDATE STRUCTURE;
 
Statement processed.
 
SQL> SELECT name, height,lf_rows,lf_blks,del_lf_rows FROM
INDEX_STATS;
NAME                    HEIGHT     LF_ROWS    LF_BLKS    DEL_LF_ROWS
----------------------  ---------  ---------  ---------  -----------
IDX_GAM_ACCT                    2          1          3            6
 
1 row selected.
 
There are two rules of thumb to help determine if the index needs to be rebuilt.
1)     If the index has height greater than four, rebuild the index.
2)     The deleted leaf rows should be less than 20%.
If it is determined that the index needs to be rebuilt, this can easily be accomplished with the ALTER INDEX <INDEX_NAME> REBUILD or ALTER INDEX <INDEX_NAME> REBUILD ONLINE command. It is not recommended to execute this command during normal operating hours. The alternative is to drop and re-create the index; creating an index uses the base table as its data source, which requires a lock on the table, and the index is also unavailable during creation.
 In this example, the HEIGHT column clearly shows the value 2, so this is not a good candidate for rebuilding. For most indexes the height will be quite low, i.e. one or two. I have seen an index on a 2 million-row table with a height of two or three. An index with a height greater than four may need to be rebuilt, as this might indicate a skewed tree structure, which can lead to unnecessary database block reads of the index. Let’s take another example.
SQL> ANALYZE INDEX IDX_NAME VALIDATE STRUCTURE;
 
Statement processed.
 
SQL> SELECT name, height, lf_rows, del_lf_rows, (del_lf_rows/lf_rows)
*100 as ratio FROM INDEX_STATS;
 
NAME                           HEIGHT     LF_ROWS    DEL_LF_ROW RATIO    
------------------------------ ---------- ---------- ---------- -------
IDX_NAME                                1          189         62        32.80
 
1 row selected.
 
In this example, the ratio of deleted leaf rows to total leaf rows
is clearly above 20%. This is a good candidate for rebuilding.
Let’s rebuild the index and examine the results
 
SQL> ALTER INDEX IDX_NAME REBUILD;
 
Statement processed.
 
SQL> ANALYZE INDEX IDX_NAME VALIDATE STRUCTURE;
 
Statement processed.
 
SQL> SELECT name, height, lf_rows, del_lf_rows, (del_lf_rows/lf_rows)*
100 as ratio FROM INDEX_STATS;
 
NAME                           HEIGHT     LF_ROWS    DEL_LF_ROW RATIO    
------------------------------ ---------- ---------- ---------- -------
IDX_NAME                               1          127         0        0
 
1 row selected.
Examining the INDEX_STATS table shows that the 62 deleted leaf rows were dropped from the index. Notice that the total number of leaf rows went from 189 to 127, which is a difference of 62 leaf rows (189-127). This index should provide better performance for the application.

From--Mayilselvan Subramanian

Friday, January 6, 2017

RMAN Backup error: What to do if some archivelog is missing or datafile is offline during RMAN backup?

I am often asked in my seminars what a DBA should do if some archivelog is missing during an RMAN backup.
Apart from crossing your fingers, you can at least still take a successful backup; the backup should not be skipped just because some files are missing. Nothing is more important than a successful backup, since all future recovery depends on the backups being there. First crosscheck the archivelogs:
RMAN> crosscheck archivelog all;
and then run the following to delete the expired archivelogs:
RMAN> delete expired archivelog all;
Also refer to the SKIP INACCESSIBLE clause of the BACKUP command, for example:
RMAN> backup archivelog all skip inaccessible delete input;
RMAN> change archivelog all validate;

Run the validate command above as well, because without it "skip inaccessible" will be needed every time you start a backup of the archivelogs.
Alternatively, for the missing archivelogs you can do the following:
connect target /
connect catalog rman/pwd@rmancatalog
run {
change archivelog from logseq = xxx until logseq =
yyy unavailable;
}
xxx and yyy are the logseq numbers of the archivelogs you are missing.
RMAN> RUN
{
SET MAXCORRUPT FOR DATAFILE 1 TO 10;
BACKUP DATABASE
SKIP INACCESSIBLE
SKIP READONLY
SKIP OFFLINE;
}
To back up the database while skipping offline and read-only tablespaces, you can run the following command:
RMAN> BACKUP DATABASE
SKIP READONLY
SKIP OFFLINE;
The above discussion also applies to other file types that are unavailable during the backup and cause the backup to raise an exception.

When to rebuild index ?

The index is one of the most interesting objects in the database and always attracts DBAs and developers looking to fine-tune their queries, because an index is like salt in food: it should be used in the right quantity. Sometimes an index does not deliver the expected performance; in those cases we need to check whether the index needs to be rebuilt or not.
I went through some research docs which list certain criteria for rebuilding an index:
The number of deleted leaf nodes - the number of index nodes that have been logically deleted as a result of row deletes. Remember that Oracle leaves "dead" index nodes in the index when rows are deleted. This is done to speed up SQL deletes, since Oracle does not have to rebalance the index tree when rows are deleted.
Index height - the number of levels that are spawned by the index as a result of row inserts. When a large number of rows is added to a table, Oracle may spawn additional levels of an index to accommodate the new rows. Hence, an Oracle index may have four levels, but only in those areas of the index tree where the massive inserts have occurred. Oracle indexes can support many millions of entries in three levels.
Gets per index access - the amount of logical I/O that is required to fetch a row with the index. As you may know, a logical "get" is not necessarily a physical I/O since much of the index may reside in the Oracle buffer cache. Unfortunately, Oracle does not make it easy to capture this information. In Oracle we must issue these commands:
ANALYZE INDEX index_name COMPUTE STATISTICS 
ANALYZE INDEX index_name VALIDATE STRUCTURE 
After you analyze the index as above, you may want to consider rebuilding any index where the height is more than three levels, since three levels will support millions of index entries.
As you may know, you can easily rebuild an Oracle index with the command:
ALTER INDEX index_name REBUILD
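As a rough schema-wide check, you can also look at the BLEVEL column of DBA_INDEXES (BLEVEL is the number of branch levels, i.e. index height minus one), assuming index statistics are reasonably current; a minimal sketch:
select owner, index_name, blevel, leaf_blocks
from dba_indexes
where owner = '&schema_name'
and blevel > 3
order by blevel desc;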
Looking through other docs I found that there are some myths around this. I am a big fan of Richard Foote's work on Oracle databases, especially on indexes; he has broken several myths about Oracle internals. The following examples are based on some of his documents.
Are deleted leaf blocks reused? Yes, but it depends on how soon data is reinserted; as the B-tree balances itself, it will reuse the space.
SQL> create table test_empty_block (id number, value varchar2(10));
Table created.
SQL> begin
2 for i in 1..10000 loop
3 insert into test_empty_block values (i, 'Bowie');
4 end loop;
5 commit;
6 end;
7 /
PL/SQL procedure successfully completed.
SQL> create index test_empty_block_idx on test_empty_block (id);
Index created.



SQL> delete test_empty_block where id between 1 and 9990;
9990 rows deleted.
SQL> commit;
Commit complete.
SQL> analyze index test_empty_block_idx validate structure;
Index analyzed.
SQL> select lf_blks, del_lf_rows from index_stats;
LF_BLKS DEL_LF_ROWS
---------- -----------
21             9990



Now reinsert a similar volume but after the last current values
SQL> begin
2 for i in 20000..30000 loop
3 insert into test_empty_block values (i, 'Bowie');
4 end loop;
5 commit;
6 end;
7 /
PL/SQL procedure successfully completed.
SQL> analyze index test_empty_block_idx validate structure;
Index analyzed.
SQL> select lf_blks, del_lf_rows from index_stats;
LF_BLKS DEL_LF_ROWS
---------- -----------
21             0
Note that all the empty blocks have been reused and the deleted rows cleaned out.
The following select statement was executed after the 9990 deletions in the previous example:
SQL> select /*+ index(test_empty_block) */ * from test_empty_block
where id between 1 and 100000;
10 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TEST_EMPTY_BLOCK'
2 1 INDEX (RANGE SCAN) OF 'TEST_EMPTY_BLOCK_IDX' (NON-UNIQUE)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
28 consistent gets
0 physical reads
0 redo size
549 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10 rows processed

  • Deleted space is cleaned out by subsequent writes

  • Deleted space is cleaned out by delayed block cleanout

  • Fully emptied blocks are placed on freelist and recycled (although remain in the index structure)

  • Suggestions that deleted space can never be reused are wrong and yet another silly myth

Conditions for Rebuilds
Large free space (generally 50%+), which indexes rarely reach, and
Large selectivity, which most index accesses never reach, and
Response times are adversely affected, which is rarely the case.
Note requirement of some free space anyways to avoid insert and
subsequent free space issues
Benefit of rebuild based on various dependencies which include:
- Size of index
- Clustering Factor
- Caching characteristics
- Frequency of index accesses
- Selectivity (cardinality) of index accesses
- Range of selectivity (random or specific range)
- Efficiency of dependent SQL
- Fragmentation characteristics (does it affect the portion of the index that is frequently used?)
- I/O characteristics of the index (severe contention or I/O bottlenecks)
- The list goes on and on ....
Myths:

  • The vast majority of indexes do not require rebuilding

  • That Oracle B-tree indexes can become "unbalanced" and need to be rebuilt is a myth

  • That deleted space in an index is "deadwood" and over time requires the index to be rebuilt is a myth

  • That an index becomes inefficient once it reaches "x" number of levels and requires a rebuild is a myth

  • That an index with a poor clustering factor needs to be rebuilt is a myth

  • That indexes need to be regularly rebuilt to improve performance is a myth

I think the above discussion will encourage us to run more quantitative tests; I have given enough of an approach for it.
Test & Enjoy!!

Wednesday, January 4, 2017

Oracle : Index Fragmentation

This script provides information critical in determining whether an index is a candidate for rebuilding. An index is a candidate for rebuilding when a relatively high number of its leaf rows have been deleted.

set verify off  
def ownr  = &&1  
def name  = &&2  
  
ttitle center 'Index Fragmentation Statistic' skip 2
  
set heading off  
  
col name                 newline  
col lf_blk_rows          newline  
col del_lf_rows          newline  
col ibadness newline   
  
analyze index &ownr..&name validate structure;
  
select  
  'index name        '||name,  
  'leaf rows deleted '||to_char(del_lf_rows,'999,999,990')  del_lf_rows,  
  'leaf rows in use  '||to_char(lf_rows-del_lf_rows,'999,999,990')  lf_blk_rows,   
  'index badness     '||to_char(del_lf_rows/(lf_rows+0.00001),'999,990.999') ibadness  
from  
  index_stats  
/  
  
undef ownr  
undef name  
set verify on
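
For example (the script file name, owner and index name below are placeholders for illustration), if the script above is saved as index_frag.sql, you would run it from SQL*Plus as:

SQL> @index_frag SCOTT EMP_NAME_IDX

and read the "index badness" line as the fraction of leaf rows that have been deleted.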

Table Fragmentation in Oracle Database - to get performance benifit

In an Oracle schema there are tables which show a huge difference between their actual size (size from USER_SEGMENTS) and their expected size from USER_TABLES (num_rows * avg_row_len, in bytes). This is due either to fragmentation in the table or to the table's stats not being up to date in USER_TABLES.

We will discuss:
1) What is Table Fragmentation?
2) How to understand HWM (High Water Mark) in table?
3) What are the reasons to reorganization of table?
4) How to find most fragmented tables?
5) How to reset HWM / remove fragmentation?
6) How to get more performance benefits from most fragmented tables?
7) Demo

1) What is Table Fragmentation?

If a table is only subject to inserts, there will not be any fragmentation. Fragmentation comes in when we update or delete data in the table. The space which gets freed up during non-insert DML operations is not immediately re-used (and sometimes may never get reused at all). This leaves holes in the table, which results in table fragmentation.

To understand it more clearly, we need to be clear on how Oracle manages space for tables.

When rows are not stored contiguously, or if rows are split onto more than one block, performance decreases because these rows require additional block accesses.

Note that table fragmentation is different from file fragmentation. When a lot of DML operations are applied on a table, the table becomes fragmented, because DML does not release free space from the table below the HWM (High Water Mark).

2) How to understand HWM (High Water Mark) in table?

HWM is an indicator of USED BLOCKS in the database. Blocks below the high water mark (used blocks) have at least once contained data; that data might since have been deleted. Since Oracle knows that blocks beyond the high water mark don't contain data, it only reads blocks up to the high water mark when doing a full table scan.

DDL such as TRUNCATE or ALTER TABLE ... MOVE resets the HWM.
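
To see roughly where the HWM sits for a given segment, you can call DBMS_SPACE.UNUSED_SPACE (a sketch only; it assumes you have EXECUTE on DBMS_SPACE, and the owner/table names are just examples). Blocks below the HWM = total_blocks - unused_blocks:

set serveroutput on
declare
  l_total_blocks   number;
  l_total_bytes    number;
  l_unused_blocks  number;
  l_unused_bytes   number;
  l_file_id        number;
  l_block_id       number;
  l_last_block     number;
begin
  dbms_space.unused_space(
    segment_owner             => 'CUSTOMER',                 -- example owner
    segment_name              => 'CUSTOMER_SERVICES_DTLS',   -- example table
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('Blocks below HWM: '||(l_total_blocks - l_unused_blocks));
  dbms_output.put_line('Blocks above HWM: '||l_unused_blocks);
end;
/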

3) What are the reasons to reorganization of table?

a) Slower response time (from that table)
b) High number of chained (actually migrated) rows. 
c) Table has grown many folds and the old space is not getting reused.

Note: Index-based queries may not benefit as much from a reorg as queries which do full table scans.

4) How to find most fragmented tables?

In an Oracle schema there are tables which show a huge difference between their actual size (from DBA_SEGMENTS) and the expected size computed from DBA_TABLES (num_rows * avg_row_len, in bytes). This difference is due either to fragmentation in the table or to stale stats in DBA_TABLES, and the queries below use it to find the most fragmented tables.

What actions should be taken on the most fragmented tables?

Steps to Check and Remove Table Fragmentation:- 

i)  Gather table stats:

To see the exact difference between the table's actual size (DBA_SEGMENTS) and its stats-based size (DBA_TABLES), we need up-to-date stats on the table in DBA_TABLES; the difference between these values is what reports the actual fragmentation to the DBA. Check the LAST_ANALYZED value for the table in DBA_TABLES. If this value is recent you can skip this step; otherwise gather the table stats to get updated figures.

exec dbms_stats.gather_table_stats('&schema_name','&table_name');

ii) Check Table size:

Check the current size of the table and keep a note of it, so that you can compare it after removing the fragmentation.

select segment_name, bytes/(1024*1024*1024) "Size GB" from dba_segments where segment_name='&table_name';  -- keep a record to compare after defragmentation

iii) Check for Fragmentation in table:

The query below shows the total size of the table with fragmentation, the expected size without fragmentation, and what % of space can be reclaimed by removing the fragmentation. The DBA has to provide table_name and schema_name as input to this query.

SQL>
set pages 50000 lines 32767;
select owner,
       table_name,
       round((blocks * 8), 2) || 'kb' "Fragmented size",
       round((num_rows * avg_row_len / 1024), 2) || 'kb' "Actual size",
       round((blocks * 8), 2) - round((num_rows * avg_row_len / 1024), 2) || 'kb',
       ((round((blocks * 8), 2) - round((num_rows * avg_row_len / 1024), 2)) /
       round((blocks * 8), 2)) * 100 - 10 "reclaimable space % "
  from dba_tables
 where table_name = '&table_Name'
   AND OWNER LIKE '&schema_name';

Note: This query fetches data from dba_tables, so the accuracy of the result depends on the dba_tables stats.
To find the top 10 fragmented tables:
SQL>
select *
      from (select table_name,
               round((blocks * 8), 2) "size (kb)",
               round((num_rows * avg_row_len / 1024), 2) "actual_data (kb)",
               (round((blocks * 8), 2) -
               round((num_rows * avg_row_len / 1024), 2)) "wasted_space (kb)"
          from dba_tables
         where (round((blocks * 8), 2) >
               round((num_rows * avg_row_len / 1024), 2))
         order by 4 desc)
 WHERE ROWNUM <= 10;

If you find the reclaimable space % value is more than 20%, you can expect fragmentation in the table.

 Suppose the DBA finds 50% reclaimable space with the above query; he can then proceed with removing the fragmentation.

5) How to reset HWM / remove fragmentation?

We have three options to reorganize fragmented tables:

1. Alter table move (to another tablespace, or the same tablespace) and rebuild the indexes
   (depends on the free space available in the tablespace)
2. Export and import the table (difficult to implement in a production environment)
3. Shrink command (from Oracle 10g onwards)
   (the shrink command is only applicable to tables in tablespaces with automatic segment space management)

Here I am following options 1 and 3, keeping table availability in mind.
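
Before choosing option 3, you can check whether the table's tablespace uses automatic segment space management; a minimal query sketch:

SQL> select tablespace_name, segment_space_management
     from dba_tablespaces
     where tablespace_name = '&tablespace_name';

A value of AUTO means the shrink command can be used.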

Option: 1 

Alter table move (to another tablespace, or same tablespace) and rebuild indexes:-
Collect status of all the indexes on the table:-

We will record the status of all indexes in one place, so that we can verify that they are back to this state after completing the exercise.

SQL> select index_name,status from dba_indexes 
where table_name like '&table_name';

Move table in to same or new tablespace:
---------------------------------------
In this step we move the fragmented table within the same tablespace, or from one tablespace to another, to reclaim the fragmented space. Find the current size of your table from dba_segments and check whether the same tablespace (or another one) has that much free space available, so that the table can be moved there.

Steps to Move table in to same tablespace:

-----------------------------------------
alter table <table_name> move;   ------> Move to same tablespace

OR

Steps to Move table in to new tablespace:
----------------------------------------
alter table <table_name> enable row movement;
alter table <table_name> move tablespace <new_tablespace_name>;

Now, if you moved the table to a different tablespace, you can bring it back to its original tablespace using the command below:

alter table table_name move tablespace old_tablespace_name;

Now,Rebuild all indexes:
-----------------------
We need to rebuild all the indexes on the table, because the move command leaves all of them in an UNUSABLE state.

SQL> select status,index_name from dba_indexes where table_name = '&table_name';

STATUS INDEX_NAME
-------- ------------------------------
UNUSABLE INDEX_NAME                            -------> Here, value in status field may be valid or unusable.

SQL> alter index <INDEX_NAME> rebuild online;  -------> Use this command for each index
Index altered.

SQL> select status,index_name from dba_indexes where table_name = '&table_name';

STATUS INDEX_NAME
-------- ------------------------------
VALID INDEX_NAME                               -------> Here, value in status field must be valid.

After completing these steps, table statistics must be gathered.

Option: 2 Export and import the table:-

Click here to read from my posts

Option: 3 Shrink command (from Oracle 10g onwards):-
------------------------------------------

Shrink command: 
--------------
It's a new 10g feature to shrink (reorganize) tables (almost) online; it can be used with automatic segment space management.

This command is only applicable to tables in tablespaces with automatic segment space management.

Before using this command, you should have row movement enabled.

SQL> alter table <table_name> enable row movement;
Table altered.

There are 2 ways of using this command.

1. Rearrange rows and reset the HWM:
-----------------------------------
Part 1: Rearrange (All DML's can happen during this time)

SQL> alter table <table_name> shrink space compact;
Table altered.

Part 2: Reset the HWM (no DML can happen during this step, but it is fairly quick; in fact it usually goes unnoticed).

SQL> alter table <table_name> shrink space;
Table altered.

2. Directly reset the HWM:
-------------------------
(Both the rearranging and the resetting of the HWM happen in one statement.)
SQL> alter table <table_name> shrink space; 
Table altered.

Advantages over the conventional methods are:
--------------------------------------------
1. Unlike "alter table move ..", the indexes are not left in an UNUSABLE state; after the shrink command the indexes are updated as well.
2. It is an online operation, so you do not need downtime to do this reorg.
3. It does not require any extra space for the process to complete.

After completing these steps, table statistics must be gathered.

6) How to get more performance benefits from most fragmented tables?

After doing the above steps, you must gather statistics so that the optimizer can create the best execution plan during query execution. Below are a few sampling methods for gathering stats; in most cases I got performance benefits with the auto sampling method.

Gather table stats:
------------------
SQL> exec dbms_stats.gather_table_stats('&owner_name','&table_name');
PL/SQL procedure successfully completed.
OR
SQL> exec dbms_stats.gather_table_stats('&owner_name', '&table_name', estimate_percent => dbms_stats.auto_sample_size);
OR 
SQL> exec dbms_stats.gather_table_stats(ownname=>'&owner_name',
tabname=>'&table_name',estimate_percent => 100, cascade=>true, method_opt=>'for all columns size AUTO'); 

–- For entire schema
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('&schema_name',DBMS_STATS.AUTO_SAMPLE_SIZE);

Check Table size:
-----------------
Now check the table size again; you will find the size of the table reduced.

SQL> select segment_name, bytes/(1024*1024*1024) "Size GB" from dba_segments where segment_name='&table_name';

Now compare with the figure you recorded earlier; you should see some benefit, and performance will surely improve.

7) Demonstration:

Here I will show one demo activity. When you do this yourself, first complete it in your pre-production database and collect performance statistics before and after; then, based on the benefit, you can plan it for production.

Demo:
1) Take a count of all invalid objects, for the whole database as well as for the schema concerned

select count(1) from dba_objects where status='INVALID';  -- 2386

select count(1) from dba_objects where status='INVALID' and owner='CUSTOMER';  -- 0

2) Take a top list (preferably 10) of tables by fragmentation

select *
  from (select table_name,
               round((blocks * 8), 2) "size (kb)",
               round((num_rows * avg_row_len / 1024), 2) "actual_data (kb)",
               (round((blocks * 8), 2) -
               round((num_rows * avg_row_len / 1024), 2)) "wasted_space (kb)"
          from dba_tables
         where (round((blocks * 8), 2) >
               round((num_rows * avg_row_len / 1024), 2))
         order by 4 desc)
 WHERE ROWNUM <= 10;

Output:

TABLE_NAME                      size (kb) actual_data (kb) wasted_space (kb)
------------------------------ ---------- ---------------- -----------------
CUSTOMER_SERVICES_DTLS       12382432      10341757.49       2040674.51
PKG_ACTUAL_AVAILABLE           7291976       5736686.1         1555289.9
PROCESSED_TRAN                  1601072       367932.44         1233139.56
PROCESSED_CUURENCY              1314672       145479.1          1169192.9
ACTUAL_SERVICES_DTLS            7452568       6332113.25        1120454.75
SERVICEREQUESTDETAILS           3037840       1932758.36        1105081.64
PKG_RESULTREPORTDTLS            1436632       440030.4          996601.6
BATCH_TXN_SERIALITEM            2621128       1820127.37        801000.63
CUSTOMER_BILLDETAILS            233616        1451156.52        782459.48


10 rows selected

-- Find size
SQL> select segment_name, bytes/(1024*1024*1024) "Size GB" from
  2  dba_segments where segment_name='CUSTOMER_SERVICES_DTLS';
  

SEGMENT_NAME                           Size GB
-------------------------------------- ----------
CUSTOMER_SERVICES_DTLS             11.828125

SQL> 

3) Take one sample table. Here we'll take "CUSTOMER_SERVICES_DTLS". Find its owner.

SQL>
select owner,table_name,tablespace_name,num_rows,blocks
from dba_tables where table_name='CUSTOMER_SERVICES_DTLS';

output:

OWNER      TABLE_NAME                     TABLESPACE_NAME    NUM_ROWS     BLOCKS
---------------------------- ------------------------------ ---------- ----------
CUSTOMER      CUSTOMER_SERVICES_DTLS     CUSTOMER              74055662    1542825

4) As a precaution, do the following:

a) Take DDL

-- Create table
create table CUSTOMER.CUSTOMER_SERVICES_DTLS
(
 xxxxxx
) tablespace CUSTOMER;
--Create/Recreate indexes 
create index CUSTOMER.INDX_TXNID on CUSTOMER.CUSTOMER_SERVICES_DTLS (TXNID)
  tablespace CUSTOMER;
create index CUSTOMER.INDX_SYSTEMUPDATEDDATE on CUSTOMER.CUSTOMER_SERVICES_DTLS (SYSTEMUPDATEDDATE)
  tablespace CUSTOMER;

b) Take a logical backup using expdp:

expdp directory=data_pump dumpfile=CUSTOMER_SERVICES_DTLS.dmp logfile=CUSTOMER_SERVICES_DTLS.log tables=CUSTOMER.CUSTOMER_SERVICES_DTLS exclude=statistics

SQL> 

5) Verify all index status

SQL> select index_name,status
  2  from dba_indexes where table_name='CUSTOMER_SERVICES_DTLS';

INDEX_NAME                     STATUS
------------------------------ --------
INDX_TXNID             VALID
INDX_SYSTEMUPDATEDDATE          VALID

SQL> 

6) Now move the table:
SQL> connect / as sysdba
SQL> set timing on;
SQL> alter table CUSTOMER.CUSTOMER_SERVICES_DTLS move;

Table altered.

Elapsed: 00:11:12.18
SQL> 

(Note: Depending on the table size, a large number of archivelogs will be generated. You must have sufficient space in the required tablespace/datafiles, including the TEMP tablespace.)

7) Now again verify these:
a) No. of records
SQL> select count(rowid) from CUSTOMER.CUSTOMER_SERVICES_DTLS;

COUNT(ROWID)
------------
    74055662

SQL> 

b) Index statistics

SQL> select index_name,status
  2  from dba_indexes where table_name='CUSTOMER_SERVICES_DTLS';

INDEX_NAME                     STATUS
------------------------------ --------
INDX_TXNID             UNUSABLE
INDX_SYSTEMUPDATEDDATE          UNUSABLE

SQL> 

Here the indexes are in "UNUSABLE" status, so they must be rebuilt.

8) Rebuild the Indexes

SQL> alter index CUSTOMER.INDX_TXNID rebuild online;

Index altered.
SQL> alter index CUSTOMER.INDX_SYSTEMUPDATEDDATE rebuild online;

Index altered.
SQL> 

Now check the index stats

SQL> select index_name,status from dba_indexes where table_name='CUSTOMER_SERVICES_DTLS';
INDEX_NAME                     STATUS
------------------------------ --------
INDX_TXNID             VALID
INDX_SYSTEMUPDATEDDATE          VALID

SQL> 

Now all are valid.

9) Now check the number of rows and blocks

SQL>
select owner,table_name,tablespace_name,num_rows,blocks
from dba_tables where table_name='CUSTOMER_SERVICES_DTLS';

output:

OWNER      TABLE_NAME                     TABLESPACE_NAME    NUM_ROWS     BLOCKS
---------------------------- ------------------------------ ---------- ----------
CUSTOMER      CUSTOMER_SERVICES_DTLS     CUSTOMER              74055662    151033

See, the number of blocks has reduced.

10) Now Gather table stats:

SQL> exec dbms_stats.gather_table_stats(ownname=>'CUSTOMER',tabname=>'CUSTOMER_SERVICES_DTLS',estimate_percent => 100, cascade=>true, method_opt=>'for all columns size AUTO'); 


11) Check Table size:

Now check the table size again; you will find the size of the table reduced.

SQL> 
SQL> select segment_name, bytes/(1024*1024*1024) "Size GB" from
  2  dba_segments where segment_name='CUSTOMER_SERVICES_DTLS';
  

SEGMENT_NAME                           Size GB
-------------------------------------- ----------
CUSTOMER_SERVICES_DTLS             10.02131

SQL> 


The table size has reduced as well.

12) Now cross-check all valid/invalid object counts and match them against the earlier figures. You can release the database if you had taken downtime.


Issues that may come up during the above activity:

SQL> alter table CUSTOMER.CUSTOMER_SERVICES_DTLS move; 
alter table CUSTOMER.CUSTOMER_SERVICES_DTLS move
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 8192 in tablespace CUSTOMER


This means you do not have sufficient space in the required tablespace (and possibly in the temp tablespace as well).

So add more datafiles (and tempfiles) if the existing datafiles and tempfiles have reached 32G.