Oracle DBA - RajaBaskar Blog

Oracle 11g Compression Feature - Test Case



Recently I ran a few test cases on Oracle 11g compression (11.1.0.7 on Windows OS), and I would like to share the results with you all.

Snippet on Compression:

Oracle Table compression for OLTP Operations:

1.     Oracle's unique compression algorithm works by eliminating duplicate values within a database block, even across multiple columns.
2.     Compressed blocks contain a structure called a symbol table that maintains compression metadata.
3.     When a block is compressed, duplicate values are eliminated by first adding a single copy of the duplicate value to the symbol table.
4.     Each duplicate value is then replaced by a short reference to the appropriate entry in the symbol table.
5.     Oracle table compression does not support tables with more than 255 columns or LONG data type columns.
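
As a quick aside (a sketch, not part of the original test): OLTP compression can also be declared at table creation time. In 11.1 the clause is COMPRESS FOR ALL OPERATIONS; 11.2 renamed it COMPRESS FOR OLTP. The table name and columns below are illustrative only.

-- 11.1 syntax; use COMPRESS FOR OLTP on 11.2
create table am_demo (
  owner       varchar2(30),
  object_name varchar2(128),
  object_id   number
) compress for all operations;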




Benefits:

·         Up to 3x to 4x storage savings, depending on the nature of the data being compressed. (The test table achieved a 4:1 ratio.)
·         Less I/O – tables reside in fewer blocks, so full table scans and index range scans can retrieve the rows with fewer disk I/Os.
·         Buffer cache efficiency – due to fewer block reads for the same data (less logical and physical I/O).
·         Transaction query efficiency – when running a transactional query (SELECT), Oracle reads the compressed block directly without having to uncompress it first, so there is no performance degradation for queries.

There is a small performance overhead (CPU overhead) during write operations (DML) on compressed tables.

##### One more sample test #####

Note: we flushed the database buffer cache and result cache before each and every test.
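
For reference, a minimal sketch of the flush commands (assuming SYSDBA privileges; DBMS_RESULT_CACHE.FLUSH is the standard 11g procedure):

-- flush cached data blocks so every test starts cold
alter system flush buffer_cache;
-- flush the server result cache
exec dbms_result_cache.flush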

##### Create 2 tables - Compressed table and uncompressed table #####





SQL> create table Arul.AM_UNCOMPRESS as select * from AM_TESTTABLE where 1=2;

Table created.

SQL>  create table Arul.AM_COMPRESS as select * from AM_TESTTABLE where 1=2;

Table created.


##### Convert the normal table to a compressed table and insert some records #####

SQL> alter table Arul.AM_COMPRESS compress for all operations;

Table altered.

SQL> insert into Arul.AM_UNCOMPRESS (select * from AM_TESTTABLE);

61487 rows created.


SQL>  insert into Arul.AM_COMPRESS (select * from AM_TESTTABLE);

61487 rows created.

SQL> commit;

Commit complete.


#####  Insert more records into both tables #####


SQL>  insert into Arul.AM_UNCOMPRESS (select * from AM_TESTTABLE);

61487 rows created.

SQL>  insert into Arul.AM_COMPRESS (select * from AM_TESTTABLE);

61487 rows created.


SQL> commit;

Commit complete.


#####  Table compression status #####

OWNER                          TABLE_NAME                     COMPRESS COMPRESS_FOR
------------------------------ ------------------------------ -------- ------------
RAJA                        AM_UNCOMPRESS                  DISABLED
RAJA                        AM_COMPRESS                    ENABLED  OLTP
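
(A sketch of the dictionary query presumably behind the listing above:)

select owner, table_name, compression, compress_for
from dba_tables
where table_name in ('AM_COMPRESS', 'AM_UNCOMPRESS');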



-- Compressed and uncompressed table segment sizes and block counts

OWNER                SEGMENT_NAME             BLOCKS SUM(BYTES/1024/1024)
-------------------- -------------------- ---------- --------------------
RAJA              AM_COMPRESS                 320                    5
RAJA              AM_UNCOMPRESS               896                   14
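
(Again, a sketch of the likely query, based on DBA_SEGMENTS:)

select owner, segment_name, sum(blocks), sum(bytes/1024/1024)
from dba_segments
where segment_name in ('AM_COMPRESS', 'AM_UNCOMPRESS')
group by owner, segment_name;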


Comments:

1.    The compressed table occupies far fewer blocks than the uncompressed table (roughly a 1:3 ratio).
2.    The compressed table's segment size is correspondingly smaller (1:3 ratio).


#####  Elapsed time for the SELECT query against the compressed and uncompressed tables #####


SQL> select * from ARUL.AM_unCOMPRESS where object_id < 1000;

1860 rows selected.

Elapsed: 00:00:00.62


SQL> select * from ARUL.AM_COMPRESS where object_id < 1000;

1860 rows selected.

Elapsed: 00:00:00.28


Comments:

v  Elapsed time is much lower for the compressed table than for the uncompressed table (roughly a 1:2 ratio).


#####  Statistics for the SELECT query against the compressed and uncompressed tables #####


--UNCOMPRESS Table

16:17:51 SQL> select * from ARUL.AM_unCOMPRESS where object_id < 1000;

1860 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3732907867

-----------------------------------------------------------------------------------
| Id  | Operation         | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |               |    10 |  2070 |   206   (0)| 01:54:56 |
|*  1 |  TABLE ACCESS FULL| AM_UNCOMPRESS |    10 |  2070 |   206   (0)| 01:54:56 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"<1000)

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          5  recursive calls
          0  db block gets
       1073  consistent gets
        878  physical reads
          0  redo size
      95285  bytes sent via SQL*Net to client
       1873  bytes received via SQL*Net from client
        125  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1860  rows processed


--COMPRESS Table


16:18:24 SQL> select * from ARUL.AM_COMPRESS where object_id < 1000;

1860 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 1321419037

---------------------------------------------------------------------------------
| Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |             |  2603 |   526K|    74   (0)| 00:41:18 |
|*  1 |  TABLE ACCESS FULL| AM_COMPRESS |  2603 |   526K|    74   (0)| 00:41:18 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"<1000)

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          5  recursive calls
          0  db block gets
        507  consistent gets
        311  physical reads
          0  redo size
      95192  bytes sent via SQL*Net to client
       1873  bytes received via SQL*Net from client
        125  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1860  rows processed


Comments:

v  Physical reads are much lower when accessing the compressed table than the uncompressed table.
v  Cost and elapsed time are also much lower than for the uncompressed table.



################ DML TEST CASE for INSERT statement ################

##### Elapsed time for an INSERT against the compressed and uncompressed tables #####

16:21:06 SQL> insert into Arul.AM_UNCOMPRESS (select * from Arul.AM_UNCOMPRESS);

122974 rows created.

Elapsed: 00:00:00.56


16:22:29 SQL> insert into Arul.AM_COMPRESS (select * from Arul.AM_UNCOMPRESS);

122974 rows created.

Elapsed: 00:00:01.80


Comments:

v  For INSERT operations, the uncompressed table performs better than the compressed table.


################ DML TEST CASE for UPDATE statement ################

##### Elapsed time for an UPDATE against the compressed and uncompressed tables #####


16:31:51 SQL> update Arul.AM_COMPRESS set owner='AM' where owner='SYS';

248200 rows updated.

Elapsed: 00:00:13.60

16:32:12 SQL> commit;

Commit complete.


16:32:20 SQL> update Arul.AM_UNCOMPRESS set owner='AM' where owner='SYS';

248200 rows updated.

Elapsed: 00:00:03.31

16:32:35 SQL> commit;

Commit complete.

Comments:

v  DML operations on the uncompressed table perform better than on the compressed table (based on elapsed time).


Reference: Don Burleson's blog and a few other sites.





Script to Collect DB Upgrade/Migrate Diagnostic - dbupgdiag.sql


Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql) [ID 556610.1]

Recently I found a Metalink note that may be useful for database upgrades.
This upgrade diagnostic script helps identify whether the database was upgraded successfully or not.

We can run the script both before and after upgrading the database.
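
Typical usage is simply to run the downloaded script from SQL*Plus as SYSDBA (a sketch; verify the exact steps in the note itself):

$ sqlplus / as sysdba
SQL> @dbupgdiag.sql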

Script Compatibility:

Oracle Server Enterprise Edition - Version: 9.2.0.1 to 11.2.0.3

Download the script from metalink:

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=556610.1

Thanks !!!



db_file_multiblock_read_count in oracle 11g

Whenever we face a performance issue, a few questions immediately come to mind:

Why is my index not being used?

Why is my query doing a full table scan instead of an index scan?

Is there any fragmentation in the segments?

Are there any stale statistics?

Has the data volume changed?

Has any database-level parameter been changed recently?

Why is I/O performance so slow?



Yesterday I found a few good articles about MBRC. I just want to summarize and share them with you all.

DB_FILE_MULTIBLOCK_READ_COUNT

The Oracle database improves the performance of table scans by increasing the number of blocks read in a single I/O operation. If a SQL statement accesses all the records in a table (a full table scan), returning many blocks per I/O read is better.

Prior to Oracle 10gR2, the MBRC parameter decided how many blocks Oracle could fetch from disk into the buffer cache in a single I/O read.

In Oracle 10gR2 and 11g, if we do not explicitly set the MBRC parameter, Oracle automatically decides its value depending on the operating system's optimal I/O size and the buffer cache size.

If we set the MBRC value too HIGH, what will happen?

If the MBRC value is too high, it affects query access path selection: the optimizer tends to choose full table scans instead of index scans, so indexes end up not being used.

OLTP/batch environments: we can set the MBRC value to somewhere between 4 and 16.

Decision support/data warehousing systems: we can set the MBRC value to 1 MB/DB_BLOCK_SIZE at the instance or session level, as in the sketch below. Most DW queries do full table scans, and FTS is the better access path there.
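
A minimal sketch of setting the parameter (the value 128 assumes an 8 KB block size, i.e. 1 MB / 8 KB):

-- session level, e.g. for a DW-style session:
alter session set db_file_multiblock_read_count = 128;

-- instance level, e.g. for an OLTP system:
alter system set db_file_multiblock_read_count = 16 scope=both;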

How does it work internally?

MBRC, DB_BLOCK_SIZE and SSTIOMAX are related, and together these parameters decide Oracle's I/O performance.

SSTIOMAX – this is an internal Oracle parameter and cannot be changed; its value can vary between Oracle versions.

The SSTIOMAX value decides the maximum amount of data transferred in a single I/O read or write operation.



SSTIOMAX Value:

1)     Oracle 7.3 = 128 KB

2)     Oracle 8.0.5 = 1 MB

3)     Up to Oracle 10.2.0.5 = 1 MB (Solaris SPARC 64-bit/Linux)

       From Oracle 10gR2 onwards, Oracle tunes this automatically. If we do not set the MBRC parameter, Oracle will use at most 1 MB for a single I/O read/write operation.

4)     Oracle 11gR2 = 32 MB (not yet confirmed)

                      db_block_size * db_file_multiblock_read_count <= SSTIOMAX



The MBRC default value is 8, and we can change this parameter at both the instance and session level.

My database has db_file_multiblock_read_count = 32. Does that mean the server process can fetch 32 blocks from disk into the buffer cache in a single I/O? NO.
 

How is the MBRC value calculated?

The DB_FILE_MULTIBLOCK_READ_COUNT parameter works based on the DB block size and the OS-level I/O size.

 

Scenario: 1
DB_BLOCK_SIZE = 8K

Tablespace block size = 8K

DB_FILE_MULTIBLOCK_READ_COUNT = 32

How much data can the server process fetch in a single I/O?

I/O = (DB_BLOCK_SIZE) * (DB_FILE_MULTIBLOCK_READ_COUNT) = 8 KB * 32 = 256 KB per I/O

Note: DB block size and tablespace block size are the same.

Scenario: 2
DB_BLOCK_SIZE = 8K

Tablespace block size = 4K

DB_FILE_MULTIBLOCK_READ_COUNT = 32

How many blocks can the server process fetch in a single I/O?

= ((DB_BLOCK_SIZE) * (DB_FILE_MULTIBLOCK_READ_COUNT)) / tablespace block size = (8 KB * 32) / 4 KB = 64 blocks per I/O (still 256 KB of data)

Note: DB block size and tablespace block size are different.

Scenario: 3
DB_BLOCK_SIZE = 8K

Tablespace block size = 8K

DB_FILE_MULTIBLOCK_READ_COUNT = 8 (default value - not explicitly set)

Oracle 10gR2 – OS optimal I/O size is 1 MB


How is the MBRC value calculated automatically in 10gR2 and later releases?

     db_block_size * db_file_multiblock_read_count <= SSTIOMAX

      1 MB = 8 KB * MBRC

      MBRC = 1024 KB (OS-level optimal I/O size) / 8 KB (block size)

      MBRC = 128 blocks (Oracle can fetch 128 blocks in a single I/O read)
  
How to reset the MBRC parameter to default value? 

1) alter system reset db_file_multiblock_read_count scope=spfile sid='*';

2) Restart the instance.

 Reference:


www.asktom.com                    

 Thanks !!!




DB_Stats - dbms_stats.set_table_stats


Recently I faced a performance issue related to DB statistics.
We suddenly got a CPU spike on the database and found the query causing it.

So I started to investigate.

The query joins three tables, and all of them are partitioned.

  • All the tables had up-to-date statistics, gathered by the weekend job.
  • No fragmentation; no execution plan change.
  • No blocking locks; no concurrent access on these tables.

Finally I found the issue:

A job runs this query every 20 minutes against the current (daily) partition's data.

The daily partition was being loaded while the transactional query was trying to access the real-time data at the same time.

Normally the current daily partition does not have any statistics; they are gathered at the end of the day or week, based on application needs. Even if stats are gathered before the day starts, the partition holds "ZERO" rows, which does not help the optimizer improve query performance.

I had faced the same stats issue on partitions a few years ago; back then I copied the stats from an existing partition (with good stats) to the current partition and fixed the issue.

Oops ... but today the issue happened on Oracle 9i.
There is no DBMS_STATS.COPY_TABLE_STATS feature in Oracle 9i.
But we can manually set the stats on tables/partitions using the procedure below.
Syntax

DBMS_STATS.SET_TABLE_STATS (
   ownname       VARCHAR2, 
   tabname       VARCHAR2, 
   partname      VARCHAR2 DEFAULT NULL,
   stattab       VARCHAR2 DEFAULT NULL, 
   statid        VARCHAR2 DEFAULT NULL,
   numrows       NUMBER   DEFAULT NULL, 
   numblks       NUMBER   DEFAULT NULL,
   avgrlen       NUMBER   DEFAULT NULL, 
   flags         NUMBER   DEFAULT NULL,
   statown       VARCHAR2 DEFAULT NULL,
   no_invalidate BOOLEAN  DEFAULT to_no_invalidate_type (
                                     get_param('NO_INVALIDATE')),
   cachedblk     NUMBER    DEFAULT NULL,
   cachehit      NUMBER    DEFAULT NULL,
   force         BOOLEAN   DEFAULT FALSE);
 
 
Finally I found good stats on one partition and manually set them on the current partition:

exec dbms_stats.set_table_stats(ownname => 'ARUL', tabname => 'ARUL_TEST', partname => 'ARULTEST_DP201313', numrows => 145667907, numblks => 568066, avgrlen => 150);

I scheduled a job that runs after the weekend gather completes and sets the stats for the next week's partitions, and this query has been running fine since. A sketch of what that job does follows.
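
(A sketch with hypothetical partition names; DBMS_STATS.GET_TABLE_STATS reads the stats that SET_TABLE_STATS then applies:)

declare
  l_numrows number;
  l_numblks number;
  l_avgrlen number;
begin
  -- read stats from a representative partition that has good stats
  dbms_stats.get_table_stats(
    ownname  => 'ARUL',
    tabname  => 'ARUL_TEST',
    partname => 'ARULTEST_DP201312',   -- hypothetical source partition
    numrows  => l_numrows,
    numblks  => l_numblks,
    avgrlen  => l_avgrlen);
  -- apply them to the new, still-empty daily partition
  dbms_stats.set_table_stats(
    ownname  => 'ARUL',
    tabname  => 'ARUL_TEST',
    partname => 'ARULTEST_DP201313',
    numrows  => l_numrows,
    numblks  => l_numblks,
    avgrlen  => l_avgrlen);
end;
/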

Thanks !!!

Oracle Total Recall – Flashback Data Archive - Oracle 11gR2 Feature

Oracle Total Recall helps keep historical data in the database. It tracks the changes that happen in the database and retains that data based on our retention period.

Oracle has provided many flashback features across earlier versions:

DB Version    Flashback Feature
----------    -------------------------------------------
Oracle 9i     Flashback Query
Oracle 10g    Flashback Version Query
Oracle 10g    Flashback Table
Oracle 10g    Flashback Database
Oracle 11g    Flashback Data Archive/Oracle Total Recall
Oracle 11g    Flashback Transaction Backout

Oracle Database 11g introduces Total Recall, based on the Flashback Data Archive feature, which transparently tracks changes to database tables in a highly secure and efficient manner. How is Total Recall different from the others? Let's see in this article.

We already have flashback features, so how is FDA better than the existing ones?

Flashback query provides the old data from the undo tablespace, and flashback depends on the init parameters below:

UNDO_RETENTION
DB_RECOVERY_FILE_DEST_SIZE
DB_RECOVERY_FILE_DEST

Suppose the flashback retention is 24 hours, so flashback keeps one day of flashback logs. If we need data from the last 48 hours, we do not get the old data and instead receive one of the errors below. FDA provides the solution: it gives access to old data based on each FDA table's retention.

SQL> SELECT … AS OF TIMESTAMP …

ORA-01466: unable to read data - table definition has changed
OR

ORA-08180: no snapshot found based on specified time
OR

ORA-01555: snapshot too old: rollback segment number 7 with
name "_SYSSMU7$" too small


How do we track historical data today?

1.     Application level: This is very complex, and keeping historical data with data integrity is very difficult. It is implemented per business requirements to track historical data. To my limited knowledge, this approach is rarely used, and only in some environments.

2.     Database level:

Database Triggers:

Triggers help track DB changes and store them in separate change tables. Here we can maintain the data very easily using partitions and purge the old data from the change tables based on our retention. However, these triggers impact database performance, and Oracle privileged users are able to change the historical data.

Redo Log Mining:

Extracting the redo logs into readable format, then creating and storing this data using a third-party tool or Oracle LogMiner, is very difficult, and managing the historical data is also very tough.

Flashback Data Archive or Oracle Total Recall:

Flashback Data Archive provides a complete solution for managing historical data.

Data Retention: FDA can be enabled on any table; there is no limit on how long the data can be kept, and it is retained based on business requirements. The data is stored in a tablespace. When a record exceeds the retention period, it is automatically purged from the FDA tables.

Easy to Implement: We can enable Flashback Data Archive for one or more tables at any time, with no application or database outage needed.

Easy to Access: We can access the historical data using flashback SQL queries as of any point in time, and we can also specify a specific interval.

Storage Maintenance: FDA automatically compresses and partitions the FDA internal history tables to optimize storage and performance.

Centralized Management: FDA provides a centralized, policy-based process that helps to easily group tables and set a common retention policy for the whole group.

Security: FDA internal history tables are read-only. Even Oracle privileged users are not allowed to perform DML operations against the FDA history tables.

Flashback Data Archive Architecture:

When FDA is enabled on a table, all transactions on the table have their respective undo records marked for archiving. The new FBDA background process sleeps, wakes up at self-tuned intervals (the default is 5 minutes), and processes the undo data marked for archival.

If the FBDA background process and its slaves are too busy, archiving may be performed inline, which significantly affects user response time.

FDA Supports

Oracle 11g Release 1 supports only ADD COLUMN DDL operations.

Oracle 11g Release 2 supports the DDL operations below.

1.     Adding, dropping, renaming and modifying a column
2.     Adding, dropping and renaming a constraint
3.     Dropping and truncating a partition or subpartition
4.     Renaming and truncating a table
5.     Partition and subpartition operations

Unsupported DDL operations (see the sketch after this list for a workaround):

1.     ALTER TABLE statements that move or exchange a partition or subpartition
2.     DROP TABLE statements
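
If such a DDL is unavoidable, one workaround (a sketch; note that disabling FDA on a table discards its archived history) is:

-- disable FDA on the table (the archived history is dropped)
alter table test no flashback archive;

-- ... perform the unsupported DDL ...

-- re-enable FDA; history tracking starts fresh from this point
alter table test flashback archive fda_troy;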

Flashback Data Archive Requirements:
  1. The FDA tablespace must use ASSM (Automatic Segment Space Management).
  2. Undo management must be AUTO.

Does Oracle Total Recall impact database performance?

The FBDA background process can spawn multiple parallel slave processes while DML statements run against the table, and it bulk-archives small transactions. So far, no one has reported FDA causing any performance impact on a database.

Test Case:

Reduced the undo retention from 900 seconds to 300 seconds

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1

SQL> alter system set undo_retention= 300
  2  ;

System altered.

SQL> show parameter undo_rete

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     300

Create a Separate tablespace for FDA

SQL> create tablespace fda_totalrecall
  2  datafile 'C:\APP\RAGA\ORADATA\TROY\fda_totalrecall01.dbf' size 500m;

Tablespace created.

Create Flashback archive and retention period is 2 years

SQL> create flashback archive fda_troy
  2  tablespace fda_totalrecall
  3  retention 2 year;

Flashback archive created.


SQL> select owner_name,flashback_archive_name,retention_in_days,create_time,last_purge_time from dba_flashback_archive;

OWNER_NAME  FLASHBACK_ARCHIVE_NAME    RETENTION_IN_DAYS CREATE_TIME                      LAST_PURGE_TIME
----------- ------------------------- ----------------- -------------------------------- --------------------------------
SYS         FDA_TROY                                730 04-FEB-13 05.45.56.000000000 AM  04-FEB-13 05.45.56.000000000 AM




Create test schema troy

SQL> create user troy
  2  identified by troy
  3  default tablespace USERS
  4  quota unlimited on USERS
  5  temporary tablespace temp;

User created.

SQL> grant connect,resource,dba to troy;

Grant succeeded.

Create test table under troy schema

SQL> conn troy
Enter password:
Connected.

SQL> create table test as select * from dba_objects;

Table created.

Alter the test table in FDA


SQL> alter table test flashback archive FDA_TROY;

Table altered.

SQL> select * from dba_flashback_archive_tables;

TABLE_NAME                     OWNER_NAME  FLASHBACK_ARCHIVE_NAME    ARCHIVE_TABLE_NAME                            STATUS
------------------------------ ----------- ------------------------- ----------------------------------------------------- --------
TEST                           TROY        FDA_TROY                  SYS_FBA_HIST_73104                            ENABLED


SQL> select object_id,owner,object_name,object_type from dba_objects where  object_name='TEST';

 OBJECT_ID OWNER                OBJECT_NAME     OBJECT_TYPE
---------- -------------------- --------------- -------------------
     73104 TROY                 TEST            TABLE


Version Query against undo tablespace

SQL> SELECT object_id,owner,object_name,object_type
  2  FROM TEST
  3  AS OF TIMESTAMP TO_TIMESTAMP('04022013 18:30:21','ddmmyyyy hh24:mi:ss')
  4  WHERE object_id=73104;
FROM TEST
     *
ERROR at line 2:
ORA-08180: no snapshot found based on specified time


SQL>  SELECT object_id,owner,object_name,object_type
  2    FROM TEST
  3    VERSIONS BETWEEN TIMESTAMP
  4    TO_TIMESTAMP('04022013 18:30:21','ddmmyyyy hh24:mi:ss') AND
  5    TO_TIMESTAMP('04022013 18:35:21','ddmmyyyy hh24:mi:ss')
  6    WHERE object_id=73104;
  FROM TEST
       *
ERROR at line 2:
ORA-01466: unable to read data - table definition has changed

SQL> select to_char(sysdate,'ddmmyyyy hh24:mi:ss') ddate from dual;

DDATE
-----------------
04022013 06:49:21

SQL> delete from test;

71772 rows deleted.

SQL> commit;

Commit complete.

SQL> select count(*) from test;

  COUNT(*)
----------
         0


Retrieve the Data from FDA

SQL> select count(*) from test as of timestamp to_timestamp('04022013 06:49:21','ddmmyyyy hh24:mi:ss');

  COUNT(*)
----------
     71772


SQL> select object_id,owner,object_name,object_type from test as of timestamp to_timestamp('04022013 06:49:21','ddmmyyyy hh24:mi:ss') where object_id=73104;

 OBJECT_ID OWNER                OBJECT_NAME     OBJECT_TYPE
---------- -------------------- --------------- -------------------
     73104 TROY                 TEST            TABLE

Administration:

FLASHBACK ARCHIVE ADMINISTER – new system privilege for managing FDAs.
FLASHBACK ARCHIVE – new object privilege for enabling flashback data archive on a table. Both can be granted as shown below.
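
(A sketch of the grants, using the troy user and the fda_troy archive from the test case above:)

-- system privilege: create, alter and drop flashback archives
grant flashback archive administer to troy;

-- object privilege: allow troy to enable tables for this archive
grant flashback archive on fda_troy to troy;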

Reference: Oracle documentation and the Oracle Total Recall white paper.

I hope this article helped you. I am expecting your suggestions/feedback.
 



