Recently I ran some test cases with Oracle 11g compression on 11.1.0.7 (Windows OS), and I would like to share the results with you all.
A quick overview of compression:
Oracle table compression for OLTP operations:
1. Oracle's compression algorithm works by eliminating duplicate values within a database block, across multiple columns.
2. Compressed blocks contain a structure called a symbol table that maintains compression metadata.
3. When a block is compressed, duplicate values are eliminated by first adding a single copy of the duplicate value to the symbol table.
4. Each duplicate value is then replaced by a short reference to the appropriate entry in the symbol table.
5. Oracle table compression does not support tables with more than 255 columns or with LONG data type columns.
Benefits:
· Up to 3x to 4x storage savings, depending on the nature of the data being compressed. (A test table achieved a 4:1 ratio.)
· Less I/O – tables reside in fewer blocks, so full table scans and index range scans can retrieve the rows with fewer disk I/Os.
· Buffer cache efficiency – fewer blocks need to be read for the same data (less logical and physical I/O).
· Query efficiency – when running a query (SELECT), Oracle reads the compressed block directly, without having to uncompress it first, so there is no performance degradation for queries.
There is a minimal performance overhead (CPU overhead) during write operations (DML) on a compressed table.
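For reference, OLTP compression is enabled with the COMPRESS FOR ALL OPERATIONS clause in 11.1 (renamed COMPRESS FOR OLTP in 11.2). A minimal sketch, using a hypothetical demo table:

```sql
-- Hypothetical demo table; COMPRESS FOR ALL OPERATIONS enables OLTP
-- compression, so blocks stay compressed through conventional DML
CREATE TABLE am_demo (
  object_id   NUMBER,
  owner       VARCHAR2(30),
  object_name VARCHAR2(128)
) COMPRESS FOR ALL OPERATIONS;
```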
######################### One more Sample test
Note: we flushed the database buffer cache and the result cache before each and every test.
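The flush can be done with commands along these lines (they require the ALTER SYSTEM privilege and EXECUTE on DBMS_RESULT_CACHE):

```sql
-- Clear the buffer cache and result cache between test runs (SQL*Plus)
ALTER SYSTEM FLUSH BUFFER_CACHE;
EXEC DBMS_RESULT_CACHE.FLUSH;
```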
##### Create 2 tables - a compressed table and an uncompressed table #####
SQL> create table Arul.AM_UNCOMPRESS as select * from AM_TESTTABLE where 1=2;
Table created.
SQL> create table Arul.AM_COMPRESS as select * from AM_TESTTABLE where 1=2;
Table created.
##### Enable compression on the second table, then insert some records into both #####
SQL> alter table Arul.AM_COMPRESS compress for all operations;
Table altered.
SQL> insert into Arul.AM_UNCOMPRESS (select * from AM_TESTTABLE);
61487 rows created.
SQL> insert into Arul.AM_COMPRESS (select * from AM_TESTTABLE);
61487 rows created.
SQL> commit;
Commit complete.
##### Insert some more records into both tables #####
SQL> insert into Arul.AM_UNCOMPRESS (select * from AM_TESTTABLE);
61487 rows created.
SQL> insert into Arul.AM_COMPRESS (select * from AM_TESTTABLE);
61487 rows created.
SQL> commit;
Commit complete.
##### Table compression status #####
OWNER TABLE_NAME COMPRESS COMPRESS_FOR
------------------------------ ------------------------------ -------- ------------
RAJA AM_UNCOMPRESS DISABLED
RAJA AM_COMPRESS ENABLED OLTP
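The status listing above can be obtained with a query along these lines against DBA_TABLES:

```sql
-- COMPRESSION and COMPRESS_FOR report the compression status per table
SELECT owner, table_name, compression, compress_for
  FROM dba_tables
 WHERE table_name IN ('AM_COMPRESS', 'AM_UNCOMPRESS');
```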
-- Segment size (MB) and number of blocks for the compressed and uncompressed tables
OWNER SEGMENT_NAME BLOCKS SUM(BYTES/1024/1024)
-------------------- -------------------- ---------- --------------------
RAJA AM_COMPRESS 320 5
RAJA AM_UNCOMPRESS 896 14
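The segment figures above can be obtained with a query along these lines against DBA_SEGMENTS:

```sql
-- Blocks and size in MB per segment (grouping mirrors the output above)
SELECT owner, segment_name, blocks, SUM(bytes/1024/1024)
  FROM dba_segments
 WHERE segment_name IN ('AM_COMPRESS', 'AM_UNCOMPRESS')
 GROUP BY owner, segment_name, blocks;
```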
Comments:
1. The compressed table occupies far fewer blocks than the uncompressed table (roughly a 1:3 ratio).
2. The compressed table's segment size is correspondingly smaller (1:3 ratio).
##### Elapsed time for a SELECT query against the compressed and uncompressed tables #####
SQL> select * from ARUL.AM_unCOMPRESS where object_id < 1000;
1860 rows selected.
Elapsed: 00:00:00.62
SQL> select * from ARUL.AM_COMPRESS where object_id < 1000;
1860 rows selected.
Elapsed: 00:00:00.28
Comments:
· The elapsed time for the compressed table is less than half that of the uncompressed table (roughly a 1:2 ratio).
##### Statistics for a SELECT query against the compressed and uncompressed tables #####
--UNCOMPRESS Table
16:17:51 SQL> select * from ARUL.AM_unCOMPRESS where object_id < 1000;
1860 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3732907867
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 2070 | 206 (0)| 01:54:56 |
|* 1 | TABLE ACCESS FULL| AM_UNCOMPRESS | 10 | 2070 | 206 (0)| 01:54:56 |
-----------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OBJECT_ID"<1000)
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
5 recursive calls
0 db block gets
1073 consistent gets
878 physical reads
0 redo size
95285 bytes sent via SQL*Net to client
1873 bytes received via SQL*Net from client
125 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1860 rows processed
--COMPRESS Table
16:18:24 SQL> select * from ARUL.AM_COMPRESS where object_id < 1000;
1860 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 1321419037
---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2603 | 526K| 74 (0)| 00:41:18 |
|* 1 | TABLE ACCESS FULL| AM_COMPRESS | 2603 | 526K| 74 (0)| 00:41:18 |
---------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OBJECT_ID"<1000)
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
5 recursive calls
0 db block gets
507 consistent gets
311 physical reads
0 redo size
95192 bytes sent via SQL*Net to client
1873 bytes received via SQL*Net from client
125 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1860 rows processed
Comments:
· Physical reads are far lower when accessing the compressed table than the uncompressed table (311 vs. 878).
· The optimizer cost and elapsed time are also much lower than for the uncompressed table.
################ DML TEST CASE for INSERT statement ####################################################
##### Elapsed time for an INSERT statement against the compressed and uncompressed tables #####
16:21:06 SQL> insert into Arul.AM_UNCOMPRESS (select * from Arul.AM_UNCOMPRESS);
122974 rows created.
Elapsed: 00:00:00.56
16:22:29 SQL> insert into Arul.AM_COMPRESS (select * from Arul.AM_UNCOMPRESS);
122974 rows created.
Elapsed: 00:00:01.80
Comments:
· For INSERTs, the uncompressed table performs better than the compressed table (0.56 s vs. 1.80 s).
################### DML TEST CASE for UPDATE Statement ####################################################
##### Elapsed time for an UPDATE statement against the compressed and uncompressed tables #####
16:31:51 SQL> update Arul.AM_COMPRESS set owner='AM' where owner='SYS';
248200 rows updated.
Elapsed: 00:00:13.60
16:32:12 SQL> commit;
Commit complete.
16:32:20 SQL> update Arul.AM_UNCOMPRESS set owner='AM' where owner='SYS';
248200 rows updated.
Elapsed: 00:00:03.31
16:32:35 SQL> commit;
Commit complete.
Comments:
· DML operations on the uncompressed table perform better than on the compressed table, based on elapsed time (3.31 s vs. 13.60 s for the UPDATE).
Reference: Don Burleson's blog and a few other sites.