In this blog we will take a newly created Oracle ZFS VMware simulator for a test ride by trying to create some Oracle HCC (Hybrid Columnar Compression) data.
Two types of Oracle HCC Compression
Warehouse (Query) compression
For warehouse compression, the compression algorithm has been optimized for query performance, specifically for scan-oriented queries used heavily in data warehouses. This approach is ideal for tables that will be queried frequently.
Archive compression
With archive compression, the compression algorithm has been optimized for maximum storage savings. This approach is ideal for tables that are accessed infrequently.
Note that when compressing or decompressing data, archive compression may consume significantly more CPU than warehouse compression.
Achievable compression ratios vary depending on the compression type and the customer data; however, Oracle reports seeing the following ratios:
- Query High ~10:1
- Query Low ~6:1
- Archive Low ~10:1
- Archive High ~15:1
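As a rough sanity check, Oracle's reported ratios translate into the following expected sizes for a 512 MB table like the one used later in this test. This is illustrative arithmetic only; real results depend heavily on the data:

```sql
-- Expected size of a 512 MB table at Oracle's reported HCC ratios
-- (simple arithmetic, not a measurement)
SELECT 'Query Low'  AS compression_type, ROUND(512/6)  AS expected_mb FROM dual
UNION ALL SELECT 'Query High',   ROUND(512/10) FROM dual
UNION ALL SELECT 'Archive Low',  ROUND(512/10) FROM dual
UNION ALL SELECT 'Archive High', ROUND(512/15) FROM dual;
```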
Let’s see how we get on.
For this test I am going to use Dominic Giles' SwingBench 'customers' table from the Sales Order Entry (SOE) schema as the source of my test data. The SwingBench wizard can be accessed from the Java application or the command line; as you can see below, I will be using the CLI.
~/swingbench/bin/oewizard -allindexes -cl -create -cs //localhost/PSTG -dba system -dbap oracle -ts soe -nopart -p soe -scale 4 -tc 8 -u soe -v
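Once the wizard completes, it is worth confirming the data is in place before creating the test tables. A quick sanity check (the exact row count will vary with the -scale setting):

```sql
-- Confirm the SOE schema was populated
-- (row count depends on the oewizard -scale value)
SELECT COUNT(*) AS row_count FROM soe.customers;

-- Starting segment size of the table we will be compressing
SELECT ROUND(SUM(bytes)/1024/1024) AS mb
FROM   dba_segments
WHERE  owner = 'SOE'
AND    segment_name = 'CUSTOMERS';
```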
Create Test Tables
Prompt No Compression
create table customers as select * from soe.customers;
Prompt Basic Method
create table customers_classic compress as select * from customers where 1=2;
alter table customers_classic nologging;
insert /*+ append */ into customers_classic select * from customers;
Prompt OLTP Method
create table customers_oltp compress for oltp as select * from customers where 1=2;
insert into customers_oltp select * from customers;
Prompt Query Low
create table customers_query_low compress for query low as select * from customers where 1=2;
alter table customers_query_low nologging;
insert /*+ append */ into customers_query_low select * from customers;
Prompt Query High
create table customers_query_high compress for query high as select * from customers where 1=2;
alter table customers_query_high nologging;
insert /*+ append */ into customers_query_high select * from customers;
Prompt Archive Low
create table customers_archive_low compress for archive low as select * from customers where 1=2;
alter table customers_archive_low nologging;
insert /*+ append */ into customers_archive_low select * from customers;
Prompt Archive High
create table customers_archive_high compress for archive high as select * from customers where 1=2;
alter table customers_archive_high nologging;
insert /*+ append */ into customers_archive_high select * from customers;
Ok, let’s see what compression ratios we achieved.
set feed off
set pages 100
col TABLE_NAME heading 'Table Name' format a25
col OWNER heading 'Owner' format a20
col COMPRESS_FOR heading 'Compress For' format a20
col COMPRESSION heading 'Compression' format a20
set lines 100
set echo off
set timing on
Prompt Tables using compression
SELECT DS.SEGMENT_NAME, DT.COMPRESSION, DT.COMPRESS_FOR, SUM(DS.BYTES)/1024/1024 MB
FROM DBA_SEGMENTS DS
JOIN DBA_TABLES DT
  ON DT.OWNER = DS.OWNER AND DT.TABLE_NAME = DS.SEGMENT_NAME
WHERE DS.SEGMENT_NAME LIKE 'CUSTOMERS%'
GROUP BY DS.SEGMENT_NAME, DT.COMPRESSION, DT.COMPRESS_FOR
ORDER BY MB DESC;
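To turn those segment sizes into ratios, each compressed copy can be compared against the uncompressed CUSTOMERS table. A sketch, assuming all of the test tables live in the current schema:

```sql
-- Ratio of each compressed copy vs the uncompressed CUSTOMERS table
SELECT s.segment_name AS "Table Name",
       ROUND(base.mb / (SUM(s.bytes)/1024/1024), 2) || ' : 1' AS "Achieved Ratio"
FROM   user_segments s,
       (SELECT SUM(bytes)/1024/1024 mb
        FROM   user_segments
        WHERE  segment_name = 'CUSTOMERS') base
WHERE  s.segment_name LIKE 'CUSTOMERS%'
GROUP  BY s.segment_name, base.mb
ORDER  BY SUM(s.bytes) DESC;
```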
The achieved ratios are maybe not as high as expected, but the table was pretty small at 512 MB, so my next project will be to repeat the tests with a larger data set and compare results.
However, the compression ratios are secondary; what this test did demonstrate is the ability to experiment and test out HCC data without access to an Oracle Engineered System.
| Table Name | Achieved Ratio |
|---|---|
| CUSTOMERS | 1 : 1 |
| CUSTOMERS_CLASSIC | 1.31 : 1 |
| CUSTOMERS_OLTP | 1.10 : 1 |
| CUSTOMERS_QUERY_LOW | 2.29 : 1 |
| CUSTOMERS_QUERY_HIGH | 3.37 : 1 |
| CUSTOMERS_ARCHIVE_LOW | 4.00 : 1 |
| CUSTOMERS_ARCHIVE_HIGH | 5.33 : 1 |