Creating Oracle HCC (Hybrid Columnar Compression) on your Laptop

In this Blog post we will take our newly created Oracle ZFS Storage VMware Simulator for a test ride by trying to create some Oracle HCC (Hybrid Columnar Compression) data.

If you don’t have access to an Oracle storage platform, you may want to visit my previous posts in the series: Part I or Part II.

Two types of Oracle HCC Compression

Warehouse (Query) compression

For warehouse compression, the compression algorithm has been optimized for query performance, specifically for scan-oriented queries used heavily in data warehouses. This approach is ideal for tables that will be queried frequently.

Archive compression

With archive compression, the compression algorithm has been optimized for maximum storage savings. This approach is ideal for tables that are infrequently accessed.

Note that archive compression may consume significantly more CPU than warehouse compression when compressing or decompressing data.

Achievable compression ratios vary depending upon the compression type and customer data; however, Oracle reports seeing the following ratios:

  • Query High ~10:1
  • Query Low ~6:1
  • Archive Low ~10:1
  • Archive High ~15:1
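To put those ratios in context, here is a quick back-of-envelope sketch (purely illustrative; the 1024 GB table size is made up, only the ratios come from Oracle's figures) of the storage each compression level would need:

```python
# Oracle's reported HCC compression ratios (approximate, workload dependent).
REPORTED_RATIOS = {
    "query low": 6,
    "query high": 10,
    "archive low": 10,
    "archive high": 15,
}

def estimated_size_gb(uncompressed_gb, ratio):
    """Rough size of a segment after compression at the given ratio."""
    return uncompressed_gb / ratio

# e.g. a hypothetical 1024 GB table at each level:
for level, ratio in REPORTED_RATIOS.items():
    print(f"{level:13s} ~{estimated_size_gb(1024, ratio):.1f} GB")
```

Real-world results depend heavily on the data, as the test below shows.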

Let’s see how we get on.

The Test

For this test I am going to use Dominic Giles’ SwingBench ‘customers’ table from the Sales Order Entry (SOE) schema as the source of my test data. The SwingBench wizard can be accessed from the Java application or the command line; as you can see below, I will be using the CLI.

~/swingbench/bin/oewizard -allindexes -cl -create -cs //localhost/PSTG -dba system -dbap oracle -ts soe -nopart -p soe -scale 4 -tc 8 -u soe -v

Create Test Tables

CONNECT pure/pure;

Prompt No Compression
create table customers as select * from soe.customers;
commit;

Prompt Basic Method
create table customers_classic compress as select * from customers where 1=2;
alter table customers_classic nologging;
insert /*+ append */ into customers_classic select * from customers;
commit;

Prompt OLTP Method
create table customers_oltp compress for oltp as select * from customers where 1=2;
insert into customers_oltp select * from customers;
commit;

Prompt Query Low
create table customers_query_low compress for query low as select * from customers where 1=2;
alter table customers_query_low nologging;
insert /*+ append */ into customers_query_low select * from customers;
commit;

Prompt Query High
create table customers_query_high compress for query high as select * from customers where 1=2;
alter table customers_query_high nologging;
insert /*+ append */ into customers_query_high select * from customers;
commit;

Prompt Archive Low
create table customers_archive_low compress for archive low as select * from customers where 1=2;
alter table customers_archive_low nologging;
insert /*+ append */ into customers_archive_low select * from customers;
commit;

Prompt Archive High
create table customers_archive_high compress for archive high as select * from customers where 1=2;
alter table customers_archive_high nologging;
insert /*+ append */ into customers_archive_high select * from customers;
commit;

If you experience the infamous ORA-64307 error message and you are using the Oracle ZFS Simulator, revisit my Part I & Part II Blog posts and check your configuration.

ORA-64307

Results

Ok, let’s see what compression ratios we achieved.

set feed off
set pages 100
col TABLE_NAME heading 'Table Name' format a25
col OWNER heading 'Owner' format a20
col COMPRESS_FOR heading 'Compress For' format a20
col COMPRESSION heading 'Compression' format a20
set lines 100
set echo off
set timing on
Prompt Tables using compression

SELECT SEGMENT_NAME, COMPRESSION, COMPRESS_FOR, SUM(BYTES)/1024/1024 MB
FROM DBA_SEGMENTS DS
WHERE DS.OWNER='PURE'
GROUP BY SEGMENT_NAME, COMPRESSION, COMPRESS_FOR;

HCC_compression

The achieved ratios are maybe not as high as expected, but the table was pretty small at 512MB, so my next project will be to repeat the tests with a larger data set and compare results.

However, the compression ratios are secondary; what this test did demonstrate is the ability to experiment with and test out HCC data without access to an Oracle Engineered System.

Table Name               Achieved Ratio
CUSTOMERS                1 : 1
CUSTOMERS_CLASSIC        1.31 : 1
CUSTOMERS_OLTP           1.10 : 1
CUSTOMERS_QUERY_LOW      2.29 : 1
CUSTOMERS_QUERY_HIGH     3.37 : 1
CUSTOMERS_ARCHIVE_LOW    4.00 : 1
CUSTOMERS_ARCHIVE_HIGH   5.33 : 1
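For reference, each achieved ratio is simply the uncompressed segment size divided by the compressed segment size as reported by DBA_SEGMENTS. A small sketch of the arithmetic (the 512 MB baseline is from this test; the 128 MB compressed size is an assumed figure for illustration):

```python
def achieved_ratio(uncompressed_mb, compressed_mb):
    """Compression ratio in the 'N : 1' form used above, rounded to 2 dp."""
    return round(uncompressed_mb / compressed_mb, 2)

# e.g. the 512 MB CUSTOMERS baseline against a hypothetical 128 MB
# ARCHIVE LOW segment gives a 4.0 : 1 ratio.
ratio = achieved_ratio(512, 128)
```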

Getting started with the Oracle ZFS Storage VMware Simulator (Part II)

In this Blog post I will share how your newly created Oracle ZFS Storage VMware Simulator can be used to deliver storage for an Oracle 12c database via dNFS and also support HCC (Hybrid Columnar Compression).

Hopefully you have followed Part I of my Blog post on Getting Started with the Oracle ZFS Storage VMware Simulator and already have a working Virtual Machine; if not, now is probably a good time to visit Part I and build the simulator.

Oracle dNFS

If you have not set up or used Oracle dNFS before, you may want to check out a couple of my previous posts, Using Oracle dNFS Multi-Path and Oracle dNFS Throughput Testing, as these provide some useful background and additional information.

On my Oracle 12c Linux database server I have created a directory for the mount point and an oranfstab file in $ORACLE_HOME/dbs:

server: zfsa
local: 192.168.56.10  path: 172.16.148.133
nfs_version: nfsv3
export: /export/oradata   mount: /u02/app/oracle/oradata

$ORACLE_HOME/dbs/oranfstab

The parameters used in my oranfstab are described below:

server
The NFS server name; this can be anything, as it is used for information rather than any network lookups.

local
Up to four paths on the database host, specified by IP address or by name.

path
Up to four network paths to the NFS server, specified either by IP address, or by name.

export
The exported path from the NFS server.

mount
The corresponding local mount point for the exported volume.

nfs_version
Specifies the NFS protocol version used by the Direct NFS Client. Possible values are NFSv3, NFSv4, NFSv4.1, and pNFS; the default version is NFSv3. If you select NFSv4.x you must explicitly set nfs_version in oranfstab, and you should specify nfs_version as pNFS if you want to use Direct NFS with Parallel NFS.
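One point worth noting: even with dNFS, the share still needs a conventional kernel NFS mount so the database can create and open files; dNFS then bypasses the kernel client for I/O. A hypothetical /etc/fstab entry matching the oranfstab above might look like the following (the mount options are the commonly recommended Oracle-on-NFS settings, not taken from this post; adjust host names and paths to your own setup):

```
zfsa:/export/oradata  /u02/app/oracle/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```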

Verify Oracle dNFS Usage

Restart your Oracle 12c database and view the Oracle alert log; if you have successfully configured your environment you should see Oracle reporting that the dNFS library is in use, along with its version.

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

If you are not seeing any reference to the use of Oracle Direct NFS you may need to enable it using the steps detailed here.
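For reference, on Linux the usual way to enable the dNFS library is to relink the Oracle executable against the dNFS ODM library (a sketch of the standard relink step; run as the oracle software owner with the database shut down, paths assume a standard 12c Oracle home):

```shell
# Relink the Oracle binary to turn Direct NFS on, then restart the instance.
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
```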

We should also be able to see that we are using a single channel, along with the IP addresses and the optional server name from our oranfstab.

Direct NFS: channel id [0] path [172.16.148.133] to filer [zfsa] via local [192.168.56.10] is UP

From the alert log file we can also confirm that Oracle is using our oranfstab file rather than /etc/mtab; if your oranfstab is not correct you will see ‘defined in mtab’ rather than ‘defined in oranfstab’, in which case it is time to go back and fix it.

Direct NFS: attempting to mount /export/oradata on filer zfsa defined in oranfstab

Additionally, you can query the v$dnfs_servers view to confirm the configuration.

SQL> COLUMN svrname FORMAT A20
SQL> COLUMN dirname FORMAT A50
SQL> SELECT svrname, dirname, nfsversion FROM v$dnfs_servers;

SVRNAME              DIRNAME                                            NFSVERSION
-------------------- -------------------------------------------------- ----------
zfsa                 /export/oradata                                    NFSv3.0

SNMP (Simple Network Management Protocol)

Finally, before we can start to use Oracle HCC (Hybrid Columnar Compression) within our Oracle 12c database we need to check that our simulator has been configured to use SNMP.

From our Oracle ZFS Storage VMware simulator check the status of services.

SNMP3

And if green, use snmpget to check the response from our simulator, providing the IP address and the Oracle MIB (Management Information Base) OID below.

snmpget -v1 -c public 172.16.148.133 1.3.6.1.4.1.42.2.225.1.4.2.0

If all is OK, snmpget should return a Sun Storage string; this is required to allow us to use the HCC features, which are limited to Oracle Exadata and Oracle storage platforms.

SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: “Sun Storage 7000”
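Just to illustrate what that reply means, a small helper (entirely my own sketch, not how Oracle itself performs the check; the function name is made up) could parse the snmpget output and confirm the product string:

```python
def hcc_eligible(snmpget_output: str) -> bool:
    """Return True if an snmpget reply carries a 'Sun Storage' product string."""
    marker = "STRING:"
    if marker not in snmpget_output:
        return False  # timeouts / errors carry no STRING value
    value = snmpget_output.split(marker, 1)[1].strip().strip('"')
    return value.startswith("Sun Storage")

reply = 'SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: "Sun Storage 7000"'
```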

If you have successfully built your simulator and configured your Oracle 12c environment you are now ready to try out some of the HCC features.

In the next Blog post we will attempt to create database objects within and outside of our Oracle ZFS Storage VMware simulator to check our Oracle 12c database is able to use HCC data.

Getting started with the Oracle ZFS Storage VMware Simulator (Part I)

If you’re like me, you may sometimes need to do some research work with Oracle HCC (Hybrid Columnar Compression) but don’t always have access to an Oracle Exadata machine or ZFS Appliance; in that case the ZFSA simulator may be your saviour.

On my Mac I use both VirtualBox and VMware Fusion, but for this Blog post I am going to be using VMware Fusion 8.5.10.

Downloading the OVA

You can download the Oracle ZFS Storage Appliance Simulator software by following the link here.

ZFS Storage Appliance Home

Once you have accepted the licence agreement you will have the option to download either the VirtualBox or VMware version.

downloads

Once the download is complete you should have an OS8.7VMware.ova file available to use.

Installing the Virtual Machine

From VMware Fusion navigate to File -> Import and select your OVA file using the ‘Choose File…’ finder.

OVA

Give the VM a name and start the import.

Importing

Once imported you will be presented with a summary screen; if you have spare memory you could consider increasing the allocated memory to 4GB by clicking ‘Customise Settings’, otherwise click ‘Finish’.

Finish

Boot

Watch the console whilst the VM boots; once it is up, provide a Host Name, DNS Domain name, IP Address, IP Netmask, Default Router and DNS Server, as well as a Password for your ‘root’ Super-User.

 

zfsa_browser

Once the initial configuration is complete you can use your browser to configure your ZFS Storage Appliance simulator using the link in the console and the password you just provided.

login

Welcome

If all has gone well you should be presented with the Oracle ZFS Storage Virtual Machine ‘Welcome’ page; click ‘START’ to begin the configuration.

Step 1

Hit ‘COMMIT’ to use the default network configuration.

Step1

Step 2

Click ‘COMMIT’ to use the DNS values provided during the initial boot.

Step2

Step 3

Again, hit ‘COMMIT’ to use the default NTP settings.

Step3

Step 4

And finally, hit ‘COMMIT’, to take the default Name Services.

Step4

Step 5

We now set up our Storage Pool by clicking on the ‘+’.

Step5

PoolName

allocate1

Select the most appropriate Storage Profile for your use; as I will only be using this on my MacBook Pro I will select ‘Striped’, since data protection is not a concern but I want maximum Performance and Capacity.

storageProfile

The ZFS Appliance will now present a Storage Profile summary screen.

Step5_1

Registration & Support

Enter your MOS details or click ‘LATER’ if you prefer, and confirm.

Support

We can now configure any required Services, LUNs and/or filesystems.

Summary

NFS Filesystem

FileSystems

Click on the ‘+’ to create a new Filesystem (NFS Share), provide Name, User, Group and Permissions. e.g.

oradata

Shares2

As I am going to be using this share for my Oracle 12c database I need to configure the storage to use the same block size as my database.

Select the ‘General’ tab to gain access to the ‘default’ Project settings, and update the Database record size to 8K in the inherited properties section.

General

SNMP (Simple Network Management Protocol)

As I plan to use the ZFS Storage VM to allow my Oracle 12c database to use HCC (Hybrid Columnar Compression) I will configure SNMP and Enable the service.

This can be found hidden away under ‘Configuration’ -> ‘Services’

SysServices

SNMP1

The System Service should now show the SNMP service as Online.

SNMP3

In Part II I will share how our newly created NFS filesystem can be mounted on a Linux client via Oracle dNFS and used by an Oracle 12c database allowing HCC data management.