Getting started with the Oracle ZFS Storage VMware Simulator (Part II)

In this Blog post I will share how your newly created Oracle ZFS Storage VMware Simulator can be used to deliver storage for an Oracle 12c database via dNFS and also support HCC (Hybrid Columnar Compression).

Hopefully you have followed Part I of my Blog post on Getting Started with the Oracle ZFS Storage VMware Simulator and already have a working Virtual Machine; if not, now is probably a good time to visit Part I and build the simulator.

Oracle dNFS

If you have not set up or used Oracle dNFS before, you may want to check out a couple of my previous posts, Using Oracle dNFS Multi-Path and Oracle dNFS Throughput Testing, as these provide some useful background and additional information.

On my Oracle 12c Linux database server I have created a directory for the mount point and an oranfstab file in $ORACLE_HOME/dbs thus:

server: zfsa
local: 192.168.56.10  path: 172.16.148.133
nfs_version: nfsv3
export: /export/oradata   mount: /u02/app/oracle/oradata

$ORACLE_HOME/dbs/oranfstab
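Note that Oracle dNFS still expects the filesystem to be present as a regular kernel NFS mount. A minimal sketch of creating the mount point and mounting the share follows; the mount options shown are common Oracle-on-Linux choices rather than a definitive list, so adjust for your environment:

mkdir -p /u02/app/oracle/oradata
mount -t nfs -o rw,bg,hard,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
  zfsa:/export/oradata /u02/app/oracle/oradata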

The parameters used in my oranfstab are described below:

server
The NFS server name; this can be anything, as it is used for information rather than for any network lookups.

local
Up to four paths on the database host, specified by IP address or by name.

path
Up to four network paths to the NFS server, specified either by IP address, or by name.

export
The exported path from the NFS server.

mount
The corresponding local mount point for the exported volume.

nfs_version
Specifies the NFS protocol version used by the Direct NFS Client. Possible values are NFSv3, NFSv4, NFSv4.1, and pNFS; the default is NFSv3. If you select NFSv4.x you must configure the nfs_version value in oranfstab, and you should specify nfs_version as pNFS if you want to use Direct NFS with Parallel NFS.
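As the local and path parameters both allow up to four entries, a multi-path oranfstab simply repeats the pairs under the same server entry. A sketch with a made-up second pair of addresses:

server: zfsa
local: 192.168.56.10  path: 172.16.148.133
local: 192.168.56.11  path: 172.16.148.134
nfs_version: nfsv3
export: /export/oradata   mount: /u02/app/oracle/oradata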

Verify Oracle dNFS Usage

Restart your Oracle 12c database and view the Oracle alert log; if you have successfully configured your environment you should see Oracle reporting that the dNFS library is in use, along with its version.

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

If you are not seeing any reference to the use of Oracle Direct NFS you may need to enable it using the steps detailed here.
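In short, on Linux the Direct NFS ODM library is switched on by relinking the Oracle binary; run the following as the oracle software owner with the database shut down:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on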

We should also be able to see that we are using a single channel, IP addresses and the optional server name from our oranfstab.

Direct NFS: channel id [0] path [172.16.148.133] to filer [zfsa] via local [192.168.56.10] is UP

From the alert log file we can also confirm that Oracle is using our oranfstab file rather than /etc/mtab; if your oranfstab is not correct you will see ‘defined in mtab’ rather than ‘defined in oranfstab’, in which case it’s time to go back and fix it.

Direct NFS: attempting to mount /export/oradata on filer zfsa defined in oranfstab

Additionally, you can query the database using the v$dnfs_servers view to confirm the configuration.

SQL> COLUMN svrname FORMAT A20
SQL> COLUMN dirname FORMAT A50
SQL> SELECT svrname, dirname, nfsversion FROM v$dnfs_servers;

SVRNAME              DIRNAME                                            NFSVERSION
-------------------- -------------------------------------------------- ----------
zfsa                 /export/oradata                                    NFSv3.0
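The channel detail reported in the alert log can be cross-checked in a similar way; a quick query, assuming the standard v$dnfs_channels columns:

SQL> SELECT svrname, path, ch_id FROM v$dnfs_channels;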

SNMP (Simple Network Management Protocol)

Finally, before we can start to use Oracle HCC (Hybrid Columnar Compression) within our Oracle 12c database we need to check that our simulator has been configured to use SNMP.

From our Oracle ZFS Storage VMware simulator, check the status of services; the SNMP service should be shown as Online (green).

If it is, use snmpget to check the response from our simulator, providing the IP address and the Oracle MIB (Management Information Base) OID shown below.

snmpget -v1 -c public 172.16.148.133 1.3.6.1.4.1.42.2.225.1.4.2.0

If all is OK, snmpget should return a ‘Sun Storage’ string; this is required to allow us to use the HCC features, which are limited to Oracle Exadata and Oracle storage platforms.

SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: “Sun Storage 7000”

If you have successfully built your simulator and configured your Oracle 12c environment you are now ready to try out some of the HCC features.
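As a small taste of what Part III will cover, HCC table compression uses the standard COMPRESS FOR syntax; the sales table below is just a placeholder for one of your own:

SQL> CREATE TABLE sales_hcc COMPRESS FOR QUERY HIGH
  2  AS SELECT * FROM sales;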

In the next Blog post we will attempt to create database objects within and outside of our Oracle ZFS Storage VMware simulator to check our Oracle 12c database is able to use HCC data.

Getting started with the Oracle ZFS Storage VMware Simulator (Part I)

If you’re like me, you may sometimes need to do some research work with Oracle HCC (Hybrid Columnar Compression) but don’t always have access to an Oracle Exadata machine or ZFS Appliance; in that case, the ZFSA simulator may be your saviour.

On my Mac I use both VirtualBox and VMware Fusion, but for this Blog post I am going to be using VMware Fusion 8.5.10.

Downloading the OVA

You can download the Oracle ZFS Storage Appliance Simulator software by following the link here.

Once you have accepted the licence agreement you will have the option to download either the VirtualBox or VMware version.

Once the download is complete you should have an OS8.7VMware.ova file available to use.

Installing the Virtual Machine

From VMware Fusion navigate to File -> Import and select your OVA file using the ‘Choose File…’ finder.

Give the VM a name and start the import.

Once imported you will be presented with a summary screen; if you have spare memory you could consider increasing the allocated memory to 4GB by clicking on ‘Customise Settings’, otherwise click ‘Finish’.

Boot

Watch the console whilst the VM boots and, once it is up, provide a Host Name, DNS Domain name, IP Address, IP Netmask, Default Router and DNS Server, as well as a Password for your ‘root’ Super-User.

 

Once the initial configuration is complete you can use your browser to configure your ZFS Storage Appliance simulator using the link in the console and the password you just provided.
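If you miss the link on the console, the BUI (Browser User Interface) is normally served on port 215; using the appliance IP address from my Part II oranfstab as an example, it looks like this:

https://172.16.148.133:215/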

Welcome

If all has gone well you should be presented with the Oracle ZFS Storage Virtual Machine ‘Welcome’ page; click ‘START’ to begin the configuration.

Step 1

Hit ‘COMMIT’ to use the default network configuration.

Step 2

Click ‘COMMIT’ to use the DNS values provided during the initial boot.

Step 3

Again, hit ‘COMMIT’ to use the default NTP settings.

Step 4

And finally, hit ‘COMMIT’ to take the default Name Services.

Step 5

We now set up our Storage Pool by clicking on the ‘+’.

Select the most appropriate Storage Profile for your use. As I will only be using this on my MacBook Pro I will select ‘Striped’; data protection is not a concern, but I do want maximum Performance and Capacity.

The ZFS Appliance will now present a Storage Profile summary screen.

Registration & Support

Enter your MOS details or click ‘LATER’ if you prefer, and confirm.

We can now configure any required Services, LUNs and / or filesystems.

NFS Filesystem

Click on the ‘+’ to create a new Filesystem (NFS Share) and provide a Name, User, Group and Permissions, e.g.

oradata

As I am going to be using this share for my Oracle 12c database I need to configure the storage to use the same block size as my database.

Select the ‘General’ tab to gain access to the ‘default’ Project settings, and update the Database record size to 8K in the inherited properties section.
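The same change can be made from the appliance CLI; a sketch, assuming the share property is named recordsize (the underlying name of the BUI’s ‘Database record size’ setting):

zfsa:> shares select default
zfsa:shares default> set recordsize=8K
zfsa:shares default> commit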

SNMP (Simple Network Management Protocol)

As I plan to use the ZFS Storage VM to allow my Oracle 12c database to use HCC (Hybrid Columnar Compression) I will configure SNMP and Enable the service.

This can be found hidden away under ‘Configuration’ -> ‘Services’.

The System Services screen should now show the SNMP service as Online.

In Part II I will share how our newly created NFS filesystem can be mounted on a Linux client via Oracle dNFS and used by an Oracle 12c database allowing HCC data management.


Oracle 12c Multi-database refresh with Ansible

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management tasks using simple scripting, DevOps automation tools and Pure Storage REST APIs.

Oracle 12c multiple database clones with Ansible

Before we jump into the demonstration, let me detail my architecture and explain what you are going to see.

The Architecture

  • z-oracle will be used as my Ansible control machine
  • z-oracle1 is where my production Oracle 12c database is hosted
  • z-oracle2 through to z-oracle7 are my development / test database servers
  • FlashArray will be used to deliver the storage data services
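For reference, my Ansible inventory groups these hosts along the following lines; the group names are my own convention rather than anything required by the playbook:

[control]
z-oracle

[production]
z-oracle1

[development]
z-oracle[2:7]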

The Ansible Playbook

The database_clone Ansible playbook performs the following steps:

  1. Perform a crash-consistent storage snapshot of the production database using the REST API.
    • The storage snapshot is instant, has no impact on the running production database, and requires no changes to the production server or database configuration.

  2. Shut down the non-production database(s) and unmount the database filesystems.
  3. Refresh the non-production volume(s) from the latest storage snapshot.
  4. Mount the database filesystems and start the database(s).
    • At this point the non-production database(s) are exact copies of the production database, with the same name as production but consuming no space.

  5. Rename the non-production database, datafiles and directories.
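Running the refresh is then a single command; a minimal sketch, with illustrative playbook and inventory file names (yours may differ):

ansible-playbook -i inventory database_clone.yml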

If you have watched the video you will have seen that the 6 non-production databases have all been refreshed from production in less than 2 1/2 minutes from a single Ansible playbook.

Visit code.purestorage.com to see more Ansible examples and also find examples for other DevOps tools and languages including Python, PowerShell, Puppet…

Part 3: Oracle 12c Point-in-Time-Recovery

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and Pure Storage REST APIs.

Oracle 12c Point-in-Time-Recovery

This video starts the same way as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts
  • Housekeeping routines, e.g. clean-up of workflow / staging tables

The video shows me taking a crash-consistent snapshot with a Python script I have developed (available at code.purestorage.com) that uses the Pure Storage REST APIs.
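The invocation looks something like the following; the script name and flags here are illustrative rather than the exact ones in my repository:

python pure_snap.py --array pure01 --volume oradata-vol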

The video then proceeds to perform further updates and changes, before simulating the loss of a datafile.

I then use another Python script to determine the last snapshot before shutting the database down and rolling back to the recovery point. Once the datafiles have been restored, RMAN is used to roll the database forward using the database log files, performing a PITR (Point-in-Time Recovery) with no data loss.
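The roll-forward itself is a standard complete recovery; in outline, and assuming RMAN is connected to the target database:

RMAN> STARTUP MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;

Because this is a complete recovery using the existing log files, no RESETLOGS is required and no transactions are lost.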