Oracle Database Automation with Ansible AWX

In the video below I demonstrate how a simple Ansible playbook can be used to automate the cloning of multiple Oracle databases in parallel.

In the demo I use the AWX Web UI to run the playbook. The AWX project is an Open Source community project sponsored by Red Hat and can be considered the ‘bleeding edge’ of Red Hat Ansible Tower.

AWX is designed to be fast moving and is where development takes place before being hardened and making its way into the enterprise-ready, Red Hat-supported Ansible Tower solution.

AWX is great for lab work, but for Production workloads I would strongly recommend considering Ansible Tower.

Pure Code Developer Community

Visit the Pure Code Developer Community and click on the ‘Ansible Playbook Examples’ card to be directed to the public GitHub repository, where you will find this and other Oracle playbook examples.


Getting started with Ansible and Windows

Background

I spend most of my time working with Linux but occasionally I come across Oracle on Windows, so I thought it’s about time that I tried my hand at using Ansible on Windows.

Ansible on Windows

You cannot currently run Ansible on Windows, but you can manage Windows servers from a Linux or Mac Ansible control machine.

Ansible uses SSH to manage Linux servers; on Windows, Ansible requires the WinRM (Windows Remote Management) service. The Ansible docs detail the prerequisites, which include PowerShell v3, .NET 4.0 and WinRM; the steps to set up WinRM can be found here.

If you read my previous Blog post Getting started with Ansible and Oracle you will have seen me use ‘ping’ to check connectivity to a Linux server; to do this in a Windows environment we use the ‘win_ping’ module via the -m option.

win-ping
You can see that the Ansible ‘ping’ and ‘win_ping’ both return the familiar ‘pong’ message, indicating good connectivity.
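
If it helps to see the moving parts, a minimal sketch of a WinRM inventory and the connectivity test might look like the following; the hostname and credentials are placeholders rather than the ones from my lab, and the control machine also needs the pywinrm Python package installed.

# hypothetical inventory describing a Windows host and its WinRM connection settings
cat > hosts <<'EOF'
[windows]
winserver1.example.com

[windows:vars]
ansible_user=Administrator
ansible_password=ChangeMe123
ansible_connection=winrm
ansible_port=5986
ansible_winrm_server_cert_validation=ignore
EOF

# test connectivity using the win_ping module
ansible windows -i hosts -m win_ping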

Ok, that’s pretty cool, now let’s try running the Windows ‘whoami.exe’ using the ‘win_command’ module.

win-whoami
And a quick ‘Hello World’ test via PowerShell and the ‘win_shell’ module.

HelloWorld
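
For reference, the two ad-hoc commands behind the screenshots above might look something like this, reusing the hypothetical ‘windows’ inventory group from earlier:

# run the Windows whoami.exe via the win_command module
ansible windows -i hosts -m win_command -a "whoami.exe"

# run a PowerShell 'Hello World' via the win_shell module
ansible windows -i hosts -m win_shell -a "Write-Output 'Hello World'"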

Next steps: create some Windows playbooks and test them, but that’s for another Blog post.

Ansible 2.6 Now available

At the beginning of July, Ansible 2.6 was released. This release deprecates a few commands, fixes some issues and extends Cloud support for Amazon, Azure & Google.

Ansible 2.6 also includes a few new and updated Pure Storage FlashArray modules and, for the first time, FlashBlade modules:

Pure Storage FlashArray

purefa_host – Enable mixed protocol hosts
purefa_hg – Modify existing hostgroups
purefa_ds – Manage Directory Service configuration
purefa_facts – Gather Facts information
purefa_pgsnap – Manage Protection Group snapshots

Pure Storage FlashBlade

purefb_fs – Manage filesystems
purefb_snap – Manage filesystem Snapshots

Ansible2.6.png
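
As a quick, hypothetical example of driving one of the new modules from the command line (the array address and API token below are placeholders, not real values):

# gather facts from a FlashArray with the purefa_facts module;
# the purestorage Python SDK must be installed where the module runs (pip install purestorage)
ansible localhost -m purefa_facts -a "fa_url=10.0.0.10 api_token=00000000-0000-0000-0000-000000000000"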

Pure Code Developer Community

Visit the Pure Code Developer Community and click on the ‘Ansible Playbook Examples’ card to be directed to the public GitHub repository for example playbooks.

Oracle EBS WebLogic 10.3 – BSU patching “GC overhead limit exceeded” error

When using the BSU (Smart Update) utility to patch your Oracle EBS WebLogic 10.3 you may experience “java.lang.OutOfMemoryError: GC overhead limit exceeded” errors.

[applvis@z-oracle1 bsu]$ ./bsu.sh -install -patch_download_dir=/u06/oracle/VIS/fs1/FMW_Home/utils/bsu/cache_dir -patchlist=EQDE -prod_dir=/u06/oracle/VIS/fs1/FMW_Home/wlserver_10.3

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.HashMap.inflateTable(HashMap.java:316)
        at java.util.HashMap.put(HashMap.java:488)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.loadPropertyMap(XBeanDataHandler.java:778)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.<init>(XBeanDataHandler.java:99)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.createDataHandler(XBeanDataHandler.java:559)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.getComplexValue(XBeanDataHandler.java:455)
        at com.bea.plateng.patch.dao.cat.PatchCatalogHelper.getPatchDependencies(PatchCatalogHelper.java:442)
        at com.bea.plateng.patch.dao.cat.PatchCatalogHelper.getPatchDependencies(PatchCatalogHelper.java:464)
        at com.bea.plateng.patch.dao.cat.PatchCatalog.getPatchDependencies(PatchCatalog.java:56)
        at com.bea.plateng.patch.dao.cat.PatchCatalogHelper.getInvalidatedPatchMap(PatchCatalogHelper.java:1621)
        at com.bea.plateng.patch.PatchSystem.updatePatchCatalog(PatchSystem.java:436)
        at com.bea.plateng.patch.PatchSystem.refresh(PatchSystem.java:130)
        at com.bea.plateng.patch.PatchSystem.setCacheDir(PatchSystem.java:201)
        at com.bea.plateng.patch.Patch.main(Patch.java:281)

Issue

The issue is caused by the bsu.sh script not allocating adequate memory to the Java process.

Resolution

To fix this, back up the current bsu.sh script and amend the initial Java memory allocation (Xms) and maximum heap size (Xmx) values to address the memory issue, e.g. change

MEM_ARGS="-Xms512m -Xmx1024m -XX:+UseParallelGC"

to

MEM_ARGS="-Xms2048m -Xmx2048m -XX:+UseParallelGC"
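
A quick, hedged way to script that change (the path matches the environment shown above; adjust to suit yours):

# back up the original Smart Update launcher before editing it
cd /u06/oracle/VIS/fs1/FMW_Home/utils/bsu
cp bsu.sh bsu.sh.orig

# raise the initial and maximum JVM heap sizes used by bsu.sh
sed -i 's/-Xms512m -Xmx1024m/-Xms2048m -Xmx2048m/' bsu.sh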

Results

[applvis@z-oracle1 bsu]$ ./bsu.sh -install -patch_download_dir=/u06/oracle/VIS/fs1/FMW_Home/utils/bsu/cache_dir -patchlist=EQDE -prod_dir=/u06/oracle/VIS/fs1/FMW_Home/wlserver_10.3
Checking for conflicts…….
No conflict(s) detected

Installing Patch ID: EQDE.
Result: Success

Creating Oracle HCC (Hybrid Columnar Compression) on your Laptop

In this Blog we will take our newly created Oracle ZFS VMware Simulator out for a test ride by trying to create some Oracle HCC (Hybrid Columnar Compression) data.

If you don’t have access to an Oracle storage platform, you may want to visit my previous posts in the series, Part I or Part II.

Two types of Oracle HCC Compression

Warehouse (Query) compression

For warehouse compression, the compression algorithm has been optimized for query performance, specifically for scan-oriented queries used heavily in data warehouses. This approach is ideal for tables that will be queried frequently.

Archive compression

With archive compression, the compression algorithm has been optimized for maximum storage savings. This approach is ideal for tables that are infrequently accessed.

Note that for compressing or decompressing data, archive compression may consume a significant amount of CPU compared to warehouse compression.

Achievable compression ratios vary depending upon the compression type and customer data; however, Oracle reports seeing the following ratios:

  • Query High ~10:1
  • Query Low ~6:1
  • Archive Low ~10:1
  • Archive High ~15:1

Let’s see how we get on.

The Test

For this test I am going to use the Dominic Giles SwingBench ‘customers’ table from the Sales Order Entry (SOE) schema as the source of my test data. The SwingBench wizard can be accessed from the Java application or the command line; as you can see below, I will be using the CLI.

~/swingbench/bin/oewizard -allindexes -cl -create -cs //localhost/PSTG -dba system -dbap oracle -ts soe -nopart -p soe -scale 4 -tc 8 -u soe -v

Create Test Tables

CONNECT pure/pure;

Prompt No Compression
create table customers as select * from soe.customers;
commit;

Prompt Basic Method
create table customers_classic compress as select * from customers where 1=2;
alter table customers_classic nologging;
insert /*+ append */ into customers_classic select * from customers;
commit;

Prompt OLTP Method
create table customers_oltp compress for oltp as select * from customers where 1=2;
insert into customers_oltp select * from customers;
commit;

Prompt Query Low
create table customers_query_low compress for query low as select * from customers where 1=2;
alter table customers_query_low nologging;
insert /*+ append */ into customers_query_low select * from customers;
commit;

Prompt Query High
create table customers_query_high compress for query high as select * from customers where 1=2;
alter table customers_query_high nologging;
insert /*+ append */ into customers_query_high select * from customers;
commit;

Prompt Archive Low
create table customers_archive_low compress for archive low as select * from customers where 1=2;
alter table customers_archive_low nologging;
insert /*+ append */ into customers_archive_low select * from customers;
commit;

Prompt Archive High
create table customers_archive_high compress for archive high as select * from customers where 1=2;
alter table customers_archive_high nologging;
insert /*+ append */ into customers_archive_high select * from customers;
commit;

If you experience the infamous ORA-64307 error message and you are using the Oracle ZFS Simulator, revisit my Part I & Part II Blogs and check your configuration.

ORA-64307

Results

Ok, let’s see what compression ratios we achieved.

set feed off
set pages 100
col TABLE_NAME heading 'Table Name' format a25
col OWNER heading 'Owner' format a20
col COMPRESS_FOR heading 'Compress For' format a20
col COMPRESSION heading 'Compression' format a20
set lines 100
set echo off
set timing on
Prompt Tables using compression

SELECT DT.TABLE_NAME, DT.COMPRESSION, DT.COMPRESS_FOR, sum(DS.BYTES)/1024/1024 MB
FROM DBA_SEGMENTS DS, DBA_TABLES DT
WHERE DS.OWNER = 'PURE'
AND DT.OWNER = DS.OWNER
AND DT.TABLE_NAME = DS.SEGMENT_NAME
GROUP BY DT.TABLE_NAME, DT.COMPRESSION, DT.COMPRESS_FOR;

HCC_compression

The achieved ratios are maybe not as high as expected, but the table was pretty small at 512MB, so my next project will be to repeat the tests with a larger data set and compare results.

However, the compression ratios are secondary; what this test did demonstrate is the ability to experiment and test out HCC data without access to an Oracle Engineered solution.

Table Name                Achieved Ratio
CUSTOMERS                 1 : 1
CUSTOMERS_CLASSIC         1.31 : 1
CUSTOMERS_OLTP            1.10 : 1
CUSTOMERS_QUERY_LOW       2.29 : 1
CUSTOMERS_QUERY_HIGH      3.37 : 1
CUSTOMERS_ARCHIVE_LOW     4.00 : 1
CUSTOMERS_ARCHIVE_HIGH    5.33 : 1
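
One way to derive ratios like those above is to compare each table’s segment size against the uncompressed CUSTOMERS baseline; a query along these lines (a sketch that assumes the table names created earlier) will produce similar figures:

SELECT DS.SEGMENT_NAME,
       ROUND(MAX(BASE.MB) / (SUM(DS.BYTES)/1024/1024), 2) || ' : 1' ACHIEVED_RATIO
FROM   DBA_SEGMENTS DS,
       (SELECT SUM(BYTES)/1024/1024 MB
        FROM   DBA_SEGMENTS
        WHERE  OWNER = 'PURE'
        AND    SEGMENT_NAME = 'CUSTOMERS') BASE
WHERE  DS.OWNER = 'PURE'
AND    DS.SEGMENT_NAME LIKE 'CUSTOMERS%'
GROUP BY DS.SEGMENT_NAME
ORDER BY DS.SEGMENT_NAME;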

Getting started with the Oracle ZFS Storage VMware Simulator (Part II)

In this Blog post I will share how your newly created Oracle ZFS Storage VMware Simulator can be used to deliver storage for an Oracle 12c database via dNFS and also support HCC (Hybrid Columnar Compression).

Hopefully you have followed Part I of my Blog post on Getting Started with the Oracle ZFS Storage VMware Simulator and already have a working Virtual Machine; if not, now is probably a good time to visit Part I and build the simulator.

Oracle dNFS

If you have not set up or used Oracle dNFS before, you may want to check out a couple of my previous posts, Using Oracle dNFS Multi-Path and Oracle dNFS Throughput Testing, as these provide some useful background and additional information.

On my Oracle 12c Linux database server I have created a directory for the mount point and an oranfstab file in $ORACLE_HOME/dbs thus:

server: zfsa
local: 192.168.56.10  path: 172.16.148.133
nfs_version: nfsv3
export: /export/oradata   mount: /u02/app/oracle/oradata

$ORACLE_HOME/dbs/oranfstab
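
Note that Direct NFS still expects the export to be visible through a regular kernel NFS mount; a minimal sketch of that mount (the server address and mount options are illustrative and should be tuned for your own environment):

# create the mount point and mount the export with the kernel NFS client first
sudo mkdir -p /u02/app/oracle/oradata
sudo mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3 \
     172.16.148.133:/export/oradata /u02/app/oracle/oradata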

The parameters used in my oranfstab are described below:

server
The NFS server name, this can be anything and is used for information rather than any network lookups.

local
Up to four paths on the database host, specified by IP address or by name.

path
Up to four network paths to the NFS server, specified either by IP address, or by name.

export
The exported path from the NFS server.

mount
The corresponding local mount point for the exported volume.

nfs_version
Specifies the NFS protocol version used by Direct NFS Client. Possible values are NFSv3, NFSv4, NFSv4.1, and pNFS. The default version is NFSv3. If you select NFSv4.x, then you must configure the nfs_version value in oranfstab. Specify nfs_version as pNFS if you want to use Direct NFS with Parallel NFS.

Verify Oracle dNFS Usage

Restart your Oracle 12c database and view the Oracle alert log; if you have successfully configured your environment you should see Oracle reporting that the Oracle dNFS library is in use, along with its version.

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

If you are not seeing any reference to the use of Oracle Direct NFS you may need to enable it using the steps detailed here.
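
On Linux the usual approach is to relink the Oracle binary with the dNFS ODM library enabled; a short sketch, to be run as the Oracle software owner with the database shut down:

# enable the Direct NFS ODM library by relinking the oracle binary
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on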

We should also be able to see that we are using a single channel, the IP addresses, and the optional server name from our oranfstab.

Direct NFS: channel id [0] path [172.16.148.133] to filer [zfsa] via local [192.168.56.10] is UP

From the alert log file we can also confirm that Oracle is using our oranfstab file rather than /etc/mtab; if your oranfstab is not correct you will see ‘defined in mtab’ rather than ‘defined in oranfstab’, in which case it’s time to go back and fix it.

Direct NFS: attempting to mount /export/oradata on filer zfsa defined in oranfstab

Additionally, you can query the database using the v$dnfs_servers view to confirm the configuration.

SQL> COLUMN svrname FORMAT A20
SQL> COLUMN dirname FORMAT A50
SQL> SELECT svrname, dirname, nfsversion FROM v$dnfs_servers;

SVRNAME    DIRNAME            NFSVERSION
---------- ------------------ ----------
zfsa       /export/oradata    NFSv3.0
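
If you configure more than one path, the v$dnfs_channels view gives a per-channel breakdown; a quick sketch of the kind of query you might use:

SQL> SELECT svrname, path, sends, recvs, pings FROM v$dnfs_channels;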

SNMP (Simple Network Management Protocol)

Finally, before we can start to use Oracle HCC (Hybrid Columnar Compression) within our Oracle 12c database we need to check that our simulator has been configured to use SNMP.

From our Oracle ZFS Storage VMware simulator check the status of services.

SNMP3

And if it is green, use snmpget to check the response from our simulator, providing the IP address and the Oracle MIB (Management Information Base) OID shown below.

snmpget -v1 -c public 172.16.148.133 1.3.6.1.4.1.42.2.225.1.4.2.0

If all is OK, snmpget should return a Sun Storage string; this is required to allow us to use the HCC features, which are limited to Oracle Exadata and Oracle storage platforms.

SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: “Sun Storage 7000”

If you have successfully built your simulator and configured your Oracle 12c environment, you are now ready to try out some of the HCC features.

In the next Blog post we will attempt to create database objects within and outside of our Oracle ZFS Storage VMware simulator to check our Oracle 12c database is able to use HCC data.

Getting started with the Oracle ZFS Storage VMware Simulator (Part I)

If you’re like me, you may sometimes need to do some research work with Oracle HCC (Hybrid Columnar Compression) but don’t always have access to an Oracle Exadata machine or ZFS Appliance; in that case the ZFSA simulator may be your saviour.

On my Mac I use both VirtualBox and VMware Fusion, but for this Blog post I am going to be using VMware Fusion 8.5.10.

Downloading the OVA

You can download the Oracle Storage Appliance Simulator software by following the link here.

ZFS Storage Appliance Home

Once you have accepted the licence agreement you will have the option to download either the VirtualBox or VMware version.

downloads

Once the download is complete you should have an OS8.7VMware.ova file available to use.

Installing the Virtual Machine

From VMware Fusion navigate to File -> Import and select your OVA file using the ‘Choose File…’ finder.

OVA

Give the VM a name and start the import.

Importing

Once imported you will be presented with a summary screen; if you have spare memory you could consider increasing the allocated memory to 4GB by clicking on ‘Customise Settings’, otherwise click ‘Finish’.

Finish

Boot

Watch the console whilst the VM boots and, once it is up, provide a Host Name, DNS Domain name, IP Address, IP Netmask, Default Router and DNS Server, as well as a password for your ‘root’ super-user.


zfsa_browser

Once the initial configuration is complete you can use your browser to configure your ZFS Storage Appliance simulator using the link in the console and the password you just provided.

login

Welcome

If all has gone well you should be presented with the Oracle ZFS Storage Virtual Machine ‘Welcome’ page; click ‘START’ to begin the configuration.

Welcome.png

Step 1

Hit ‘COMMIT’ to use the default network configuration.

Step1

Step 2

Click ‘COMMIT’ to use the DNS values provided during the initial boot.

Step2

Step 3

Again, hit ‘COMMIT’ to use the default NTP settings.

Step3

Step 4

And finally, hit ‘COMMIT’ to take the default Name Services.

Step4

Step 5

We now set up our Storage Pool by clicking on the ‘+’.

Step5

PoolName

allocate1

Select the most appropriate Storage Profile for your use. As I will only be using this on my MacBook Pro I will select ‘Striped’; data protection is not a concern, but I want maximum Performance and Capacity.

storageProfile

The ZFS Appliance will now present a Storage Profile summary screen.

Step5_1

Registration & Support

Enter your MOS details or click ‘LATER’ if you prefer, and confirm.

Support

We can now configure any required Services, LUNs and/or filesystems.

Summary

NFS Filesystem

FileSystems

Click on the ‘+’ to create a new Filesystem (NFS Share), providing a Name, User, Group and Permissions, e.g.

oradata

Shares2

As I am going to be using this share for my Oracle 12c database I need to configure the storage to use the same block size as my database.

Select the ‘General’ tab to gain access to the ‘default’ Project settings, and update the Database record size to 8K in the inherited properties section.

General

SNMP (Simple Network Management Protocol)

As I plan to use the ZFS Storage VM to allow my Oracle 12c database to use HCC (Hybrid Columnar Compression), I will configure SNMP and enable the service.

This can be found hidden away under ‘Configuration’ -> ‘Services’.

SysServices

SNMP1

The System Service should now show the SNMP service as Online.

SNMP3

In Part II I will share how our newly created NFS filesystem can be mounted on a Linux client via Oracle dNFS and used by an Oracle 12c database allowing HCC data management.


Oracle 12c Multi-database refresh with Ansible

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c multiple database clones with Ansible

Before we jump into the demonstration let me detail my architecture and explain what you are going to see; first, the architecture.

The Architecture

  • z-oracle will be used as my Ansible control machine.
  • z-oracle1 is where my production Oracle 12c database is hosted.
  • z-oracle2 through z-oracle7 are my development / test database servers.
  • FlashArray will be used to deliver the storage data services.

Ansible_Demo

The Ansible Playbook

The database_clone Ansible playbook performs the following steps:

  1. Performs a crash-consistent storage snapshot of the production database using the REST API.
     The storage snapshot is instant, has no impact on the running production database, and does not require any changes to the production server or database configuration.

  2. Shuts down the non-production database(s) and unmounts the database filesystems.
  3. Refreshes the non-production volume(s) from the latest storage snapshot.
  4. Mounts the database filesystems and starts the database(s).
     At this point the non-production database(s) are exact copies of the production database, with the same names as production but consuming no space.

  5. Renames the non-production database, datafiles and directories.
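
A hypothetical invocation of such a playbook from the control machine might look like this; the playbook, inventory and variable names below are assumptions rather than the actual files used in the demo:

# refresh the six development / test databases from production in parallel
ansible-playbook -i hosts database_clone.yml \
  --limit "z-oracle2,z-oracle3,z-oracle4,z-oracle5,z-oracle6,z-oracle7" \
  --extra-vars "source_host=z-oracle1 source_db=prod"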

If you have watched the video you will have seen that the 6 non-production databases have all been refreshed from production in less than 2 1/2 minutes from a single Ansible playbook.

Visit code.purestorage.com to see more Ansible examples and also find examples for other DevOps tools and languages including Python, PowerShell, Puppet…

Part 3: Oracle 12c Point-in-Time-Recovery

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c Point-in-Time-Recovery

This video starts in the same way as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts
  • Housekeeping routines, e.g. clean-up of workflow / staging tables

The video shows me taking a crash-consistent snapshot using a Python script I have developed (available at code.purestorage.com) that uses the Pure Storage REST APIs.

The video then proceeds to perform further updates and changes, before simulating the loss of a datafile.

I then use another Python script to determine the last snapshot before shutting the database down and rolling back to the recovery point. Once the datafiles have been restored, RMAN is used to roll the database forward using the database log files, performing a PITR (Point-in-Time Recovery) with no data loss.
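
For anyone curious, the roll-forward itself is ordinary media recovery; a minimal sketch of the final step once the datafiles have been restored from the snapshot (this assumes the online and archived redo logs live on separate volumes and were untouched by the restore):

rman target /

RMAN> startup mount;
RMAN> recover database;
RMAN> alter database open;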

Part 2: Oracle 12c database rewind

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c database rewind

This video starts in the same way as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts
  • Housekeeping routines, e.g. clean-up of workflow / staging tables

The video shows me taking a crash-consistent snapshot using a Python script I have developed (available at code.purestorage.com) that uses the Pure Storage REST APIs.

The video then proceeds to perform further updates and changes, before rewinding the database back to the last crash-consistent snapshot, enabling rapid, iterative testing.