On-line migration of Oracle 18c filesystem datafiles to Oracle ASM

Background

Prior to Oracle 12c, moving non-ASM data files required taking the data files offline and negotiating an outage with the business; since Oracle 12c this has no longer been the case.

I was recently challenged with moving a good-sized filesystem-based Oracle 18c database from an existing storage platform to a new storage array.

(Diagram: online data file move from filesystem to ASM)

Prepare ASM Disk Groups

Use SQL*Plus to create the ASM disk group directories, e.g.
$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm

Disk Group +DATA

SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/DATAFILE';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/CHANGETRACKING';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/TEMPFILE';

Disk Group +CONTROL_REDO

SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/CONTROLFILE';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/ONLINELOG';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/PARAMETERFILE';

Disk Group +FRA

SQL> ALTER DISKGROUP FRA ADD DIRECTORY '+FRA/PSTG';
SQL> ALTER DISKGROUP FRA ADD DIRECTORY '+FRA/PSTG/ARCHIVELOG';

Identify Datafiles

column dname heading 'Data File Name' format a45
column tablespace_name heading 'Tablespace Name' format a25
column data_mb heading 'Data MB' format a10

select file_name as dname, tablespace_name, to_char(bytes/1024/1024,'999,999') as data_mb
from dba_data_files;

Data File Name                                Tablespace Name           Data MB
--------------------------------------------- ------------------------- --------
/u02/oradata/PSTG/system01.dbf                SYSTEM                         880
/u02/oradata/PSTG/sysaux01.dbf                SYSAUX                         990
/u02/oradata/PSTG/sh.dbf                      SH                         512,000
/u02/oradata/PSTG/users01.dbf                 USERS                            5
/u02/oradata/PSTG/soe.dbf                     SOE                        512,000
/u02/oradata/PSTG/undotbs01.dbf               UNDOTBS1                       400

On-line Datafile move

I have previously blogged on using the Oracle 12c ALTER DATABASE MOVE DATAFILE command, and you can see the full syntax at the link above.
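For reference, the statement also accepts a file number in place of a file name and supports the optional REUSE (overwrite an existing target file) and KEEP (retain the original copy) clauses; if the TO clause is omitted and DB_CREATE_FILE_DEST is set, an OMF file is created in that destination. The example below is illustrative only; the file number and target name are hypothetical.

SQL> ALTER DATABASE MOVE DATAFILE 7 TO '+DATA/PSTG/DATAFILE/example01.dbf' REUSE KEEP;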

SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/system01.dbf' TO '+DATA/PSTG/DATAFILE/system01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/sysaux01.dbf' TO '+DATA/PSTG/DATAFILE/sysaux01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/sh.dbf' TO '+DATA/PSTG/DATAFILE/sh.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/soe.dbf' TO '+DATA/PSTG/DATAFILE/soe.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/users01.dbf' TO '+DATA/PSTG/DATAFILE/users01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/undotbs01.dbf' TO '+DATA/PSTG/DATAFILE/undotbs01.dbf';

Monitor Move Progress

Below is a query which uses the V$SESSION_LONGOPS view to report the progress of the online data file move operations.

column st heading 'Start Time' format a25
column time_remaining heading 'Time|Remaining'
column elapsed_seconds heading 'Elapsed|Seconds'
column sofar heading 'Sofar' format a10
column total heading 'Total' format a10
column progress heading 'Progress' format a10
column message heading 'Message' format a100

select to_char(start_time,'DD-MON-YY hh24:mi:ss') as st,
       time_remaining,
       elapsed_seconds,
       to_char(sofar/1024/1024,'999,999') || 'MB' as sofar,
       to_char(totalwork/1024/1024,'999,999') || 'MB' as total,
       to_char((sofar/totalwork)*100,'999') || '%' as progress,
       message
from   v$session_longops;
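If other long-running operations are active in the database, the output can optionally be restricted to the move operations by filtering on the message text, e.g.:

select to_char(start_time,'DD-MON-YY hh24:mi:ss') as st, time_remaining, elapsed_seconds, message
from   v$session_longops
where  message like 'Online data file move%';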

Example Output

09-AUG-18 14:58:33 0 7 880MB 880MB 100%
Online data file move: data file 1: 922746880 out of 922746880 bytes done

09-AUG-18 15:01:07 0 7 990MB 990MB 100%
Online data file move: data file 3: 1038090240 out of 1038090240 bytes done

09-AUG-18 15:06:34 0 2767 512,000MB 512,000MB 100%
Online data file move: data file 5: 536870912000 out of 536870912000 bytes done

09-AUG-18 15:57:07 2757 38 6,962MB 512,000MB 1%
Online data file move: data file 2: 7300186112 out of 536870912000 bytes done

Updated Data file locations

We can check the files have been relocated using the same query we ran earlier to identify the datafiles.

select file_name as dname, tablespace_name, to_char(bytes/1024/1024,'999,999') as data_mb from dba_data_files;

Data File Name                                Tablespace Name           Data MB
--------------------------------------------- ------------------------- --------
+DATA/PSTG/DATAFILE/system01.dbf              SYSTEM                         880
+DATA/PSTG/DATAFILE/sysaux01.dbf              SYSAUX                       1,040
+DATA/PSTG/DATAFILE/sh.dbf                    SH                         512,000
+DATA/PSTG/DATAFILE/users01.dbf               USERS                            5
+DATA/PSTG/DATAFILE/soe.dbf                   SOE                        512,000
+DATA/PSTG/DATAFILE/undotbs01.dbf             UNDOTBS1                       400


Oracle 18c disable / enable Archive Log Mode

Disable Archive Log Mode

[oracle@z-fbhost2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 – Production on Wed Aug 8 10:17:47 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to an idle instance.

# Start the instance and mount the database
SQL> startup mount
ORACLE instance started.

Total System Global Area 1.4227E+11 bytes
Fixed Size 29900024 bytes
Variable Size 2.2549E+10 bytes
Database Buffers 1.1919E+11 bytes
Redo Buffers 506970112 bytes
Database mounted.

# Check archive log status
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11646
Next log sequence to archive 11646
Current log sequence 11647

# Disable archive log mode
SQL> alter database noarchivelog;

Database altered.

# Open Database
SQL> alter database open;

Database altered.

# Confirm archive log status
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11647
Current log sequence 11648

SQL> show parameter log_archive_start

NAME TYPE VALUE
———————————— ———– ——————————
log_archive_start boolean FALSE

Enable Archive Log Mode

Shut down the database using 'shutdown immediate' rather than 'shutdown abort' to avoid "ORA-00265: instance recovery required, cannot set ARCHIVELOG mode".
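For example, the clean shutdown looks like this from an existing SQL*Plus session (standard output shown for illustration):

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.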

[oracle@z-fbhost2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 – Production on Wed Aug 8 10:31:01 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to an idle instance.

# Startup in mount state
SQL> startup mount
ORACLE instance started.

Total System Global Area 1.4227E+11 bytes
Fixed Size 29900024 bytes
Variable Size 2.2549E+10 bytes
Database Buffers 1.1919E+11 bytes
Redo Buffers 506970112 bytes
Database mounted.

# Check archive log status
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11648
Current log sequence 11649

# Enable archive log
SQL> alter database archivelog;

Database altered.

# Open database
SQL> alter database open;

Database altered.

# Check archive log status
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11648
Next log sequence to archive 11649
Current log sequence 11649
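As an optional extra check (not shown in the original post), you can force a log switch and confirm the previous online log has been archived:

SQL> alter system switch logfile;

System altered.

SQL> select sequence#, archived, status from v$log order by sequence#;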

Oracle 18c install DBCA failure – PRCR-1006 PRCR-1071 CRS-2566

Background

During my first Oracle 18c install I hit an installation issue with Oracle DBCA (Database Configuration Assistant) failing.

After looking at the installation log file "/u01/app/oraInventory/logs/InstallActions2018-07-26_12-18-47PM/installActions2018-07-26_12-18-47PM.log"

I could see the following:

PRCR-1006 : Failed to add resource ora.cdb1.db for cdb1
PRCR-1071 : Failed to register or update resource ora.cdb1.db
CRS-2566: User 'oracle' does not have sufficient permissions to operate on resource 'ora.driver.afd', which is part of the dependency specification.

Issue

The 'oracle' user's primary group (dba) does not have privileges on the ora.driver.afd resource.

[oracle@z-oracle1 ~]$ id grid
uid=54331(grid) gid=1001(dba) groups=1001(dba),54321(oinstall),54333(asmdba),54334(asmoper),54335(asmadmin),54336(racdba)

Fix

Change the AFD driver resource ACL group from the ASM administrator group to the database group, e.g. from 'asmadmin' to 'dba'.

[grid@z-oracle1 ~]$ crsctl stat res ora.driver.afd -p

ACL=owner:grid:rwx,pgrp:asmadmin:r-x,other::r--,user:grid:r-x

[grid@z-oracle1 ~]$ crsctl modify resource ora.driver.afd -attr "ACL='owner:grid:rwx,pgrp:dba:r-x,other::r--,user:grid:r-x'" -init

[grid@z-oracle1 ~]$ crsctl stat res ora.driver.afd -p

ACL=owner:grid:rwx,pgrp:dba:r-x,other::r--,user:grid:r-x ...

Once corrected I was able to resume the installation without experiencing any further issues.

Hope this helps.

Oracle 18c Grid Infrastructure installation

Preparation

[root@z-oracle1 ~]# cd /u01/app/18.0.0/grid/cv/rpm
[root@z-oracle1 rpm]# ls
cvuqdisk-1.0.10-1.rpm

[root@z-oracle1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing… ################################# [100%]
Updating / installing…
1:cvuqdisk-1.0.10-1 ################################# [100%]

18c Grid Infrastructure installation

Confirm ‘grid’ owner groups.

[grid@z-oracle1 grid]$ id grid

uid=54331(grid) gid=1001(dba) groups=1001(dba),54321(oinstall),54333(asmdba),54334(asmoper),54335(asmadmin),54336(racdba)

Create 18c Grid Infrastructure Oracle Base, Home and admin directories e.g.

[grid@z-oracle1 ~]$ mkdir -p /u01/app/18.0.0/grid
[grid@z-oracle1 ~]$ mkdir /u01/app/grid
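In the next step I reference $GRID_HOME; this is assumed to point at the Grid home created above (adjust the paths if your layout differs), e.g.:

[grid@z-oracle1 ~]$ export ORACLE_BASE=/u01/app/grid
[grid@z-oracle1 ~]$ export GRID_HOME=/u01/app/18.0.0/grid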

Launch the installer from a terminal capable of starting an X session.

[grid@z-oracle1 grid]$ cd $GRID_HOME
[grid@z-oracle1 grid]$ ./gridSetup.sh

Select a clustered, non-clustered or software-only installation.

(Screenshot: Step 1)

Update the ASM disk discovery path (if required), and select disk path for your +DATA disk group.

Before I started the 18c installer I created 3 Pure Storage volumes and configured UDEV rules to help with identification. I can now select my 1TB +DATA volume and set the Redundancy to 'External', as the storage platform will provide the required data protection.
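A minimal sketch of the kind of UDEV rule used is shown below; the serial number is a placeholder for the volume's actual WWID and the symlink name is hypothetical:

# /etc/udev/rules.d/99-oracle-asm.rules (illustrative only)
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="3624a9370xxxxxxxxxxxxxxxxxxxxxxxx", SYMLINK+="oracleasm/data01", OWNER="grid", GROUP="asmadmin", MODE="0660"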

(Screenshot: Step 2)

Provide ASM passwords and click ‘Next’

(Screenshot: Step 3)

Register with Oracle Enterprise Manager; I will skip this for now and click 'Next'.

(Screenshot: Step 4)

Confirm Oracle ASM ‘grid’ account groups, click ‘Next’

(Screenshot: Step 5)

Confirm pre-created Oracle Base location.

(Screenshot: Step 6)

Select ‘Next’ and run root.sh in a separate terminal window when prompted

(Screenshot: Step 7)

Review 18c Grid Infrastructure summary and click ‘Install’ to confirm installation

(Screenshot: Step 9)

Run root.sh from root window.

(Screenshot: root.sh output)

Wait for the script to start the 18c Grid Infrastructure services, once complete click ‘OK’.

(Screenshot: Step 11)

If all has gone well, you should be presented with a ‘successful’ installation screen.

Now, let's take a first look at an 18c ASM instance; if you are observant you will see I have created two additional ASM disk groups, +FRA and +CONTROL_REDO.

I will check my previous Blogs and, if missing, will share how you can create ASM disk groups using ASMCMD or ASMCA (UI).
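For reference, a disk group can also be created directly from a SQL*Plus session connected to the ASM instance; the sketch below is illustrative only, assuming external redundancy and a placeholder device path:

SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
       DISK '/dev/oracleasm/fra01'
       ATTRIBUTE 'compatible.asm' = '18.0', 'compatible.rdbms' = '18.0';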

(Screenshot: 18c ASM instance)

Come back in a few days to see my first 18c database Blog post.

Creating Oracle HCC (Hybrid Columnar Compression) on your Laptop

In this Blog we will take our newly created Oracle ZFS VMware Simulator for a test drive by trying to create some Oracle HCC (Hybrid Columnar Compression) data.

If you don’t have access to an Oracle storage platform, you may want to visit my previous posts in the series Part I or Part II

Two types of Oracle HCC Compression

Warehouse (Query) compression

For warehouse compression, the compression algorithm has been optimized for query performance, specifically for scan-oriented queries used heavily in data warehouses. This approach is ideal for tables that will be queried frequently.

Archive compression

With archive compression, the compression algorithm has been optimized for maximum storage savings. This approach is ideal for tables that are infrequently accessed.

Note that for compressing or decompressing data, archive compression may consume a significant amount of CPU compared to warehouse compression.

Achievable compression ratios vary depending upon the compression type and customer data, however Oracle reports seeing the following ratios:

  • Query High ~10:1
  • Query Low ~6:1
  • Archive Low ~10:1
  • Archive High ~15:1

Let’s see how we get on.

The Test

For this test I am going to use Dominic Giles' SwingBench 'customer' table from the Sales Order Entry (SOE) schema as the source of my test data. The SwingBench wizard can be accessed from the Java application or the command line; as you can see below, I will be using the CLI interface.

~/swingbench/bin/oewizard -allindexes -cl -create -cs //localhost/PSTG -dba system -dbap oracle -ts soe -nopart -p soe -scale 4 -tc 8 -u soe -v

Create Test Tables

CONNECT pure/pure;

Prompt No Compression
create table customers as select * from soe.customers;
commit;

Prompt Basic Method
create table customers_classic compress as select * from customers where 1=2;
alter table customers_classic nologging;
insert /*+ append */ into customers_classic select * from customers;
commit;

Prompt OLTP Method
create table customers_oltp compress for oltp as select * from customers where 1=2;
insert into customers_oltp select * from customers;
commit;

Prompt Query Low
create table customers_query_low compress for query low as select * from customers where 1=2;
alter table customers_query_low nologging;
insert /*+ append */ into customers_query_low select * from customers;
commit;

Prompt Query High
create table customers_query_high compress for query high as select * from customers where 1=2;
alter table customers_query_high nologging;
insert /*+ append */ into customers_query_high select * from customers;
commit;

Prompt Archive Low
create table customers_archive_low compress for archive low as select * from customers where 1=2;
alter table customers_archive_low nologging;
insert /*+ append */ into customers_archive_low select * from customers;
commit;

Prompt Archive High
create table customers_archive_high compress for archive high as select * from customers where 1=2;
alter table customers_archive_high nologging;
insert /*+ append */ into customers_archive_high select * from customers;
commit;
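As an optional sanity check (not part of the original script), DBMS_COMPRESSION can report the compression type actually applied to a sample row; the number returned maps to the DBMS_COMPRESSION COMP_* constants:

select dbms_compression.get_compression_type('PURE', 'CUSTOMERS_QUERY_HIGH', rowid) as comp_type
from   customers_query_high
where  rownum = 1;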

If you experience the infamous ORA-64307 error message and you are using the Oracle ZFS Simulator, revisit my Part I & Part II Blogs and check your configuration.

(Screenshot: ORA-64307 error)

Results

Ok, let’s see what compression ratios we achieved.

set feed off
set pages 100
col SEGMENT_NAME heading 'Segment Name' format a25
col OWNER heading 'Owner' format a20
col COMPRESS_FOR heading 'Compress For' format a20
col COMPRESSION heading 'Compression' format a20
set lines 100
set echo off
set timing on
Prompt Tables using compression

SELECT SEGMENT_NAME, COMPRESSION, COMPRESS_FOR, SUM(BYTES)/1024/1024 MB
FROM DBA_SEGMENTS DS, DBA_TABLES DA
WHERE DA.OWNER = 'PURE'
AND DS.OWNER = DA.OWNER
AND DS.SEGMENT_NAME = DA.TABLE_NAME
GROUP BY SEGMENT_NAME, COMPRESSION, COMPRESS_FOR;

(Screenshot: HCC compression query output)

The achieved ratios are maybe not as high as expected, but the source table was pretty small at 512MB, so my next project will be to repeat the tests with a larger data set and compare results.

However, the compression ratios are secondary; what this test did demonstrate is the ability to experiment and test out HCC without access to an Oracle Engineered solution.

Table Name                  Achieved Ratio
--------------------------- --------------
CUSTOMERS                         1 : 1
CUSTOMERS_CLASSIC              1.31 : 1
CUSTOMERS_OLTP                 1.10 : 1
CUSTOMERS_QUERY_LOW            2.29 : 1
CUSTOMERS_QUERY_HIGH           3.37 : 1
CUSTOMERS_ARCHIVE_LOW          4.00 : 1
CUSTOMERS_ARCHIVE_HIGH         5.33 : 1
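For reference, the ratios above can be derived by comparing each segment's size with the uncompressed CUSTOMERS copy; a minimal sketch of such a query (assuming the PURE schema used earlier):

select s.segment_name,
       round(base.mb / (sum(s.bytes)/1024/1024), 2) || ' : 1' as achieved_ratio
from   dba_segments s,
       (select sum(bytes)/1024/1024 mb
        from   dba_segments
        where  owner = 'PURE'
        and    segment_name = 'CUSTOMERS') base
where  s.owner = 'PURE'
and    s.segment_name like 'CUSTOMERS%'
group  by s.segment_name, base.mb;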

Getting started with the Oracle ZFS Storage VMware Simulator (Part II)

In this Blog post I will share how your newly created Oracle ZFS Storage VMware Simulator can be used to deliver storage for an Oracle 12c database via dNFS and also support HCC (Hybrid Columnar Compression).

Hopefully you have followed Part I of my Blog post on Getting Started with the Oracle ZFS Storage VMware Simulator and already have a working Virtual Machine; if not, now is probably a good time to visit Part I and build the simulator.

Oracle dNFS

If you have not set up or used Oracle dNFS before, you may want to check out a couple of my previous posts, Using Oracle dNFS Multi-Path and Oracle dNFS Throughput Testing, as these provide some useful background and additional information.

On my Oracle 12c Linux database server I have created a directory for the mount point and an oranfstab file in $ORACLE_HOME/dbs, thus:

server: zfsa
local: 192.168.56.10  path: 172.16.148.133
nfs_version: nfsv3
export: /export/oradata   mount: /u02/app/oracle/oradata

$ORACLE_HOME/dbs/oranfstab

The parameters used in my oranfstab are described below:

server
The NFS server name; this can be anything and is used for information rather than any network lookups.

local
Up to four paths on the database host, specified by IP address or by name.

path
Up to four network paths to the NFS server, specified either by IP address, or by name.

export
The exported path from the NFS server.

mount
The corresponding local mount point for the exported volume.

nfs_version
Specifies the NFS protocol version used by the Direct NFS Client. Possible values are NFSv3, NFSv4, NFSv4.1, and pNFS. The default version is NFSv3. If you select NFSv4.x, then you must configure the nfs_version value in oranfstab. Specify nfs_version as pNFS if you want to use Direct NFS with Parallel NFS.

Verify Oracle dNFS Usage

Restart your Oracle 12c database and view the Oracle alert log; if you have successfully configured your environment you should see Oracle reporting that the Oracle dNFS (ODM) library is in use, along with its version.

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

If you are not seeing any reference to the use of Oracle Direct NFS, you may need to enable it using the steps detailed here.
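For reference, on Linux the Direct NFS client is typically enabled by relinking the Oracle binary, e.g. (run as the oracle software owner with the database shut down):

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on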

We should also be able to see that we are using a single channel, the IP addresses, and the optional server name from our oranfstab.

Direct NFS: channel id [0] path [172.16.148.133] to filer [zfsa] via local [192.168.56.10] is UP

From the alert log file we can also confirm that Oracle is using our oranfstab file rather than /etc/mtab; if your oranfstab is not correct you will see 'defined in mtab' rather than 'defined in oranfstab', and it is time to go back and fix it.

Direct NFS: attempting to mount /export/oradata on filer zfsa defined in oranfstab

Additionally, you can query the database using the v$dnfs_servers view to confirm the configuration.

SQL> COLUMN svrname FORMAT A20
SQL> COLUMN dirname FORMAT A50
SQL> SELECT svrname, dirname, nfsversion FROM v$dnfs_servers;

SVRNAME         DIRNAME                    NFSVERSION
--------------- -------------------------- ----------
zfsa            /export/oradata            NFSv3.0

SNMP (Simple Network Management Protocol)

Finally, before we can start to use Oracle HCC (Hybrid Columnar Compression) within our Oracle 12c database we need to check that our simulator has been configured to use SNMP.

From our Oracle ZFS Storage VMware simulator check the status of services.

(Screenshot: ZFS appliance services status, SNMP enabled)

If the service shows green, use snmpget to check the response from our simulator, providing the IP address and the Oracle MIB (Management Information Base) shown below.

snmpget -v1 -c public 172.16.148.133 1.3.6.1.4.1.42.2.225.1.4.2.0

If all is OK, snmpget should return a Sun Storage string; this is required to allow us to use the HCC features, which are limited to Oracle Exadata and Oracle storage platforms.

SNMPv2-SMI::enterprises.42.2.225.1.4.2.0 = STRING: “Sun Storage 7000”

If you have successfully built your simulator and configured your Oracle 12c environment, you are now ready to try out some of the HCC features.

In the next Blog post we will attempt to create database objects within and outside of our Oracle ZFS Storage VMware simulator to check our Oracle 12c database is able to use HCC data.

Oracle 12c Multi-database refresh with Ansible

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c multiple databases clones with Ansible

Before we jump into the demonstration let me detail my architecture and explain what you are going to see, first the Architecture.

The Architecture

z-oracle will be used as my Ansible control machine.
z-oracle1 is where my production Oracle 12c database is hosted.
z-oracle2 through z-oracle7 are my development / test database servers.
FlashArray will be used to deliver the storage data services.

(Diagram: Ansible demo architecture)

The Ansible Playbook

The database_clone Ansible playbook performs the following steps:

  1. Perform a crash-consistent storage snapshot of the production database using the REST API. The storage snapshot is instant, has no impact on the running production database, and does not require any changes to the production server or database configuration.
  2. Shut down the non-production database(s) and unmount the database filesystems.
  3. Refresh the non-production volume(s) from the latest storage snapshot.
  4. Mount the database filesystems and start the database(s). At this point the non-production database(s) are exact copies of the production database, with the same name as production but consuming no space.
  5. Rename the non-production database, datafiles and directories.

If you have watched the video you will have seen that the 6 non-production databases have all been refreshed from production in less than 2 1/2 minutes from a single Ansible playbook.

Visit code.purestorage.com to see more Ansible examples and also find examples for other DevOps tools and languages including Python, PowerShell, Puppet…

Part 3: Oracle 12c Point-in-Time-Recovery

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c Point-in-Time-Recovery

This video starts the same as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts.
  • Housekeeping routines e.g. clean-up workflow / staging tables

The video shows me taking a crash-consistent snapshot using a Python script I have developed (available at code.purestorage.com) using the Pure Storage REST APIs.

The video then proceeds to perform further updates and changes, before simulating the loss of a datafile.

I then use another Python script to determine the last snapshot before shutting the database down and rolling back to the recovery point. Once the datafiles have been restored, RMAN is used to roll the database forward using the database log files, performing a PITR (Point-in-Time Recovery) with no data loss.
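The roll-forward step is standard media recovery; a minimal sketch of what it looks like (illustrative only, not the exact script used in the video):

RMAN> startup mount;
RMAN> recover database;
RMAN> alter database open;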

Part 2: Oracle 12c database rewind

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c database rewind

This video starts the same as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts.
  • Housekeeping routines e.g. clean-up workflow / staging tables

The video shows me taking a crash-consistent snapshot using a Python script I have developed (available at code.purestorage.com) using the Pure Storage REST APIs.

The video then proceeds to perform further updates and changes, before rewinding the database back to the last crash-consistent snapshot, enabling rapid, iterative testing.

Part 1: Oracle 12c database reset

In this series of Blog posts I will share short videos demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c Database Reset

The video starts by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts.
  • Housekeeping routines e.g. clean-up workflow / staging tables

The video shows me taking a crash-consistent snapshot using a Python script I have developed (available at code.purestorage.com) using the Pure Storage REST APIs.

The video then proceeds to perform further updates and changes, before resetting the database back to the initial clone, simulating an application test issue caused by problems with the initial set-up and configuration.