ODC Appreciation Day: International conferences and making new friends

Background

Back in 2016 Tim Hall, aka ‘ORACLE-BASE’, launched the first OTN (Oracle Technology Network) Appreciation Day; after the success of 2016 it was repeated in 2017 under the new moniker of ODC (Oracle Developer Community).

Oracle Developer Community 2018

My ODC 2018 Blog post this year is about the amazing global Oracle community and the various events put on across the planet.

As an Oracle ACE and speaker, I am fortunate to get the opportunity to visit some fantastic countries, cities and venues; to experience different cultures, try local foods and make new friends. Everywhere I go, the Oracle community is warm and welcoming, filled with passionate people who want to share experiences and ideas and learn new things.

This year I have created some amazing memories, some of the highlights include:

  • Skydiving at MadridFly
  • A tour of Jerusalem
  • Experiencing a genuine Finnish sauna and impromptu singing.
  • Drinking Polish vodka on a picture-perfect beach.

And many more. However, the best thing about attending international conferences is catching up with old friends and hopefully making new ones.

Looking forward to making some new memories and of course new friends at Oracle Open World in a couple of weeks. #SeeYouAtOpenWorld


On-line migration of Oracle 18c filesystem datafiles to Oracle ASM

Background

Prior to Oracle 12c, moving non-ASM data files required taking data files ‘offline’ and negotiating an outage with the business; since Oracle 12c this has no longer been the case.

I was recently challenged with moving a good-sized filesystem-based Oracle 18c database from an existing storage platform to a new storage array.


Prepare ASM Disk Groups

Use SQL*Plus to create the ASM disk group directories, e.g.
$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm

Disk Group +DATA

SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/DATAFILE';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/CHANGETRACKING';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/TEMPFILE';

Disk Group +CONTROL_REDO

SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/CONTROLFILE';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/ONLINELOG';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/PARAMETERFILE';

Disk Group +FRA

SQL> ALTER DISKGROUP FRA ADD DIRECTORY '+FRA/PSTG';
SQL> ALTER DISKGROUP FRA ADD DIRECTORY '+FRA/PSTG/ARCHIVELOG';

Identify Datafiles

column dname heading 'Data File Name' format a45
column filename heading 'BCT File Name' format a45
column tablespace_name heading 'Tablespace Name' format a25
column data_mb heading 'Data MB' format a10

select file_name as dname, tablespace_name, to_char(bytes/1024/1024,'999,999') as data_mb from dba_data_files;

Data File Name                                Tablespace Name           Data MB
--------------------------------------------- ------------------------- --------
/u02/oradata/PSTG/system01.dbf                SYSTEM                    880
/u02/oradata/PSTG/sysaux01.dbf                SYSAUX                    990
/u02/oradata/PSTG/sh.dbf                      SH                        512,000
/u02/oradata/PSTG/users01.dbf                 USERS                     5
/u02/oradata/PSTG/soe.dbf                     SOE                       512,000
/u02/oradata/PSTG/undotbs01.dbf               UNDOTBS1                  400

On-line Datafile move

I have previously blogged on using the Oracle 12c ALTER DATABASE MOVE DATAFILE command, and you can see the full syntax at the link above.

SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/system01.dbf' TO '+DATA/PSTG/DATAFILE/system01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/sysaux01.dbf' TO '+DATA/PSTG/DATAFILE/sysaux01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/sh.dbf' TO '+DATA/PSTG/DATAFILE/sh.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/soe.dbf' TO '+DATA/PSTG/DATAFILE/soe.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/users01.dbf' TO '+DATA/PSTG/DATAFILE/users01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/undotbs01.dbf' TO '+DATA/PSTG/DATAFILE/undotbs01.dbf';

Monitor Move Progress

Below is a query which uses the V$SESSION_LONGOPS view to report the progress of online data file move operations.

column st heading 'Start Time' format a25
column time_remaining heading 'Time|Remaining'
column elapsed_seconds heading 'Elapsed|Seconds'
column sofar heading 'Sofar' format a10
column total heading 'Total' format a10
column progress heading 'Progress' format a10
column message heading 'Message' format a100

select to_char(start_time,'DD-MON-YY hh24:mi:ss') as st,
       time_remaining,
       elapsed_seconds,
       to_char(sofar/1024/1024,'999,999') || 'MB' as sofar,
       to_char(totalwork/1024/1024,'999,999') || 'MB' as total,
       to_char((sofar/totalwork)*100,'999') || '%' as progress,
       message
from   v$session_longops;

Example Output

09-AUG-18 14:58:33 0 7 880MB 880MB 100%
Online data file move: data file 1: 922746880 out of 922746880 bytes done

09-AUG-18 15:01:07 0 7 990MB 990MB 100%
Online data file move: data file 3: 1038090240 out of 1038090240 bytes done

09-AUG-18 15:06:34 0 2767 512,000MB 512,000MB 100%
Online data file move: data file 5: 536870912000 out of 536870912000 bytes done

09-AUG-18 15:57:07 2757 38 6,962MB 512,000MB 1%
Online data file move: data file 2: 7300186112 out of 536870912000 bytes done
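As a sanity check, the Sofar, Total and Progress columns in the last row above are just the raw byte counts scaled and divided; a quick sketch of the arithmetic in Python:

```python
# Byte counts taken from the final V$SESSION_LONGOPS row above
sofar, total = 7_300_186_112, 536_870_912_000

# Same arithmetic the query performs: bytes -> MB, and SOFAR/TOTALWORK as a percentage
print(f"{sofar / 1024 / 1024:,.0f}MB of {total / 1024 / 1024:,.0f}MB "
      f"({sofar / total * 100:.0f}%)")
# → 6,962MB of 512,000MB (1%)
```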

Updated Data file locations

We can check the files have been relocated using the same query we ran earlier to identify the datafiles.

select file_name as dname, tablespace_name, to_char(bytes/1024/1024,'999,999') as data_mb from dba_data_files;

Data File Name                                Tablespace Name           Data MB
--------------------------------------------- ------------------------- --------
+DATA/PSTG/DATAFILE/system01.dbf              SYSTEM                    880
+DATA/PSTG/DATAFILE/sysaux01.dbf              SYSAUX                    1,040
+DATA/PSTG/DATAFILE/sh.dbf                    SH                        512,000
+DATA/PSTG/DATAFILE/users01.dbf               USERS                     5
+DATA/PSTG/DATAFILE/soe.dbf                   SOE                       512,000
+DATA/PSTG/DATAFILE/undotbs01.dbf             UNDOTBS1                  400

Creating new Oracle 18c ASM Disk Groups

Creating New Volumes

Create the new ASM volume(s) using your storage platform; for this blog I have created 3 volumes on a Pure Storage FlashArray, simply providing names and sizes.

Create new ASM Volume(s)

Note the volume serial numbers; these will be used in the multipath configuration wwid (World Wide Identifier).

New ASM volumes

SCSI storage device serial numbers are universally unique. Each one starts with a 36-bit (9 hexadecimal digit) vendor ID. Because the vendor ID is always the same for Pure Storage FlashArray volumes (3624a9370), it is omitted from the CLI and GUI displays.
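Putting the two parts together, the wwid used in the multipath configuration below is just the vendor ID concatenated with the volume serial, lowercased; a quick sketch:

```python
PURE_VENDOR_ID = "3624a9370"  # constant vendor ID for Pure Storage FlashArray volumes

def build_wwid(serial: str) -> str:
    """multipath wwid = vendor ID + volume serial number, all lowercase."""
    return (PURE_VENDOR_ID + serial).lower()

# Serial number as displayed by the array CLI/GUI (vendor ID omitted)
print(build_wwid("FF551CC46AEE54BF9000126AB"))
# → 3624a9370ff551cc46aee54bf9000126ab
```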

Linux Configuration

Create multipath entries in '/etc/multipath.conf' for the 3 new volumes. In the examples below the wwid (World Wide Identifier) is set to vendor ID + serial number, e.g. '3624a9370' + 'ff551cc46aee54bf9000126ab'.
Note, the 'wwid' needs to be in lowercase, and the 'alias' name for ASM disks needs to be less than 30 characters, alphanumeric, and may only use the '_' special character.
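My multipath.conf appears only as a screenshot, but an entry along these lines would satisfy the note above (the alias name here is my own illustrative choice, not a requirement):

```
multipaths {
    multipath {
        wwid   3624a9370ff551cc46aee54bf9000126ab
        alias  pure_data01
    }
    # ...one multipath stanza per volume...
}
```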


Update multipath Configuration

Flush unused multipath devices

# multipath -F

Reload multipath configuration:

# service multipathd reload

Load and display multipath configuration, device mapper and other components

# multipath -v2
# multipath -ll

Oracle ASM Disk Discovery

Before we update our UDEV rules, let's check our Oracle ASM disk discovery string 'ASM_DISKSTRING'.

Using asmcmd

ASMCMD> dsget
parameter:/u01/shared_config/ocr_configuration,  /dev/pureasm
profile:/u01/shared_config/ocr_configuration, /dev/pureasm

Using sqlplus

SQL> show parameter ASM_DISKSTRING;

asm_diskstring string /u01/shared_config/ocr_configuration, /dev/pureasm

We can now edit or create a UDEV rules file to reflect the ASM discovery string destination in the SYMLINK prefixes.

/etc/udev/rules.d/99-oracle-asmdevices.rules

Reload the UDEV rules as the root user using 'udevadm trigger' to set ownership and create links in the ASM disk discovery directory.
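The rules file itself is shown only as a screenshot, but a typical entry for a multipath device looks something like the following (the DM_NAME value must match your multipath alias; the symlink directory matches the /dev/pureasm discovery string; the grid/asmadmin ownership reflects my environment and is an assumption, not a fixed requirement):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_NAME}=="pure_data01", OWNER:="grid", GROUP:="asmadmin", MODE:="0660", SYMLINK+="pureasm/pure_data01"
```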

ls -la /dev/pureasm

We are now ready to create the ASM disk groups using SQL*Plus or the Oracle ASM Configuration Assistant 'asmca' UI.

I repeated this 3 times, creating +DATA, +FRA and +CONTROL_REDO, which I will use in my next blog post, where I plan to share how to migrate from a filesystem to Oracle ASM.

asmca

Oracle 18c disable / enable Archive Log Mode

Disable Archive Log Mode

[oracle@z-fbhost2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 – Production on Wed Aug 8 10:17:47 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to an idle instance.

# Start the database in mount state
SQL> startup mount
ORACLE instance started.

Total System Global Area 1.4227E+11 bytes
Fixed Size 29900024 bytes
Variable Size 2.2549E+10 bytes
Database Buffers 1.1919E+11 bytes
Redo Buffers 506970112 bytes
Database mounted.

# Check archive log status
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11646
Next log sequence to archive 11646
Current log sequence 11647

# Disable archive log mode
SQL> alter database noarchivelog;

Database altered.

# Open Database
SQL> alter database open;

Database altered.

# Confirm archive log status
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11647
Current log sequence 11648

SQL> show parameter log_archive_start

NAME TYPE VALUE
———————————— ———– ——————————
log_archive_start boolean FALSE

Enable Archive Log Mode

Shutdown the database using 'shutdown immediate' rather than 'abort' to avoid "ORA-00265: instance recovery required, cannot set ARCHIVELOG mode".

[oracle@z-fbhost2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 – Production on Wed Aug 8 10:31:01 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to an idle instance.

# Startup in mount state
SQL> startup mount
ORACLE instance started.

Total System Global Area 1.4227E+11 bytes
Fixed Size 29900024 bytes
Variable Size 2.2549E+10 bytes
Database Buffers 1.1919E+11 bytes
Redo Buffers 506970112 bytes
Database mounted.

# Check archive log status
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11648
Current log sequence 11649

# Enable archive log
SQL> alter database archivelog;

Database altered.

# Open database
SQL> alter database open;

Database altered.

# Check archive log status
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 11648
Next log sequence to archive 11649
Current log sequence 11649

Oracle 18c install DBCA failure – PRCR-1006 PRCR-1071 CRS-2566

Background

During my first Oracle 18c install I hit an installation issue with Oracle DBCA (Database Configuration Assistant) failing.

After looking at the installation log file "/u01/app/oraInventory/logs/InstallActions2018-07-26_12-18-47PM/installActions2018-07-26_12-18-47PM.log" I could see the following:

PRCR-1006 : Failed to add resource ora.cdb1.db for cdb1
PRCR-1071 : Failed to register or update resource ora.cdb1.db
CRS-2566: User ‘oracle’ does not have sufficient permissions to operate on resource ‘ora.driver.afd’, which is part of the dependency specification.

Issue

The 'oracle' user's primary group (dba) does not have privileges on the ora.driver.afd resource:

[oracle@z-oracle1 ~]$ id grid
uid=54331(grid) gid=1001(dba) groups=1001(dba),54321(oinstall),54333(asmdba),54334(asmoper),54335(asmadmin),54336(racdba)


Fix

Change the AFD driver resource's group from the ASM administration group to the database user group, i.e. from 'asmadmin' to 'dba'.

[grid@z-oracle1 ~]$ crsctl stat res ora.driver.afd -p

ACL=owner:grid:rwx,pgrp:asmadmin:r-x,other::r--,user:grid:r-x

[grid@z-oracle1 ~]$ crsctl modify resource ora.driver.afd -attr "ACL='owner:grid:rwx,pgrp:dba:r-x,other::r--,user:grid:r-x'" -init

[grid@z-oracle1 ~]$ crsctl stat res ora.driver.afd -p

ACL=owner:grid:rwx,pgrp:dba:r-x,other::r--,user:grid:r-x...

Once corrected, I was able to resume the installation without experiencing any further issues.

Hope this helps.

Oracle 18c Grid Infrastructure installation

Preparation

[root@z-oracle1 ~]# cd /u01/app/18.0.0/grid/cv/rpm
[root@z-oracle1 rpm]# ls
cvuqdisk-1.0.10-1.rpm

[root@z-oracle1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing… ################################# [100%]
Updating / installing…
1:cvuqdisk-1.0.10-1 ################################# [100%]

18c Grid Infrastructure installation

Confirm ‘grid’ owner groups.

[grid@z-oracle1 grid]$ id grid

uid=54331(grid) gid=1001(dba) groups=1001(dba),54321(oinstall),54333(asmdba),54334(asmoper),54335(asmadmin),54336(racdba)

Create the 18c Grid Infrastructure Oracle Base, Home and admin directories, e.g.

[grid@z-oracle1 ~]$ mkdir -p /u01/app/18.0.0/grid
[grid@z-oracle1 ~]$ mkdir /u01/app/grid

Launch the installer from a terminal capable of starting an X session.

[grid@z-oracle1 grid]$ cd $GRID_HOME
[grid@z-oracle1 grid]$ ./gridSetup.sh

Select Clustered, non-clustered or software only installation

Step 1

Update the ASM disk discovery path (if required), and select disk path for your +DATA disk group.

Before I started the 18c installer I created 3 Pure Storage volumes and configured UDEV rules to help identification. I can now select my 1TB +DATA volume and set the Redundancy to ‘External’ as the storage platform will provide the required data protection.

Step 2

Provide ASM passwords and click ‘Next’

Step 3

Register with Oracle Enterprise Manager, I will skip this for now, ‘Next’

Step 4

Confirm Oracle ASM ‘grid’ account groups, click ‘Next’

Step 5

Confirm pre-created Oracle Base location.

Step 6

Select ‘Next’ and run root.sh in a separate terminal window when prompted

Step 7

Review 18c Grid Infrastructure summary and click ‘Install’ to confirm installation

Step 9

Run root.sh from root window.

root.sh

Wait for the script to start the 18c Grid Infrastructure services, once complete click ‘OK’.

Step 11

If all has gone well, you should be presented with a ‘successful’ installation screen.

Now, let's take our first look at an 18c ASM instance; if you are observant you will see I have created 2 additional ASM disk groups, +FRA and +CONTROL_REDO.

I will check my previous blogs and, if missing, will share how you can create ASM disk groups using ASMCMD or ASMCA (UI).


Come back in a few days to see my first 18c database Blog post.

Enterprise Manager Cloud Control 13c (13.3) deploy Plug-Ins

At the beginning of July Oracle released Enterprise Manager Cloud Control 13.3 so I thought it was a good time to see what has changed and also document how to import an OEM Plug-In.

To get started, I created a new Oracle 12.2 database and then installed OEM 13.3 following the steps produced by my good friend Tim Hall, aka Oracle-Base.

Download Plug-Ins

A good place to get Oracle Enterprise Manager Plug-Ins is the Oracle Extensibility Exchange; here you will find many OEM Plug-Ins which have been developed by Oracle Partners and uploaded for use by the Oracle community.


Now search for the required OEM Plug-In.


Once downloaded, transfer the file to your OEM server ready for importing.

Import the Plug-In

Logon to Enterprise Manager from the command line:

[oracle@z-oem 13c]$ emcli login -username=sysman
Enter password 
Login successful

Use emcli to import the local opar file, e.g.

[oracle@z-oem 13c]$ emcli import_update -file="/u01/app/oracle/13c/13.2.0.1.0_vlss.pure.xvp1_2000_0.opar" -omslocal
Processing update: Plug-in – Enterprise Manager for Pure Storage Flash Array
Successfully uploaded the update to Enterprise Manager. Use the Self Update Console to manage this update.

Deploy Plug-In

From the main screen, select the Setup Cog, then select Extensibility and then Plug-ins.


From the Plug-ins page, select the 'Pure Storage FlashArray' plug-in in the 'Servers, Storage and Network' section.

Then select Deploy On and choose Management Servers.


If the prerequisite checks pass, you can continue with the Plug-In workflow.

Provide sys credentials for the OEM repository, and then click ‘Next’


At step 4, 'Review', hit 'Deploy' to initiate the deployment process.


Now Click on Status to monitor progress of the deployment.


If the Plug-In deploys successfully you will see the ‘Status’ change to a green tick.


You are now ready to configure and use the newly installed OEM 13c Plug-In.

Creating Oracle HCC (Hybrid Columnar Compression) on your Laptop

In this blog we will take a newly created Oracle ZFS VMware Simulator out for a test ride by trying to create some Oracle HCC (Hybrid Columnar Compression) data.

If you don’t have access to an Oracle storage platform, you may want to visit my previous posts in the series, Part I or Part II.

Two types of Oracle HCC Compression

Warehouse (Query) compression

For warehouse compression, the compression algorithm has been optimized for query performance, specifically for scan-oriented queries used heavily in data warehouses. This approach is ideal for tables that will be queried frequently.

Archive compression

With archive compression, the compression algorithm has been optimized for maximum storage savings. This approach is ideal for tables that are infrequently accessed.

Note that for compressing or decompressing data, archive compression may consume a significant amount of CPU compared to warehouse compression.

Achievable compression ratios vary depending upon the compression type and customer data, however Oracle reports seeing the following ratios:

  • Query High ~10:1
  • Query Low ~6:1
  • Archive Low ~10:1
  • Archive High ~15:1

Let’s see how we get on.

The Test

For this test I am going to use Dominic Giles’ SwingBench ‘customers’ table from the Sales Order Entry (SOE) schema as the source of my test data. The SwingBench wizard can be accessed from the Java application or the command line; as you can see below, I will be using the CLI interface.

~/swingbench/bin/oewizard -allindexes -cl -create -cs //localhost/PSTG -dba system -dbap oracle -ts soe -nopart -p soe -scale 4 -tc 8 -u soe -v

Create Test Tables

CONNECT pure/pure;

Prompt No Compression
create table customers as select * from soe.customers;
commit;

Prompt Basic Method
create table customers_classic compress as select * from customers where 1=2;
alter table customers_classic nologging;
insert /*+ append */ into customers_classic select * from customers;
commit;

Prompt OLTP Method
create table customers_oltp compress for oltp as select * from customers where 1=2;
insert into customers_oltp select * from customers;
commit;

Prompt Query Low
create table customers_query_low compress for query low as select * from customers where 1=2;
alter table customers_query_low nologging;
insert /*+ append */ into customers_query_low select * from customers;
commit;

Prompt Query High
create table customers_query_high compress for query high as select * from customers where 1=2;
alter table customers_query_high nologging;
insert /*+ append */ into customers_query_high select * from customers;
commit;

Prompt Archive Low
create table customers_archive_low compress for archive low as select * from customers where 1=2;
alter table customers_archive_low nologging;
insert /*+ append */ into customers_archive_low select * from customers;
commit;

Prompt Archive High
create table customers_archive_high compress for archive high as select * from customers where 1=2;
alter table customers_archive_high nologging;
insert /*+ append */ into customers_archive_high select * from customers;
commit;

If you experience the infamous ORA-64307 error message and you are using the Oracle ZFS Simulator, revisit my Part I & Part II blogs and check your configuration.


Results

Ok, let’s see what compression ratios we achieved.

set feed off
set pages 100
col TABLE_NAME heading 'Table Name' format a25
col OWNER heading 'Owner' format a20
col COMPRESS_FOR heading 'Compress For' format a20
col COMPRESSION heading 'Compression' format a20
set lines 100
set echo off
set timing on
Prompt Tables using compression

SELECT DS.SEGMENT_NAME, DA.COMPRESSION, DA.COMPRESS_FOR, SUM(DS.BYTES)/1024/1024 MB
FROM DBA_SEGMENTS DS, DBA_TABLES DA
WHERE DA.OWNER='PURE'
AND DS.OWNER = DA.OWNER
AND DS.SEGMENT_NAME = DA.TABLE_NAME
GROUP BY DS.SEGMENT_NAME, DA.COMPRESSION, DA.COMPRESS_FOR;


The achieved ratios are maybe not as high as expected, but the table was pretty small (512MB), so my next project will be to repeat the tests with a larger data set and compare results.

However, the compression ratios are secondary; what this test did demonstrate is the ability to experiment with and test HCC data without access to an Oracle Engineered solution.

Table Name                 Achieved Ratio
-------------------------  --------------
CUSTOMERS                  1 : 1
CUSTOMERS_CLASSIC          1.31 : 1
CUSTOMERS_OLTP             1.10 : 1
CUSTOMERS_QUERY_LOW        2.29 : 1
CUSTOMERS_QUERY_HIGH       3.37 : 1
CUSTOMERS_ARCHIVE_LOW      4.00 : 1
CUSTOMERS_ARCHIVE_HIGH     5.33 : 1
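The ratios above are simply the uncompressed table size divided by each compressed segment size; a quick sketch of the arithmetic, using the 512MB uncompressed CUSTOMERS table and a hypothetical 96MB compressed copy:

```python
def achieved_ratio(uncompressed_mb: float, compressed_mb: float) -> str:
    """Express a compression ratio in the 'N.NN : 1' style used above."""
    return f"{uncompressed_mb / compressed_mb:.2f} : 1"

# Hypothetical: a 512MB table compressed down to 96MB
print(achieved_ratio(512, 96))
# → 5.33 : 1
```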

Part 3: Oracle 12c Point-in-Time-Recovery

In this blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c Point-in-Time-Recovery

This video starts the same as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts.
  • Housekeeping routines e.g. clean-up workflow / staging tables

The video shows me taking a crash-consistent snapshot using a python script I have developed (available at code.purestorage.com) using the Pure Storage REST API’s.
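The script itself lives at code.purestorage.com, but the core of a crash-consistent snapshot request against the FlashArray REST API can be sketched roughly as follows (the array address, API version and session-cookie handling here are illustrative assumptions, not the script's actual code):

```python
import json
import urllib.request

ARRAY = "https://flasharray.example.com"  # hypothetical array address
API_VERSION = "1.12"                      # assumed FlashArray REST API version

def snapshot_payload(volumes, suffix):
    """Build the POST body asking the array to snapshot each listed volume."""
    return {"snap": True, "source": list(volumes), "suffix": suffix}

def take_snapshot(volumes, suffix, session_cookie):
    # POST /api/<version>/volume with snap=true snapshots all listed volumes
    # atomically, which is what makes the copy crash-consistent.
    req = urllib.request.Request(
        f"{ARRAY}/api/{API_VERSION}/volume",
        data=json.dumps(snapshot_payload(volumes, suffix)).encode(),
        headers={"Content-Type": "application/json", "Cookie": session_cookie},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```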

The video then proceeds to perform further updates and changes, before simulating the loss of a datafile.

I then use another python script to determine the last snapshot, before shutting the database down and rolling back to the recovery point. Once the datafiles have been restored, RMAN is used to roll the database forward using the database log files, performing a PITR (Point-in-Time Recovery) with no data loss.

Part 2: Oracle 12c database rewind

In this blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and the Pure Storage REST APIs.

Oracle 12c database rewind

This video starts the same as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts.
  • Housekeeping routines e.g. clean-up workflow / staging tables

The video shows me taking a crash-consistent snapshot using a python script I have developed (available at code.purestorage.com) using the Pure Storage REST API’s.

The video then proceeds to perform further updates and changes, before rewinding the database back to the last crash-consistent snapshot, enabling rapid, iterative testing.