Creating Oracle HCC (Hybrid Columnar Compression) on your Laptop

In this Blog post we will take our newly created Oracle ZFS Storage VMware Simulator for a test ride by creating some Oracle HCC (Hybrid Columnar Compression) data.

If you don’t have access to an Oracle storage platform, you may want to visit my previous posts in the series, Part I or Part II.

Two types of Oracle HCC Compression

Warehouse (Query) compression

For warehouse compression, the compression algorithm has been optimized for query performance, specifically for scan-oriented queries used heavily in data warehouses. This approach is ideal for tables that will be queried frequently.

Archive compression

With archive compression, the compression algorithm has been optimized for maximum storage savings. This approach is ideal for tables that are infrequently accessed.

Note that for compressing or decompressing data, archive compression may consume a significant amount of CPU compared to warehouse compression.

Achievable compression ratios vary depending upon the compression type and customer data; however, Oracle reports seeing the following ratios:

  • Query High ~10:1
  • Query Low ~6:1
  • Archive Low ~10:1
  • Archive High ~15:1
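To put those ratios in context, an N:1 compression ratio corresponds to a space saving of (1 − 1/N). A quick sketch (plain arithmetic, no assumptions beyond the ratios quoted above):

```shell
# Convert a compression ratio (N:1) into the percentage of space saved.
for ratio in 6 10 15; do
  awk -v r="$ratio" 'BEGIN { printf "%2d:1 ratio saves %.1f%% of space\n", r, (1 - 1/r) * 100 }'
done
# prints:
#  6:1 ratio saves 83.3% of space
# 10:1 ratio saves 90.0% of space
# 15:1 ratio saves 93.3% of space
```

So even the ‘lowest’ quoted ratio (Query Low at ~6:1) already reclaims over 80% of the space.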

Let’s see how we get on.

The Test

For this test I am going to use the Dominic Giles SwingBench ‘customers’ table from the Sales Order Entry (SOE) schema as the source of my test data. The SwingBench wizard can be accessed from the Java application or the command line; as you can see below, I will be using the CLI.

~/swingbench/bin/oewizard -allindexes -cl -create -cs //localhost/PSTG -dba system -dbap oracle -ts soe -nopart -p soe -scale 4 -tc 8 -u soe -v

Create Test Tables

CONNECT pure/pure;

Prompt No Compression
create table customers as select * from soe.customers;

Prompt Basic Method
create table customers_classic compress as select * from customers where 1=2;
alter table customers_classic nologging;
insert /*+ append */ into customers_classic select * from customers;

Prompt OLTP Method
create table customers_oltp compress for oltp as select * from customers where 1=2;
insert into customers_oltp select * from customers;

Prompt Query Low
create table customers_query_low compress for query low as select * from customers where 1=2;
alter table customers_query_low nologging;
insert /*+ append */ into customers_query_low select * from customers;

Prompt Query High
create table customers_query_high compress for query high as select * from customers where 1=2;
alter table customers_query_high nologging;
insert /*+ append */ into customers_query_high select * from customers;

Prompt Archive Low
create table customers_archive_low compress for archive low as select * from customers where 1=2;
alter table customers_archive_low nologging;
insert /*+ append */ into customers_archive_low select * from customers;

Prompt Archive High
create table customers_archive_high compress for archive high as select * from customers where 1=2;
alter table customers_archive_high nologging;
insert /*+ append */ into customers_archive_high select * from customers;

If you experience the infamous ORA-64307 error message and you are using the Oracle ZFS Simulator, revisit my Part I & Part II Blogs and check your configuration.



Ok, let’s see what compression ratios we achieved.

set feed off
set pages 100
col TABLE_NAME heading 'Table Name' format a25
col OWNER heading 'Owner' format a20
col COMPRESS_FOR heading 'Compress For' format a20
col COMPRESSION heading 'Compression' format a20
set lines 100
set echo off
set timing on
Prompt Tables using compression



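The SQL*Plus column formatting above suggests a dictionary query along these lines (a sketch; the exact query used in the original post was lost, and the CUSTOMERS% name filter is an assumption based on the test tables created earlier):

```sql
-- List the test tables together with their compression settings.
SELECT owner, table_name, compression, compress_for
FROM   dba_tables
WHERE  table_name LIKE 'CUSTOMERS%'
ORDER  BY table_name;

-- Compare segment sizes to derive the achieved ratio,
-- using the uncompressed CUSTOMERS table as the baseline.
SELECT segment_name, ROUND(bytes / 1024 / 1024) AS size_mb
FROM   user_segments
WHERE  segment_name LIKE 'CUSTOMERS%';
```

Dividing the baseline CUSTOMERS size by each compressed table's size gives the achieved ratio for that compression type.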
The achieved ratios are maybe not as high as expected, but the table was pretty small at 512MB, so my next project will be to repeat the tests with a larger data set and compare results.

However, the compression ratios are secondary; what this test did demonstrate is the ability to experiment and test out HCC data without access to an Oracle Engineered System.


Getting started with the Oracle ZFS Storage VMware Simulator (Part II)

In this Blog post I will share how your newly created Oracle ZFS Storage VMware Simulator can be used to deliver storage for an Oracle 12c database via dNFS and also support HCC (Hybrid Columnar Compression).

Hopefully you have followed Part I of my Blog post on Getting Started with the Oracle ZFS Storage VMware Simulator and already have a working Virtual Machine; if not, now is probably a good time to visit Part I and build the simulator.

Oracle dNFS

If you have not set up or used Oracle dNFS before, you may want to check out a couple of my previous posts on Oracle dNFS, Using Oracle dNFS Multi-Path and also Oracle dNFS Throughput Testing, as these provide some useful background and additional information.

On my Oracle 12c Linux database server I have created a directory for the mount point and an oranfstab file in $ORACLE_HOME/dbs, as follows:

server: zfsa
local:  path:
nfs_version: nfsv3
export: /export/oradata   mount: /u02/app/oracle/oradata
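For reference, a fully populated oranfstab might look like the following (the IP addresses are hypothetical placeholders for a lab network; your values will differ):

```
server: zfsa
local: 192.168.56.1   path: 192.168.56.101
nfs_version: nfsv3
export: /export/oradata   mount: /u02/app/oracle/oradata
```

Here `local` is an address on the database host and `path` is the address of the ZFS simulator; up to four local/path pairs can be listed for multi-path.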


The parameters used in my oranfstab are described below:

server: The NFS server name; this can be anything and is used for information rather than any network lookups.

local: Up to four paths on the database host, specified by IP address or by name.

path: Up to four network paths to the NFS server, specified either by IP address or by name.

export: The exported path from the NFS server.

mount: The corresponding local mount point for the exported volume.

nfs_version: Specifies the NFS protocol version used by Direct NFS Client. Possible values are NFSv3, NFSv4, NFSv4.1, and pNFS. The default version is NFSv3. If you select NFSv4.x, then you must configure the value in oranfstab for nfs_version. Specify nfs_version as pNFS if you want to use Direct NFS with Parallel NFS.

Verify Oracle dNFS Usage

Restart your Oracle 12c database and view the Oracle alert log; if you have successfully configured your environment you should see Oracle reporting that the Oracle dNFS library is in use, along with its version.

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0

If you are not seeing any reference to the use of Oracle Direct NFS you may need to enable it using the steps detailed here.

We should also be able to see that we are using a single channel, IP addresses and the optional server name from our oranfstab.

Direct NFS: channel id [0] path [] to filer [zfsa] via local [] is UP

From the alert log file we can also confirm that Oracle is using our oranfstab file rather than /etc/mtab; if your oranfstab is not correct you will see ‘defined in mtab’ rather than ‘defined in oranfstab’, in which case it is time to go back and fix it.

Direct NFS: attempting to mount /export/oradata on filer zfsa defined in oranfstab

Additionally, you can query the v$dnfs_servers view to confirm the configuration.

SQL> SELECT svrname, dirname, nfsversion FROM v$dnfs_servers;

SVRNAME         DIRNAME              NFSVERSION
--------------- -------------------- ----------
zfsa            /export/oradata      NFSv3.0

SNMP (Simple Network Management Protocol)

Finally, before we can start to use Oracle HCC (Hybrid Columnar Compression) within our Oracle 12c database we need to check that our simulator has been configured to use SNMP.

From our Oracle ZFS Storage VMware simulator check the status of services.


And if green, use snmpget to check the response from our simulator, providing the IP address and the Oracle MIB (Management Information Base) OID as below.

snmpget -v1 -c public

If all is OK, snmpget should return a Sun Storage string; this is required to allow us to use the HCC features, which are limited to Oracle Exadata and Oracle storage platforms such as the ZFS Storage Appliance.

SNMPv2-SMI::enterprises. = STRING: “Sun Storage 7000”

If you have successfully built your simulator and configured your Oracle 12c environment, you are now ready to try out some of the HCC features.

In the next Blog post we will attempt to create database objects within and outside of our Oracle ZFS Storage VMware simulator to check our Oracle 12c database is able to use HCC data.

Oracle 12c Multi-database refresh with Ansible

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management tasks using simple scripting, DevOps automation tools and Pure Storage REST APIs.

Oracle 12c multiple databases clones with Ansible

Before we jump into the demonstration let me detail my architecture and explain what you are going to see, first the Architecture.

The Architecture

z-oracle will be used as my Ansible control machine.
z-oracle1 is where my production Oracle 12c database is hosted.
z-oracle2 through z-oracle7 are my development / test database servers.
FlashArray will be used to deliver the storage data services.

The Ansible Playbook

The database_clone Ansible playbook performs the following steps:

  1. Performs a crash-consistent storage snapshot of the production database using the REST API.
     The storage snapshot is instant, has no impact on the running production database, and does not require any changes to the production server or database configuration.
  2. Shuts down the non-production database(s) and unmounts the database filesystems.
  3. Refreshes the non-production volume(s) from the latest storage snapshot.
  4. Mounts the database filesystems and starts the database(s).
     At this point the non-production database(s) are exact copies of the production database, with the same name as production but consuming no space.
  5. Renames the non-production database, datafiles and directories.
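The steps above can be sketched as a playbook along the following lines (a hedged illustration only: the host group, helper script names and paths are hypothetical stand-ins, not the exact playbook used in the demo):

```yaml
---
# Illustrative sketch of the database_clone playbook flow.
- name: Refresh non-production databases from a production snapshot
  hosts: nonprod_db_servers          # hypothetical inventory group
  tasks:
    - name: Take a crash-consistent snapshot of the production volumes (REST API)
      command: /opt/scripts/fa_snapshot.py --volume prod-oradata   # hypothetical helper
      delegate_to: localhost
      run_once: true

    - name: Shut down the non-production database
      command: /opt/scripts/db_shutdown.sh                         # hypothetical wrapper

    - name: Unmount the database filesystems
      command: umount /u02/oradata

    - name: Refresh the non-production volume from the latest snapshot (REST API)
      command: /opt/scripts/fa_restore.py --volume dev-oradata     # hypothetical helper
      delegate_to: localhost

    - name: Mount the database filesystems (entry assumed in /etc/fstab)
      command: mount /u02/oradata

    - name: Start and rename the non-production database
      command: /opt/scripts/db_rename.sh                           # hypothetical wrapper
```

Because the snapshot and restore calls are delegated to the control machine, the production database server itself needs no agents or configuration changes.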

If you have watched the video you will have seen that the 6 non-production databases have all been refreshed from production in less than 2 1/2 minutes from a single Ansible playbook.

Visit the code repository to see more Ansible examples, and also find examples for other DevOps tools and languages including Python, PowerShell, Puppet…

Part 3: Oracle 12c Point-in-Time-Recovery

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and Pure Storage REST APIs.

Oracle 12c Point-in-Time-Recovery

This video starts the same as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts.
  • Housekeeping routines e.g. clean-up workflow / staging tables

The video shows me taking a crash-consistent snapshot using a python script I have developed with the Pure Storage REST APIs.

The video then proceeds to perform further updates and changes, before simulating the loss of a datafile.

I then use another python script to determine the last snapshot before shutting the database down and rolling back to the recovery point. Once the datafiles have been restored, RMAN is used to roll the database forward using the database log files, performing a PITR (Point-in-Time Recovery) with no data loss.

Part 2: Oracle 12c database rewind

In this Blog post I will share another short video demonstrating how storage data services can be used to perform many common data management and CDM (Copy Data Management) tasks using simple scripting, DevOps automation tools and Pure Storage REST APIs.

Oracle 12c database rewind

This video starts the same as Part 1: Oracle 12c database reset, by refreshing an Oracle 12c database, and then simulates a typical development workflow process, where development databases are refreshed from production and then updated with:

  • New application configuration and setup
  • Additional synthetic and manufactured data to supplement production data sets
  • Data masking and cleansing scripts.
  • Housekeeping routines e.g. clean-up workflow / staging tables

The video shows me taking a crash-consistent snapshot using a python script I have developed with the Pure Storage REST APIs.

The video then proceeds to perform further updates and changes, before rewinding the database back to the last crash-consistent snapshot enabling rapid, iterative testing.

Oracle dNFS Throughput testing

Oracle Direct NFS

In my previous Oracle dNFS Blog post I detailed how to configure Oracle 12cR2 to use Oracle dNFS multi-path, in this Blog I will share some test results that I achieved in my Oracle 12cR2 Pure Storage FlashBlade lab.

Oracle dNFS Flashblade configuration

Theoretical Limits

A Gigabit Ethernet network can transmit 1,000,000,000 bits per second; as we normally talk about bytes per second, we need to divide this by 8, as there are 8 bits in a byte.

1GbE = (1,000,000,000 / 8) = 125,000,000 bytes/second

Unfortunately we cannot use all 125,000,000 bytes/second to send data, as we have multiple layers of overhead. When using a default MTU of 1500 we can expect ~6% overhead, which gives us:

1GbE= (1,000,000,000 / 8) less 6% overhead = 117,500,000 bytes/second

1 x  1GbE =   0.118GB/s
1 x 10GbE = 1.175GB/s
4 x 10GbE = 4.7GB/s
8 x 10GbE = 9.4GB/s  (2 Node RAC Cluster)
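The figures above can be reproduced with a one-liner (plain arithmetic, no assumptions beyond the ~6% MTU 1500 overhead estimate used above):

```shell
# Usable GB/s = links * link_speed_bits/s * 94% usable / 8 bits-per-byte / 1e9
for links in 1 4 8; do
  awk -v n="$links" 'BEGIN { printf "%d x 10GbE = %.3f GB/s\n", n, n * 10e9 * 0.94 / 8 / 1e9 }'
done
# prints:
# 1 x 10GbE = 1.175 GB/s
# 4 x 10GbE = 4.700 GB/s
# 8 x 10GbE = 9.400 GB/s
```

These are the ceilings the tests below are aiming at: ~1.175GB/s for a single 10GbE interface, and ~9.4GB/s for the 8 interfaces of a 2-node RAC cluster.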

The dNFS Throughput Test

In the tests below I used Kevin Closson’s excellent SLOB with a custom configuration to produce Oracle FTS (Full Table Scans), many thanks to Frits Hoogland for sharing. This set-up allowed me to generate sufficient load to utilise the available bandwidth and see how close I can get to the ‘theoretical limits’.

Test 1 – Oracle 12cR2 dNFS disabled

OK, during this test we can see that Oracle EM Express is showing a single instance driving ~1.08GB/s IO Throughput

The FlashBlade dashboard is reporting throughput of ~1.09GB/s.


Test 2 – Oracle 12cR2 dNFS enabled

For our next test, I have enabled dNFS with make -f ins_rdbms.mk dnfs_on but have not activated multi-path; this was confirmed with the SQL below.

SELECT svrname, dirname, wtmax, rtmax FROM v$dnfs_servers;

SVRNAME DIRNAME              WTMAX  RTMAX
------- -------------------- ------ ------
        /z-fbhosts_oradata   524288 524288
        /z-fbhosts_orafra    524288 524288
        /z-fbhosts_oraconfig 524288 524288

Oracle EM Express reports almost identical results with and without dNFS,
and the FlashBlade dashboard reports a similar result.

So let's enable multi-path and see the impact.

Test 3 – Oracle 12cR2 dNFS multi-path

So, let's install our oranfstab file, bounce the database, and check again using the previous SQL script.

SELECT svrname, dirname, wtmax, rtmax FROM v$dnfs_servers; 
-------        ---------------- ------ ------ 
flashblade /z-fbhosts_oradata   524288 524288 
flashblade /z-fbhosts_orafra    524288 524288 
flashblade /z-fbhosts_oraconfig 524288 524288

We can see the server name has been picked up from the oranfstab rather than the mtab; we can also check that we are using multi-path with the v$dnfs_channels view.

SELECT distinct svrname, path FROM v$dnfs_channels;
SVRNAME              PATH                                            
-------------------- -------------

Oracle EM Express is now showing we are using all 4 x 10GbE interfaces as expected and driving 4.03GB/s, also note the increase in Host CPU utilisation.

Ok, our final test will be to see if we get similar results with Oracle RAC.

Test 4 – Oracle 12cR2 2 node RAC with dNFS multi-path

For this Oracle RAC test the slob.conf SQLNET_SERVICE_MAX parameter has been changed to 2 to distribute the workload across the 2 nodes.

Oracle EM Express is showing that the workload has been evenly balanced across the 2 RAC nodes, achieving impressive throughput figures exceeding 8.3GB/s.

The FlashBlade dashboard shows that the 7 blades were able to deliver a consistent bandwidth of over 8.8GB/s for the Oracle workload, utilising all 8 x 10GbE interfaces and getting close to wire speed.

In my next Blog Post I plan to enable Jumbo Frames (JF) end-to-end and repeat the above tests.  With JF (MTU 9000) configured, I am hoping to see a ~5% improvement in bandwidth utilisation.

Oracle Linux Container Registry

You may have read one of my previous posts Oracle 12c on Docker where I described  how you can get hold of official Oracle images from the Docker Store.

In this post I will share how you can now use the Oracle Container Registry to obtain Docker images for all licensable Oracle products.

To access the Oracle Registry Server you must have an Oracle Single Sign-On (SSO) account; this is the same account you use for other Oracle websites.

Navigate to the Oracle Container Registry and login using your SSO credentials; once authenticated you will be presented with the ‘Explore’ webpage.

You then need to select the required Business Area, e.g. ‘Database’, and click the red ‘Continue’ button. You will then be presented with the ‘Oracle Standard Terms and Restrictions’ agreement; review it and, if acceptable, click ‘Accept’ at the bottom of the page.

You will then be returned to ‘Explore’ and notice that the Oracle Standard Terms and Restrictions shows your acceptance; this is valid for 8 hours only.


Note: you will not be able to pull an image until you have accepted the Oracle Standard Terms and Restrictions.

You are now ready to pull the required image by clicking the repository, e.g. ‘enterprise’, and copying & pasting the provided pull command.

The Repository Detail page provides set-up and usage information for the selected image.

Ok, now I have outlined the process let me walk you through it.

1) Login to Registry

ronsmac:~ ronekins$ docker login
Login Succeeded

2a) Pull Oracle Linux

Select ‘OS’ from the ‘Explore Official Business Areas’  and click the red ‘Continue’ button for oraclelinux. As before, accept Oracle Terms.


Now copy the docker pull command and paste it into your Mac or Linux shell.


ronsmac:~ ronekins$ docker pull
Using default tag: latest
latest: Pulling from os/oraclelinux
80d2e45a33d8: Pull complete
Digest: sha256:d31c2987a6c427eeca70fb28ccefd6e788e96b3202dc715aa3c80bcb23673f6d
Status: Downloaded newer image for

2b) Pull Oracle Database 12c Enterprise Edition


ronsmac:~ ronekins$ docker pull
Using default tag: latest
latest: Pulling from database/enterprise
cbb9821ba51c: Pull complete
9bd4d110366e: Pull complete
af8b29651e27: Pull complete
4c242ab1add4: Pull complete
7bda1e55bd08: Pull complete
Digest: sha256:42809e491491d7f07a2aa76903bb5feabe3a0d23abcb7e680264074f043a604c
Status: Downloaded newer image for

3) Check Image size

Your two Oracle images are now available for use locally; you can check this with the docker image ls command, e.g.
ronsmac:~ ronekins$ docker image ls

REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
             latest   6c33a25f4a29   5 weeks ago    229MB
             latest   12a359cd0528   2 months ago   3.44GB

Follow-up ‘Registry’ teaser

Within the ‘OS’ area Oracle provides a registry image which you can use to deploy a local registry, so you can pull images and customise them as required. The local registry also avoids the need to authenticate every 8 hours; I will detail all of this in a follow-up post.

ronsmac:~ ronekins$ docker pull
Using default tag: latest
latest: Pulling from os/registry
a3ed95caeb02: Pull complete
89937cfc6593: Pull complete
bd07ebf08156: Pull complete
Digest: sha256:13d190c8838ebeb1e9cbf87b3edcc1fc6b6948d1b5d2200ec4dc64c638a56402
Status: Downloaded newer image for


One-Click Oracle 12c Database Clone


Earlier in the year at the excellent OUgf Finnish Oracle User Group Harmony 17 conference, I performed a Live demo where I refreshed an Oracle 12c database using an Ansible Playbook in just over a minute, 1min 16sec to be more precise.


Once I had completed my presentation, Frits Hoogland mentioned that he had recently used and blogged on Ansible-Semaphore, an open source alternative to Ansible Tower which provides a Web UI and APIs for launching Ansible tasks.

On my return to the UK I read Frits’ Blog – How to install semaphore UI for running Ansible and the installation instructions on the GitHub site, and was soon up and running.

Why this Blog

In this Blog post I am going to share how I used Ansible-Semaphore to launch my Ansible playbook to perform an Oracle 12c database refresh with a single click.

Once I had Semaphore installed, I configured the Inventory, Environment, Key Store and Playbook Repositories and added my Task Templates, all through the Semaphore UI; all pretty straightforward, so no need to screenshot it.

semaphore Task Templates

A powerful feature of Semaphore is its integration with GitHub; this ensures that every time a Playbook is run the latest version is used, as the code is pulled from the repository.

Running my ‘Oracle Database Clone’ Ansible playbook is now as simple as clicking ‘run!’ (no Playbook or Environment Overrides are required for this playbook).


At the end of each run Semaphore provides a Task Log which is retained and accessible from the dashboard for historic review.


There we have it: a one-click Oracle 12c database refresh in 1 minute 16 seconds.

If you want to see more, check out my YouTube channel to watch a demo of the above.

Docker, Oracle 12c and Persistent Storage


If you have read my previous posts on Docker you may have already installed Docker and have an Oracle 12c database running in a Docker container; if not, you may want to read my first post on Docker – Running Oracle 12.2.0.1 on Docker.

However, since I wrote the above post Oracle have published Docker images for many of their products, these can be found at the Docker Store. The Oracle images on the Docker Store are not just a great resource and help accelerate adoption, but are also certified and supported by Oracle.

Before continuing with this post you need to check out my posts on using the new Oracle 12c Docker images and the Pure Storage Docker plugin, as this post can be considered the ‘Return of the Jedi’ of the series.

Oracle 12c with Persistent Storage

In this post I will create a Docker container running an Oracle 12c database called PSTG using 4 mount points /u01, /u02, /u03 & /u04 mounted from a Pure Storage FlashArray.

Database environment Layout

/u01 – OraInventory and Oracle 12c product directories
/u02 – Oracle Datafiles
/u03 – Fast Recovery Area
/u04 – Redo Log files

The 4 mount points will all be using persistent All Flash Storage providing low latency performance, space efficient crash consistent snapshots and requiring no local storage.

First, confirm that you have installed the Oracle 12c image using the following:

$ docker image ls

You should be able to see the store/oracle/database-enterprise Docker image; if it is missing you need to read my previous post.

Ok, copy the db_env.dat file created in the previous post to a new environment file, $ORACLE_SID_env.dat, and update the database name and password, e.g.

## Copyright(c) Oracle Corporation 1998,2016. All rights reserved.##
##                                                                ##
##                   Docker OL7 db12c dat file                    ##
##                                                                ##
## Specify the basic DB parameters
## db sid (name)
## default : ORCL
## cannot be longer than 8 characters
## db passwd
## default : Oracle
## db domain
## default : localdomain
## db bundle
## default : basic
## valid : basic / high / extreme 
## (high and extreme are only available for enterprise edition)
## end

We will now create the 4 volumes using the docker volume create command and the Pure Storage Docker plug-in, providing a name, size and label.

 docker volume create --driver=pure --opt size=20GB --name=PSTG-u01 --label=product
 docker volume create --driver=pure --opt size=100GB --name=PSTG-u02 --label=oradata
 docker volume create --driver=pure --opt size=200GB --name=PSTG-u03 --label=fast_recovery_area
 docker volume create --driver=pure --opt size=10GB --name=PSTG-u04 --label=redo

We can use the docker volume ls command with a filter to list the 4 database volumes we are interested in, using the following syntax:

$ docker volume ls -f name=environment


The Pure Storage Docker plugin also supports docker volume inspect; from here we can see the labels which we defined earlier, a great way to see where and what the mount is being used for.

$ docker volume inspect <volume name>


Now that we have installed the Oracle 12c Docker image and created the 4 volumes, we are ready to create our Oracle 12c container using the docker run command with -v to present our newly created volumes to the container.

docker run -d --env-file ./PSTG_env.dat -p 1527:1521 -p 5507:5500 -it --name PSTG -v PSTG-u01:/u01 -v PSTG-u02:/u02 -v PSTG-u03:/u03 -v PSTG-u04:/u04 --shm-size="8g" store/oracle/database-enterprise:

Progress can be monitored using the docker logs command.

docker logs -f PSTG

Once the container has started we can connect to it and check out our Oracle 12c environment using the docker exec command and container name:

docker exec -it PSTG /bin/bash

From within our Oracle 12c ‘PSTG’ container we can see our Pure Storage volumes (/u01, /u02, /u03 & /u04) and the sizes we specified.


Now we are in our Docker container we can use SQL*Plus to see data file locations, the instance name, the Docker-assigned hostname etc…



Docker Volume plugin for Pure Storage

You may have noticed that you can now download a Docker Volume Plugin for Pure Storage from the Docker Store. The Pure Storage Docker Volume plugin provides seamless integration with Docker containers, enabling persistent storage, which is a must for many enterprise and database applications.

Pure Docker Plug-in Installation

If you have not already installed docker you need to do this first, see my previous Blog Docker installation on Linux for details.

Ok, let's install the Pure Docker plug-in.

# docker plugin install store/purestorage/docker-plugin:2.0 --alias pure --grant-all-permissions
2.0: Pulling from store/purestorage/docker-plugin
313f5ed5ee16: Download complete
Digest: sha256:607837553036107dfccf14870e500fa3b3bdab343f111d0ac58af1b8351d7f31
Status: Downloaded newer image for store/purestorage/docker-plugin:2.0
Installed plugin store/purestorage/docker-plugin:2.0

Docker Plugin Management

We can manage Docker plugins with the docker plugin command e.g.

# docker plugin
Usage: docker plugin COMMAND
Manage plugins
create    Create a plugin from a rootfs and configuration. Plugin data directory must contain config.json and rootfs directory.
disable   Disable a plugin
enable    Enable a plugin
inspect   Display detailed information on one or more plugins
install   Install a plugin
ls        List plugins
push      Push a plugin to a registry
rm        Remove one or more plugins
set       Change settings for a plugin
upgrade   Upgrade an existing plugin

Run 'docker plugin COMMAND --help' for more information on a command.

Let's check out the status of our Pure Docker plugin:

# docker plugin ls
ID             NAME          DESCRIPTION                      ENABLED
0b4ab7efeb3f   pure:latest   Pure Storage plugin for Docker   true

Once installed, the first thing you must do is let the volume plugin know about your Pure FlashArray: update /etc/pure-docker-plugin/pure.json to use your FlashArray's management (not iSCSI) IP address and API Token.

Example pure.json file for FlashArray

Substitute values for the MgmtEndPoint and APIToken fields with values specific to your FlashArray.
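As a sketch, assuming the field names called out above, pure.json might look like this (the endpoint address and token are placeholders; substitute your own):

```json
{
    "FlashArray": {
        "MgmtEndPoint": "10.0.0.10",
        "APIToken": "661f9687-0b1e-7b0d-e07d-1e776d50f9eb"
    }
}
```

The API token can be generated from the FlashArray UI or CLI for the user the plugin should connect as.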


Create a Volume

You can create a Pure Storage FlashArray volume from your Linux server using the docker volume create command.

# docker volume create --driver=pure -o size=100GB --name=ron101 --label=ron

You should now be able to see the newly created volume within the Pure Storage FlashArray.

List a Volume

You can also use the docker volume ls command and filters to show available volumes.

# docker volume ls -f name=ron
DRIVER              VOLUME NAME
pure                ron100
pure                ron101

Inspect a Volume

You can use the docker volume inspect command to get a bit more info and to also see where it’s being used.

[root@z-oracle pure-docker-plugin]# docker volume inspect ron101
[
    {
        "Driver": "pure:latest",
        "Labels": {
            "ron": ""
        },
        "Mountpoint": "/var/lib/docker/plugins/55d3765fafd49e915dc95f791c6f06a0d4207e50c453c4b175a6f18e9f23ed12/rootfs",
        "Name": "ron101",
        "Options": {
            "size": "100GB"
        },
        "Scope": "global"
    }
]

Delete a Volume

The Pure Storage Volume plugin also supports volume delete.

# docker volume rm ron101
# docker volume ls -f name=ron

In this Blog I have shared how we can create, inspect and delete Docker volumes using the Pure Storage Docker Volume plug-in; in my next post I will share how you can use these volumes for persistent database storage.

Check out the Pure Storage Developer Community for more examples.