Docker, Oracle 12c and Persistent Storage

Introduction

If you have read my previous posts on Docker you may have already installed Docker and have an Oracle 12c database running in a Docker container. If not, you may want to read my first post on Docker – Running Oracle 12.2.0.1 on Docker.

However, since I wrote the above post, Oracle have published Docker images for many of their products; these can be found on the Docker Store. The Oracle images on the Docker Store are not just a great resource that helps accelerate adoption, they are also certified and supported by Oracle.

Before continuing with this post you should check out my posts on using the new Oracle 12c Docker images and the Pure Storage Docker plugin, as this post can be considered the ‘Return of the Jedi’ in the series.

Oracle 12c with Persistent Storage

In this post I will create a Docker container running an Oracle 12c database called PSTG, using 4 mount points (/u01, /u02, /u03 & /u04) mounted from a Pure Storage FlashArray.

Database environment Layout

/u01 – OraInventory and Oracle 12c product directories
/u02 – Oracle datafiles
/u03 – Fast Recovery Area
/u04 – Redo log files

All 4 mount points will use persistent all-flash storage, providing low-latency performance and space-efficient, crash-consistent snapshots while requiring no local storage.

First, confirm that you have installed the Oracle 12c image using the following:

$ docker images
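An illustrative listing (the image ID, created date and size will differ on your system):

REPOSITORY                          TAG        IMAGE ID       CREATED        SIZE
store/oracle/database-enterprise    12.1.0.2   <image id>     <created>      <size>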

You should be able to see the store/oracle/database-enterprise 12.1.0.2 Docker image; if it is missing, you need to read my previous post.

Ok, copy the db_env.dat file created in the previous post to a new environment file, $ORACLE_SID_env.dat, and update the database name and password e.g.

####################################################################
## Copyright(c) Oracle Corporation 1998,2016. All rights reserved.##
##                                                                ##
##                   Docker OL7 db12c dat file                    ##
##                                                                ##
####################################################################
##------------------------------------------------------------------
## Specify the basic DB parameters
##------------------------------------------------------------------
## db sid (name)
## default : ORCL
## cannot be longer than 8 characters
DB_SID=PSTG
## db passwd
## default : Oracle
DB_PASSWD=Oracle
## db domain
## default : localdomain
DB_DOMAIN=localdomain
## db bundle
## default : basic
## valid : basic / high / extreme 
## (high and extreme are only available for enterprise edition)
DB_BUNDLE=basic
## end
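For completeness, creating the new environment file is as simple as copying and editing (a minimal sketch; the file name assumes the PSTG SID used throughout this post):

$ cp db_env.dat PSTG_env.dat
$ vi PSTG_env.dat     # set DB_SID, DB_PASSWD and any other values you need to change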

We will now create the 4 volumes using the docker volume create command and the Pure Storage Docker plugin, specifying a name, size and label for each.

 docker volume create --driver=pure --opt size=20GB --name=PSTG-u01 --label=product
 docker volume create --driver=pure --opt size=100GB --name=PSTG-u02 --label=oradata
 docker volume create --driver=pure --opt size=200GB --name=PSTG-u03 --label=fast_recovery_area
 docker volume create --driver=pure --opt size=10GB --name=PSTG-u04 --label=redo

We can use the docker volume ls command with a filter to list the 4 database volumes we are interested in, using the following syntax:

$ docker volume ls -f name=PSTG

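The filtered listing should look something like this (illustrative):

DRIVER              VOLUME NAME
pure                PSTG-u01
pure                PSTG-u02
pure                PSTG-u03
pure                PSTG-u04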

The Pure Storage Docker plugin also supports docker volume inspect; from here we can see the labels which we defined earlier, which is a great way to see where and what each volume is being used for.

$ docker volume inspect <volume name>

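For example, inspecting the data volume should return output along these lines (illustrative; the Mountpoint remains empty until the volume is attached to a container):

$ docker volume inspect PSTG-u02
[
    {
        "Name": "PSTG-u02",
        "Driver": "pure",
        "Mountpoint": "",
        "Labels": {
            "oradata": ""
        },
        "Scope": "global"
    }
]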

Now that we have installed the Oracle 12c Docker image and created the 4 volumes, we are ready to create our Oracle 12c container using the docker run command with -v to present our newly created volumes to the container.

docker run -d --env-file ./PSTG_env.dat -p 1527:1521 -p 5507:5500 -it --name PSTG -v PSTG-u01:/u01 -v PSTG-u02:/u02 -v PSTG-u03:/u03 -v PSTG-u04:/u04 --shm-size="8g" store/oracle/database-enterprise:12.1.0.2

Progress can be monitored using the docker logs command.

docker logs -f PSTG

Once the container has started we can connect to it and check out our Oracle 12c environment using the docker exec command and the container name.

docker exec -it PSTG /bin/bash

From within our Oracle 12c ‘PSTG’ container we can see our Pure Storage volumes (/u01, /u02, /u03 & /u04) and the sizes we specified.

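A quick df from the container shell is enough to confirm this; you should see the 20G, 100G, 200G and 10G filesystems created earlier mounted on /u01, /u02, /u03 and /u04 respectively (device names and used space will vary):

$ df -h /u01 /u02 /u03 /u04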

Now that we are in our Docker container, we can use SQL*Plus to see data file locations, the instance name, the Docker-assigned hostname etc.

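A couple of quick SQL*Plus queries are enough to confirm the layout (a minimal sketch; run them from the container shell, switching to the oracle user if required):

$ sqlplus / as sysdba
SQL> select instance_name, host_name from v$instance;
SQL> select name from v$datafile;

The data files should all be reported under /u02, and the host name returned is the Docker-assigned hostname of the container.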

Docker Volume plugin for Pure Storage

You may have noticed that you can now download a Docker Volume plugin for Pure Storage from the Docker Store. The Pure Storage Docker Volume plugin provides seamless integration with Docker containers, enabling persistent storage, which is a must for many enterprise and database applications.

Installation of the Docker Volume plugin is extremely simple and is well documented on the Docker Store.

Once installed, the first thing you must do is let the volume plugin know about your Pure FlashArray: update /etc/pure-docker-plugin/pure.json with your FlashArray’s management (not iSCSI) IP address and API token.
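A minimal pure.json might look something like the following (illustrative only; field names can vary between plugin versions, so check the plugin documentation, and substitute your own management address and API token):

{
    "FlashArrays": [
        {
            "MgmtEndPoint": "10.0.0.10",
            "APIToken": "<your-api-token>"
        }
    ]
}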

Starting pure-docker-plugin

You can now start the pure-docker-plugin using systemctl start e.g.

[root@z-oracle ~]#  systemctl start pure-docker-plugin

You can also check the log file using systemctl status e.g.

[root@z-oracle ~]#  systemctl status pure-docker-plugin -l

Create a Volume

You can create a Pure Storage FlashArray volume from your Linux server using the docker volume create command.

[root@z-oracle ~]# docker volume create --driver=pure -o size=100GB --name=ron101 --label=ron
ron101

You should now be able to see the newly created volume within the Pure Storage FlashArray.

List a Volume

You can also use the docker volume ls command and filters to show available volumes.

[root@z-oracle ~]# docker volume ls -f name=ron
DRIVER              VOLUME NAME
pure                ron100
pure                ron101

Inspect a Volume

You can use the docker volume inspect command to get a bit more info and to also see where it’s being used.

[root@z-oracle ~]# docker volume inspect ron101
[
    {
        "Name": "ron101",
        "Driver": "pure",
        "Mountpoint": "",
        "Labels": {
            "ron": ""
        },
        "Scope": "global"
    }
]

When the Docker Volume is used the Mountpoint will be visible e.g.

[root@z-oracle ~]# docker volume inspect ron100
[
    {
        "Name": "ron100",
        "Driver": "pure",
        "Mountpoint": "/pure-docker-mounts/z-oracle/ron100",
        "Labels": {
            "ron": ""
        },
        "Scope": "global"
    }
]

Delete a Volume

The Pure Storage Volume plugin also supports volume delete.

[root@z-oracle ~]# docker volume rm ron101
ron101
[root@z-oracle ~]# docker volume ls -f name=ron
DRIVER              VOLUME NAME

In this blog I have shared how we can create, inspect and delete Docker volumes using the Pure Storage Docker Volume plugin; in my next post I will share how you can use these volumes for persistent database storage.

Check out the Pure Storage Developer Community for more examples.

Automate to avoid database cloning disasters

“Accidentally destroyed production database on first day of a job”

Wow, that headline grabbed my attention.

Earlier this week you may have seen an article reported by The Register about a post on Reddit from a junior software developer going by the name of “cscareerthrowaway567”, who on his/her first day destroyed a production database and lost their job.

I was basically given a document detailing how to setup my local development environment. Which involves run a small script to create my own personal DB instance from some test data. After running the command i was supposed to copy the database url/password/username outputted by the command and configure my dev environment to point to that database. Unfortunately instead of copying the values outputted by the tool, i instead for whatever reason used the values the document had.

Unfortunately apparently those values were actually for the production database (why they are documented in the dev setup guide i have no idea). Then from my understanding that the tests add fake data, and clear existing data between test runs which basically cleared all the data from the production database. Honestly i had no idea what i did and it wasn’t about 30 or so minutes after did someone actually figure out/realize what i did.

Now, we cannot be sure if the story by “cscareerthrowaway567” is true or not, but it does provide a great example of the potential dangers of manual database cloning.

The story raises many issues around process, security, change management, training and, not least, why someone thought it was a good idea to include production account details in a database cloning document.

However, this could have easily been avoided through the use of automation or CDM (Copy Data Management) tools, e.g. Catalogic, Actifio, Delphix etc.

Many modern All-Flash Arrays (AFAs) provide powerful REST APIs which can be called from various programming and scripting languages, e.g. Python, Java, Perl, PowerShell etc., to create custom solutions.

AFA storage APIs can also be consumed by DevOps automation and provisioning software, e.g. Ansible, Chef, Puppet etc., to provide full-stack or end-to-end automation of database cloning, improving security and removing the chance of human error.
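As a rough illustration, cloning a production volume for a development environment via the FlashArray REST API might look something like this (a sketch only; the API version, array address, volume names and token are placeholders, so check your array’s REST API reference):

# authenticate and store the session cookie
curl -k -c cookies.txt -X POST https://flasharray.example.com/api/1.6/auth/session \
     -H "Content-Type: application/json" -d '{"api_token": "<your-api-token>"}'

# create dev-oradata as a copy of the production volume prod-oradata
curl -k -b cookies.txt -X POST https://flasharray.example.com/api/1.6/volume/dev-oradata \
     -H "Content-Type: application/json" -d '{"source": "prod-oradata"}'

Wrapped in an Ansible playbook or a simple script, a step like this removes the need for anyone to handle production credentials in a setup document.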

Check out the Pure Storage Developer Community area for code examples.

How to resize an XFS filesystem

A question which I frequently get asked is, how do I resize my Oracle XFS filesystem?

As I needed to resize an Oracle FRA today, I thought this would make a great topic for a blog post.

Ok, let's start by checking the current size and geometry using the Linux df -h and xfs_growfs -n commands.

[root@z-oracle ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
..
/dev/mapper/psta-orafra   1.0T   33M  1.0T   1% /u04/app/oracle/fast_recovery_area
..

[root@z-oracle ~]# xfs_growfs /dev/mapper/psta-orafra -n
meta-data=/dev/mapper/psta-orafra isize=256 agcount=4, agsize=67108864 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0
data = bsize=4096 blocks=268435456, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=131072, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

Now run the multipath command to check the current size of the LUN, looking for the friendly device name within the list of devices.

[root@z-oracle ~]# multipath -ll

psta-orafra (3624a937050c939582b0f46c0000a8f84) dm-17 PURE ,FlashArray
size=1.0T features='0' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
|- 10:0:3:20 sdgl 132:16 active ready running
|- 10:0:4:20 sdiq 135:160 active ready running
|- 10:0:5:20 sdkq 66:480 active ready running
|- 10:0:6:20 sdmo 70:256 active ready running
|- 10:0:7:20 sdom 129:288 active ready running
|- 1:0:0:20 sdai 66:32 active ready running
|- 1:0:1:20 sdcr 69:240 active ready running
|- 1:0:2:20 sder 129:48 active ready running
|- 1:0:3:20 sdgr 132:112 active ready running
|- 1:0:4:20 sdil 135:80 active ready running
|- 1:0:5:20 sdkk 66:384 active ready running
|- 1:0:6:20 sdmk 69:448 active ready running
|- 1:0:7:20 sdop 129:336 active ready running
|- 10:0:0:20 sdau 66:224 active ready running
|- 10:0:1:20 sdck 69:128 active ready running
`- 10:0:2:20 sdel 128:208 active ready running

Now resize the volume using the Pure FlashArray UI, command line or REST API.

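For reference, the resize can also be done from the FlashArray CLI with something like the following (a sketch; this assumes the volume is named psta-orafra on the array and that you have SSH access to the array):

ssh pureuser@flasharray purevol setattr --size 2T psta-orafra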

We now need to perform a rescan of the SCSI devices on our Linux server to identify any LUNs which have been resized.

[root@z-oracle ~]# rescan-scsi-bus.sh -s
Scanning SCSI subsystem for new devices
Searching for resized LUNs

We can now resize the multipath device using the following command:

[root@z-oracle mapper]# multipathd -k'resize map /dev/dm-17'
ok

Great, now use the Linux command xfs_growfs to extend the filesystem. Note that if you do not specify -D, xfs_growfs will grow the filesystem to use all available space.

[root@z-oracle ~]# xfs_growfs /dev/mapper/psta-orafra
meta-data=/dev/mapper/psta-orafra isize=256 agcount=4, agsize=67108864 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0 spinodes=0
data = bsize=4096 blocks=268435456, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=131072, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 268435456 to 536870912

Ok, let's check the results:

[root@z-oracle ~]# df -h
Filesystem Size Used Avail Use% Mounted on

/dev/mapper/psta-orafra 2.0T 33M 2.0T 1% /u04/app/oracle/fast_recovery_area
..

As you can see from the above, within a few minutes I have been able to increase my Oracle fast_recovery_area from 1TB to 2TB.