Oracle VM Block Storage
In my last blog post I presented NFS storage to my Oracle VM Server; in this post I will show how block storage can also be configured for use with OVM.
Create Pure FlashArray volume
Configure OVM – SAN Servers
Before we present our new Pure Storage block device to our Oracle VM Server, we first need to update the /etc/multipath.conf file to include any vendor-specific settings and restart the multipath service. Below are the relevant entries for a Pure Storage FlashArray.
path_selector "round-robin 0"
hardware_handler "1 alua"
[root@z-ovm ~]# service multipathd reload
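For context, those two settings live inside a device stanza in /etc/multipath.conf. A minimal sketch of the full stanza is shown below; the vendor and product match strings are taken from the multipath -ll output later in this post, but any additional attributes (queueing, path grouping, and so on) are deployment-specific and should come from Pure's current best-practice documentation.

```
devices {
    device {
        vendor              "PURE"
        product             "FlashArray"
        path_selector       "round-robin 0"
        hardware_handler    "1 alua"
    }
}
```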
Now log on to your Oracle VM Manager and refresh the previously created SAN Server to discover our newly created volume(s).
Click OK to confirm the refresh; the new LUN(s) should now be visible in the Oracle VM Manager Storage tab.
As I have presented the block storage over iSCSI, we can see the IQN (iSCSI Qualified Name) in the Storage Targets; this was set up in Part 4, OVM Storage.
ID: The UUID, a universally unique identifier that Oracle VM Manager assigns to a physical disk.
Page83 ID: The unique SCSI identifier for the physical disk. For a Pure Storage FlashArray this is the Vendor ID plus the lowercase volume serial number, e.g. 3624a9370 + 513519106E354B37002EB1D1.
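As a quick sanity check, the Page83 ID can be assembled from those two parts; a minimal sketch, using the example vendor prefix and serial number from the text above:

```shell
# Build the expected Page83 ID: vendor prefix + lowercased volume serial
# (values are the examples from the text, not from a real deployment)
vendor_prefix="3624a9370"
serial="513519106E354B37002EB1D1"
page83="${vendor_prefix}$(echo "${serial}" | tr '[:upper:]' '[:lower:]')"
echo "${page83}"   # 3624a9370513519106e354b37002eb1d1
```

This matches the WWID shown in the multipath -ll output later in this post.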
NOTE: Oracle VM does not currently support User Friendly Names (see the extract from /etc/multipath.conf below).
## IMPORTANT for OVS this must be no. OVS does not support user friendly
## names and instead uses the WWIDs as names.
getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
Therefore the User Friendly Name will be the same as the Page83 ID.
An optional step now is to right-click the volume and select Edit to provide a more meaningful Name and Description. Once updated, right-click again and this time select Refresh.
Create Oracle VM Repository
From the Repository tab, click on the green plus to create a new Repository. Enter a Repository Name and Description, select Physical Disk, for non-clustered deployments select Server Pool None, and click on the magnifying glass to select the Physical Disk previously added. Then click Next.
Select the SAN Server and Name, and check that the User Friendly Name is as expected.
Oracle VM Server
If we now log on to the Oracle VM Server as root, we should see our newly created volume(s).
[root@z-ovm ~]# multipath -ll
3624a9370513519106e354b37002eb1d1 dm-1 PURE,FlashArray
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
|- 47:0:0:1 sdb 8:16 active ready running
|- 48:0:0:1 sdc 8:32 active ready running
|- 49:0:0:1 sdd 8:48 active ready running
`- 50:0:0:1 sde 8:64 active ready running
[root@z-ovm ~]# ls -l /dev/mapper/
lrwxrwxrwx 1 root root 7 Jan 3 15:26 3624a9370513519106e354b37002eb1d1 -> ../dm-1
Each repository contains the following directory structure under /OVS/Repositories/*/
- Assemblies – contains pre-configured sets of virtual machines
- ISOs – contains ISO images which can be used by VMs
- Templates – contains virtual machine templates
- VirtualDisks – contains dedicated or shared virtual disks
- VirtualMachines – contains virtual machine configuration files
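To make the layout concrete, the sketch below recreates the standard Oracle VM 3.x repository sub-directories (Assemblies, ISOs, Templates, VirtualDisks, VirtualMachines) under /tmp purely for illustration; on a real server these are created for you inside /OVS/Repositories/&lt;repository-id&gt;/.

```shell
# Sketch: recreate the standard repository sub-directory layout locally
# (the /tmp path is illustrative only; OVM creates the real one for you)
repo=/tmp/demo-repo
mkdir -p "$repo"/Assemblies "$repo"/ISOs "$repo"/Templates \
         "$repo"/VirtualDisks "$repo"/VirtualMachines
ls "$repo"
```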
However, before we log off our Oracle VM Server, let's use findmnt to check that our NFS and OCFS2 filesystem mounts look OK. In the example below we can see both our NFS export and block devices as expected.
[root@z-ovm mapper]# findmnt -t nfs
TARGET SOURCE FSTYPE OPTIONS
/OVS/Repositories/0004fb00000300009e6780edf21a0ee5 192.168.4.100:/z-ovm_images nfs rw,relatime,vers=3,rsize=52
[root@z-ovm mapper]# findmnt -t ocfs2
TARGET SOURCE FSTYPE OPTIONS
/OVS/Repositories/0004fb0000030000b72890166aa11f29 /dev/mapper/3624a9370513519106e354b37002eb1d1 ocfs2 rw,relatime
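For scripting, findmnt's exit status is handy too; a small sketch (the repository path in the usage example is illustrative) that fails fast when a repository is not mounted:

```shell
# Return success only if the given path is currently a mount point
is_mounted() {
    findmnt -n "$1" >/dev/null 2>&1
}

# Usage (repository ID is illustrative):
if is_mounted /OVS/Repositories/0004fb0000030000b72890166aa11f29; then
    echo "repository mounted"
else
    echo "repository NOT mounted"
fi
```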
You can read up on OCFS2 at: https://oss.oracle.com/projects/ocfs2/documentation/