Oracle VM

Oracle VM 3.4.6 : Part 6 – OVM Block Storage

Oracle VM Block Storage

In my last blog post I presented NFS storage to my Oracle VM Server; in this post I will show how block storage can also be configured for use with OVM.

Create Pure FlashArray volume

Our first step is to create a volume and connect it to our Oracle VM Server. In this example I have created a 1TB volume on one of my lab Pure Storage FlashArrays and connected it to my OVM Server.

Create Volume
Volume details
Connected Hosts
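For reference, the volume can also be created and connected from the FlashArray CLI over SSH rather than the GUI. A minimal sketch, assuming a volume named z-ovm-vol01 and an existing host entry called z-ovm on the array:

purevol create --size 1T z-ovm-vol01
purevol connect --host z-ovm z-ovm-vol01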

Configure OVM – SAN Servers

Before we present our new Pure Storage block device to our Oracle VM Server we first need to update the /etc/multipath.conf file to include any vendor-specific settings and reload the multipathd service. Below is the entry for a Pure Storage FlashArray.

        device {
                vendor                "PURE"
                product               "FlashArray"
                path_selector         "queue-length 0"
                path_grouping_policy  group_by_prio
                path_checker          tur
                fast_io_fail_tmo      10
                dev_loss_tmo          60
                no_path_retry         0
                hardware_handler      "1 alua"
                prio                  alua
                failback              immediate
                user_friendly_names   no
        }
[root@z-ovm ~]# service multipathd reload

You can check that your changes have been applied with the multipathd show config command. You may find it useful to direct the output to a file and then use view to search for your device, e.g.

[root@z-ovm ~]# multipathd show config > r.r
[root@z-ovm ~]# view r.r
        device {
                vendor "PURE"
                product "FlashArray"
                path_grouping_policy group_by_prio
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "queue-length 0"
                path_checker tur
                features "0"
                hardware_handler "1 alua"
                prio alua
                failback immediate
                rr_weight uniform
                no_path_retry fail
                rr_min_io 1000
                rr_min_io_rq 1
                fast_io_fail_tmo 10
                dev_loss_tmo 60
                user_friendly_names no
        }
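Alternatively, the relevant stanza can be pulled straight from the live configuration with grep; a quick sketch (the -A line count may need adjusting depending on your multipath version):

[root@z-ovm ~]# multipathd show config | grep -A 17 '"PURE"'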

Now log on to your Oracle VM Manager and refresh the previously created SAN Server to discover the newly created volume(s).

Refresh SAN Servers

Click OK to confirm the refresh; the new LUN(s) should now be visible in the Oracle VM Manager Storage tab.

As I have presented the block storage over iSCSI, we can see the IQN (iSCSI Qualified Name) in the Storage Targets; this was set up in Part 4 – OVM Storage.
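The underlying iSCSI sessions can also be confirmed directly on the OVM Server; a quick check, assuming the iscsi-initiator-utils package is installed:

[root@z-ovm ~]# iscsiadm -m session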

ID: a universally unique identifier (UUID) that Oracle VM Manager assigns to a physical disk.

Page83 ID: the unique SCSI identifier for the physical disk. For a Pure Storage FlashArray this is the vendor prefix plus the lowercase volume serial number, e.g. 3624a9370 + 513519106E354B37002EB1D1.
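The Page83 ID can also be retrieved on the OVM Server itself using the same scsi_id call referenced by the multipath getuid_callout; a quick sketch, assuming /dev/sdb is one of the FlashArray paths (see the multipath -ll output later in this post):

[root@z-ovm ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
3624a9370513519106e354b37002eb1d1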

NOTE: Oracle VM does not currently support User Friendly Names (see the extract from /etc/multipath.conf below).

## IMPORTANT for OVS this must be no. OVS does not support user friendly
## names and instead uses the WWIDs as names.
defaults {
        user_friendly_names no
        getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
        no_path_retry 10
}

Therefore the User Friendly Name will be the same as the Page83 ID.

An optional step now is to right-click the volume and select Edit to provide a more meaningful Name and Description. Once updated, right-click again and this time select Refresh.

Updated volume names

Create Oracle VM Repository

From the Repositories tab click on the green plus icon to create a new Repository. Enter a Repository Name and Description, select Physical Disk, and for non-clustered deployments select Server Pool None. Click on the magnifying glass to select the physical disk previously added, then click Next.

Select the SAN Server and Name, and check that the User Friendly Name is as expected.

Create a Repository: Select Physical Disk

Oracle VM Server

If we now log on to the Oracle VM Server as root we should see our newly created volume(s).

[root@z-ovm ~]# multipath -ll
3624a9370513519106e354b37002eb1d1 dm-1 PURE,FlashArray
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 47:0:0:1 sdb 8:16 active ready running
  |- 48:0:0:1 sdc 8:32 active ready running
  |- 49:0:0:1 sdd 8:48 active ready running
  `- 50:0:0:1 sde 8:64 active ready running
[root@z-ovm ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root       7 Jan  3 15:26 3624a9370513519106e354b37002eb1d1 -> ../dm-1

Each repository contains the following directory structure under /OVS/Repositories/*/:

  • Assemblies
    • Contains pre-configured sets of virtual machines.
  • ISOs
    • Contains ISO images which can be used by VMs
  • Templates
    • Contains virtual machine templates
  • VirtualDisks
    • Contains dedicated or shared virtual disks
  • VirtualMachines
    • Contains virtual machine configuration files
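On the OVM Server itself this structure can be confirmed with a simple listing; a sketch using the block-based repository UUID from the OCFS2 mount shown below (expect at least the five directories above, plus any repository metadata files):

[root@z-ovm ~]# ls /OVS/Repositories/0004fb0000030000b72890166aa11f29/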

However, before we log off our Oracle VM Server, let's use findmnt to check that our NFS and OCFS2 filesystem mounts look OK. In the examples below we can see both our NFS export and our block device mounted as expected.

[root@z-ovm mapper]# findmnt -t nfs
TARGET                                             SOURCE                      FSTYPE OPTIONS
/OVS/Repositories/0004fb00000300009e6780edf21a0ee5 192.168.4.100:/z-ovm_images nfs    rw,relatime,vers=3,rsize=52

[root@z-ovm mapper]# findmnt -t ocfs2
TARGET                                             SOURCE                                        FSTYPE OPTIONS
/OVS/Repositories/0004fb0000030000b72890166aa11f29 /dev/mapper/3624a9370513519106e354b37002eb1d1 ocfs2  rw,relatime
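As findmnt accepts a comma-separated list of filesystem types, both checks can also be combined into a single call:

[root@z-ovm mapper]# findmnt -t nfs,ocfs2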

You can read up on OCFS2 at: https://oss.oracle.com/projects/ocfs2/documentation/

Oracle VM

Oracle VM 3.4.6 : Part 4 – OVM Storage

Configuring OVM Storage

If you have followed Part 1, Part 2 and Part 3 you should now have a working networked OVM Server and Manager ready to have storage presented to it.

Before you start, log on to the OVM Server and add the fully qualified domain names of your NFS and/or iSCSI servers to the /etc/hosts file to avoid hitting OVMRU_000082E.

Can Not Add NFS Server To Oracle VM Manager: OVMRU_000082E Either the hostname or IP address is invalid (Doc ID 2294262.1)
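A minimal /etc/hosts sketch with placeholder names (the hostnames and the second address are examples only; substitute your own storage servers):

192.168.4.100   z-nfs.example.com           z-nfs
192.168.4.110   z-flasharray.example.com    z-flasharray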

Navigate to the Storage tab; from here we can configure SAN Servers (block) and File Servers (NFS).

Storage – File Servers

NFS Storage

To add NFS storage, select File Servers and click the folder icon with the green plus; this opens the Discover File Server dialog.

Enter the NFS Server Name, storage network IP address and Description.

File Server Parameters

Move OVM server(s) to Selected Admin Server(s) and click Next.

Add Admin Servers

Move OVM server(s) to Selected Refresh Server(s) and click Next.

Select Refresh Servers
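If the discovery fails, it is worth confirming that the export is actually visible from the OVM Server; a quick check using showmount, assuming an NFS server address of 192.168.4.100:

[root@z-ovm ~]# showmount -e 192.168.4.100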

Block Storage (iSCSI)

Storage – SAN Servers

Enter the SAN Server Name and Description; as I am planning to use iSCSI there is no need to change the Storage Type. Click Next.

Check that you do not have any leading or trailing spaces in your Description, otherwise you will encounter an OVMRU_000096E error.

OVMRU_000096E Cannot set attribute Description of object IscsiStorageArray to value [ xxx ]. Leading or trailing spaces are not allowed.

Enter Access Host(s) and click Next.

Create Access Host

Repeat the above step for each additional access host used for multipathing, and then click Next.

Access Information
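If you are unsure which addresses to enter as access hosts, the FlashArray's iSCSI portal IPs can be listed from the array CLI; a hedged sketch (the exact columns shown depend on your Purity version):

pureport list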

Select the OVM Server(s) and click Next.

Add Admin Servers

Select the Default access group and click the pencil icon to edit Access Group.

Manage Access Group

Click on the Storage Initiators tab and move the generic Oracle storage initiator to Selected Storage Initiators and click OK.

Edit Access Group

Now click Finish to complete the set-up.

Ansible

Ansible 2.6 Now Available

At the beginning of July, Ansible 2.6 was released; this release deprecates a few commands, fixes some issues and extends cloud support for Amazon, Azure and Google.

Ansible 2.6 also includes a few new and updated Pure Storage FlashArray modules and, for the first time, FlashBlade modules:

Pure Storage FlashArray

purefa_host – Enable mixed protocol hosts
purefa_hg – Modify existing hostgroups
purefa_ds – Manage Directory Service configuration
purefa_facts – Gather Facts information
purefa_pgsnap – Manage Protection Group snapshots

Pure Storage FlashBlade

purefb_fs – Manage filesystems
purefb_snap – Manage filesystem Snapshots
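Once upgraded, the new Pure Storage modules can be confirmed from the command line; a quick sketch:

$ ansible --version
$ ansible-doc -l | grep -E 'purefa_|purefb_'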


Pure Code Developer Community

Visit the Pure Code Developer Community and click on the ‘Ansible Playbook Examples’ card to be directed to the public GitHub repository for example playbooks.