Oracle VM 3.4.6 : Part 6 – OVM Block Storage

Oracle VM Block Storage

In my last blog post I presented NFS storage to my Oracle VM Server; in this post I will show how block storage can also be configured for use with OVM.

Create Pure FlashArray volume

Our first step is to create a volume and connect it to our Oracle VM Server. In this example I have created a 1TB volume on one of my lab Pure Storage FlashArrays and connected it to my OVM Server.

Create Volume
Volume details
Connected Hosts

Configure OVM – SAN Servers

Before we present our new Pure Storage block device to our Oracle VM Server we first need to update the /etc/multipath.conf file to include any vendor specific settings and restart the multipath service. Below is the entry for a Pure Storage FlashArray.

        device {
                vendor                "PURE"
                product               "FlashArray"
                path_selector         "round-robin 0"
                path_grouping_policy  group_by_prio
                path_checker          tur
                fast_io_fail_tmo      10
                dev_loss_tmo          60
                no_path_retry         0
                hardware_handler      "1 alua"
                prio                  alua
                failback              immediate
                user_friendly_names   no
        }
[root@z-ovm ~]# service multipathd reload
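
To confirm that the new device stanza has been picked up, you can dump the running multipath configuration. A quick check (the exact invocation may vary slightly between multipath-tools versions) is:

[root@z-ovm ~]# multipathd -k"show config" | grep -A 12 '"PURE"'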

Now log on to your Oracle VM Manager and refresh the previously created SAN Server to discover the newly created volume(s).

Refresh SAN Servers

Click OK to confirm the refresh; the new LUN(s) should now be visible in the Oracle VM Manager Storage tab.

As I have presented the block storage over iSCSI, we can see the IQN (iSCSI Qualified Name) in the Storage Targets; this was set up in Part 4 – OVM Storage.

ID: the UUID, a universally unique identifier that Oracle VM Manager assigns to a physical disk.

Page83 ID: the unique SCSI identifier for the physical disk. For a Pure Storage FlashArray this is the vendor ID plus the volume serial number in lowercase, e.g. 3624a9370 + 513519106E354B37002EB1D1 gives 3624a9370513519106e354b37002eb1d1.
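
If you want to confirm the Page83 ID from the OVM Server itself, the same scsi_id call used in the multipath getuid_callout (shown below) can be run against one of the underlying paths, for example /dev/sdb:

[root@z-ovm ~]# /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
3624a9370513519106e354b37002eb1d1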

NOTE: Oracle VM does not currently support user friendly names (see the extract from /etc/multipath.conf below).

## IMPORTANT for OVS this must be no. OVS does not support user friendly
## names and instead uses the WWIDs as names.
defaults {
        user_friendly_names no
        getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
        no_path_retry 10
}

Therefore the user friendly name will be the same as the Page83 ID.

An optional step now is to right-click the volume and select Edit to provide a more meaningful name and description. Once updated, right-click again and this time select Refresh.

Updated volume names

Create Oracle VM Repository

From the Repositories tab click the green plus to create a new repository. Enter a Repository Name, select Physical Disk, provide a Description, for non-clustered deployments set Server Pool to None, and click the magnifying glass to select the physical disk previously added. Then click Next.

Select the SAN Server and Name, and check that the user friendly name is as expected.

Create a Repository: Select Physical Disk

Oracle VM Server

If we now log on to the Oracle VM Server as root we should see our newly created volume(s).

[root@z-ovm ~]# multipath -ll
3624a9370513519106e354b37002eb1d1 dm-1 PURE,FlashArray
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 47:0:0:1 sdb 8:16 active ready running
  |- 48:0:0:1 sdc 8:32 active ready running
  |- 49:0:0:1 sdd 8:48 active ready running
  `- 50:0:0:1 sde 8:64 active ready running
[root@z-ovm ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root       7 Jan  3 15:26 3624a9370513519106e354b37002eb1d1 -> ../dm-1

Each repository contains the following directory structure under /OVS/Repositories/*/ (an example listing follows the list below):

  • Assemblies
    • Contains pre-configured sets of virtual machines.
  • ISOs
    • Contains ISO images which can be used by VMs
  • Templates
    • Contains virtual machine templates
  • VirtualDisks
    • Contains dedicated or shared virtual disks
  • VirtualMachines
    • Contains virtual machine configuration files

However, before we log off our Oracle VM Server, let's use findmnt to check that our NFS and OCFS2 filesystem mounts look OK. In the example below we can see both our NFS export and block device as expected.

[root@z-ovm mapper]# findmnt -t nfs
TARGET                                             SOURCE                      FSTYPE OPTIONS
/OVS/Repositories/0004fb00000300009e6780edf21a0ee5 192.168.4.100:/z-ovm_images nfs    rw,relatime,vers=3,rsize=52

[root@z-ovm mapper]# findmnt -t ocfs2
TARGET                                             SOURCE                                        FSTYPE OPTIONS
/OVS/Repositories/0004fb0000030000b72890166aa11f29 /dev/mapper/3624a9370513519106e354b37002eb1d1 ocfs2  rw,relatime

You can read-up on OCFS2 at: https://oss.oracle.com/projects/ocfs2/documentation/

Oracle VM 3.4.6 : Part 5 – OVM NFS Storage

Oracle VM NFS Storage

If you have been following my series of blog posts on OVM you should by now have a working OVM Server and Manager. However, if you are new to this series you can catch up using the links below:

Pure Storage FlashBlade

In this Blog post we will create our first OVM Repository for Oracle VM images using NFS storage.

Our first step is to create an NFS file system and export it to our OVM Server. In this example I am creating a 5TB file system on one of my lab Pure Storage FlashBlades with snapshots enabled and access limited to my OVM Server.
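
Optionally, before heading over to Oracle VM Manager, you can confirm the export is visible from the OVM Server itself (this assumes the showmount utility is installed and that 192.168.4.100 is the FlashBlade data address that appears in the Part 6 findmnt output):

[root@z-ovm ~]# showmount -e 192.168.4.100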

Create File System

Oracle VM Manager File Server

Log on to your Oracle VM Manager and refresh the previously created NFS File Server to discover the newly created NFS export.

Refresh File Server

Click OK to confirm the refresh; the new file system should now be visible in the Oracle VM Manager Storage tab.

File System – Default Name

An optional step now is to right-click the file system and select Edit to provide a more meaningful name and description. Once updated, right-click again and this time select Refresh.

Updated and refreshed File System

Create Oracle VM Repository

Click the green plus to create a new repository, enter a Repository Name, keep the default of Network File Server, provide a Description, and click the magnifying glass to select the network file system previously added. Then click Next.

Repository Information
Create a Repository: Select Network File System

Select OVM Server and move it to Present to Server(s) and click Finish.

Present to Servers

Oracle VM Server

If we now log on to the Oracle VM Server as root we can see our newly created NFS mount point and the directory structure that Oracle VM created, e.g.

Repository 
- Assemblies
- ISOs
- Templates
- VirtualDisks
- VirtualMachines
z-ovm_images – 5TB FlashBlade File System
Repository directory structure

In my next blog post I will create another Oracle VM repository for my planned Oracle VM virtual machines, but this time using iSCSI block storage.

Oracle VM 3.4.6 : Part 4 – OVM Storage

Configuring OVM Storage

If you have followed Part 1, Part 2 and Part 3 you should now have a working networked OVM Server and Manager ready to have storage presented to it.

Before you start, log on to the OVM Server and add the fully qualified domain names of your NFS and/or iSCSI servers to the /etc/hosts file to avoid hitting OVMRU_000082E (example entries follow the note below).

Can Not Add NFS Server To Oracle VM Manager: OVMRU_000082E Either the hostname or IP address is invalid (Doc ID 2294262.1)
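
As a minimal sketch (the hostnames and the FlashArray address below are placeholders; 192.168.4.100 is the FlashBlade data address seen in the Part 6 findmnt output), the /etc/hosts entries could look like this:

# storage servers - placeholder names, substitute your own
192.168.4.100   z-flashblade.example.com   z-flashblade
192.168.4.200   z-flasharray.example.com   z-flasharray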

Navigate to the Storage tab; from here we can configure SAN Servers (block) and File Servers (NFS).

Storage – File Servers

NFS Storage

To add NFS storage, select File Servers and click the folder icon with the green plus; this should show the Discover File Server... dialog.

Enter the NFS Server Name, storage network IP address and Description.

File Server Parameters

Move OVM server(s) to Selected Admin Server(s) and click Next.

Add Admin Servers

Move OVM server(s) to Selected Refresh Server(s) and click Next.

Select Refresh Servers

Block Storage (iSCSI)

Storage – SAN Servers

Enter the SAN Server Name and Description; as I am planning to use iSCSI there is no need to change the Storage Type. Click Next.

Check you do not have any leading or trailing spaces in your description, otherwise you will encounter an OVMRU_000096E error.

OVMRU_000096E Cannot set attribute Description of object IscsiStorageArray to value [ xxx ]. Leading or trailing spaces are not allowed.

Enter the Access Host(s) and click Next.

Create Access Host

Repeat the above step for multipathing, and then click Next.

Access Information

Select the OVM Server(s) and click Next.

Add Admin Servers

Select the Default access group and click the pencil icon to edit Access Group.

Manage Access Group

Click on the Storage Initiators tab and move the generic Oracle storage initiator to Selected Storage Initiators and click OK.

Edit Access Group

Now click Finish to complete the set-up.

On-line migration of Oracle 18c filesystem datafiles to Oracle ASM

Background

Prior to Oracle 12c, moving non-ASM data files required taking them offline and negotiating an outage with the business; since Oracle 12c this has no longer been the case.

I was recently challenged with moving a good-sized filesystem-based Oracle 18c database from an existing storage platform to a new storage array.

Prepare ASM Disk Groups

Use sqlplus to create the ASM disk group directories e.g.
$ export ORACLE_SID=+ASM
$ sqlplus / as sysasm

Disk Group +DATA

SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/DATAFILE';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/CHANGETRACKING';
SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/PSTG/TEMPFILE';

Disk Group +CONTROL_REDO

SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/CONTROLFILE';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/ONLINELOG';
SQL> ALTER DISKGROUP CONTROL_REDO ADD DIRECTORY '+CONTROL_REDO/PSTG/PARAMETERFILE';

Disk Group +FRA

SQL> ALTER DISKGROUP FRA ADD DIRECTORY '+FRA/PSTG';
SQL> ALTER DISKGROUP FRA ADD DIRECTORY '+FRA/PSTG/ARCHIVELOG';

Identify Datafiles

column dname heading 'Data File Name' format a45
column filename heading 'BCT File Name' format a45
column tablespace_name heading 'Tablespace Name' format a25
column data_mb heading 'Data MB' format a10

select file_name as dname, tablespace_name, to_char(bytes/1024/1024,'999,999') as data_mb from dba_data_files;

Data File Name                                Tablespace Name           Data MB
--------------------------------------------- ------------------------- ----------
/u02/oradata/PSTG/system01.dbf                SYSTEM                    880
/u02/oradata/PSTG/sysaux01.dbf                SYSAUX                    990
/u02/oradata/PSTG/sh.dbf                      SH                        512,000
/u02/oradata/PSTG/users01.dbf                 USERS                     5
/u02/oradata/PSTG/soe.dbf                     SOE                       512,000
/u02/oradata/PSTG/undotbs01.dbf               UNDOTBS1                  400

On-line Datafile move

I have previously blogged on using the Oracle 12c ALTER DATABASE MOVE DATAFILE command and you can see the full syntax via the link above.

SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/system01.dbf' TO '+DATA/PSTG/DATAFILE/system01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/sysaux01.dbf' TO '+DATA/PSTG/DATAFILE/sysaux01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/sh.dbf' TO '+DATA/PSTG/DATAFILE/sh.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/users01.dbf' TO '+DATA/PSTG/DATAFILE/users01.dbf';
SQL> ALTER DATABASE MOVE DATAFILE '/u02/oradata/PSTG/undotbs01.dbf' TO '+DATA/PSTG/DATAFILE/undotbs01.dbf';

Monitor Move Progress

Below is a query which uses the V$SESSION_LONGOPS view to report the progress of online data file move operations.

column st heading 'Start Time' format a25
column time_remaining heading 'Time|Remaining'
column elapsed_seconds heading 'Elapsed|Seconds'
column sofar heading 'Sofar' format a10
column total heading 'Total' format a10
column progress heading 'Progress' format a10
column message heading 'Message' format a100

select to_char(start_time,'DD-MON-YY hh24:mi:ss') as st, time_remaining, elapsed_seconds, to_char(SOFAR/1024/1024,'999,999') || 'MB' as sofar, to_char(TOTALWORK/1024/1024,'999,999') || 'MB' as total, to_char((SOFAR/TOTALWORK)*100,'999') || '%' as progress, message from V$SESSION_LONGOPS;

Example Output

09-AUG-18 14:58:33 0 7 880MB 880MB 100%
Online data file move: data file 1: 922746880 out of 922746880 bytes done

09-AUG-18 15:01:07 0 7 990MB 990MB 100%
Online data file move: data file 3: 1038090240 out of 1038090240 bytes done

09-AUG-18 15:06:34 0 2767 512,000MB 512,000MB 100%
Online data file move: data file 5: 536870912000 out of 536870912000 bytes done

09-AUG-18 15:57:07 2757 38 6,962MB 512,000MB 1%
Online data file move: data file 2: 7300186112 out of 536870912000 bytes done

Updated Data file locations

We can check that the files have been relocated using the same query we ran earlier to identify the datafiles.

select file_name as dname, tablespace_name, to_char(bytes/1024/1024,'999,999') as data_mb from dba_data_files;

Data File Name                                Tablespace Name           Data MB
--------------------------------------------- ------------------------- ----------
+DATA/PSTG/DATAFILE/system01.dbf              SYSTEM                    880
+DATA/PSTG/DATAFILE/sysaux01.dbf              SYSAUX                    1,040
+DATA/PSTG/DATAFILE/sh.dbf                    SH                        512,000
+DATA/PSTG/DATAFILE/users01.dbf               USERS                     5
+DATA/PSTG/DATAFILE/soe.dbf                   SOE                       512,000
+DATA/PSTG/DATAFILE/undotbs01.dbf             UNDOTBS1                  400

Creating new Oracle 18c ASM Disk Groups

Creating New Volumes

Create the new ASM volume(s) using your storage platform; for this blog I have created three volumes on a Pure FlashArray, simply providing names and sizes.

Create new ASM Volume(s)

Note the volume serial numbers; these will be used in the multipath configuration wwid (World Wide Identifier).

New ASM volumes

SCSI storage device serial numbers are universally unique. Each one starts with a 36-bit (9 hexadecimal digit) vendor ID. Because the vendor ID is always the same for Pure Storage FlashArray volumes (3624a9370), it is omitted from the CLI and GUI displays.

Linux Configuration

Create multipath entries in /etc/multipath.conf for the three new volumes; in the example below the wwid (World Wide Identifier) is set to the vendor ID + serial number, e.g. '3624a9370' + 'ff551cc46aee54bf9000126ab'.
Note: the wwid needs to be in lowercase, and the alias name for ASM disks needs to be fewer than 30 characters, alphanumeric, using only the '_' special character.
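
As a minimal sketch (the alias names such as dg_data01 are illustrative, and a multipath sub-section is needed for each of the three volumes), the multipaths section could look like this:

multipaths {
        multipath {
                wwid    3624a9370ff551cc46aee54bf9000126ab
                alias   dg_data01
        }
        # repeat for the other two volumes, e.g. aliases dg_fra01 and dg_control_redo01
}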

Update multipath Configuration

Flush unused multipath devices

# multipath -F

Reload multipath configuration:

# service multipathd reload

Load and display multipath configuration, device mapper and other components

# multipath -v2
# multipath -ll

Oracle ASM Disk Discovery

Before we update our UDEV rules, let's check our Oracle ASM disk discovery string 'ASM_DISKSTRING'.

Using asmcmd

ASMCMD> dsget
parameter:/u01/shared_config/ocr_configuration,  /dev/pureasm
profile:/u01/shared_config/ocr_configuration, /dev/pureasm

Using sqlplus

SQL> show parameter ASM_DISKSTRING;

asm_diskstring string /u01/shared_config/ocr_configuration, /dev/pureasm

We can now edit or create a UDEV rules file that uses SYMLINK prefixes matching the ASM discovery string destination.

/etc/udev/rules.d/99-oracle-asmdevices.rules
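
As a minimal sketch of that rules file (the DM_NAME values are illustrative and must match your multipath aliases; the grid:asmadmin ownership matches the grid account groups shown elsewhere in this series):

# /etc/udev/rules.d/99-oracle-asmdevices.rules - illustrative example
KERNEL=="dm-*", ENV{DM_NAME}=="dg_data01",         OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="pureasm/dg_data01"
KERNEL=="dm-*", ENV{DM_NAME}=="dg_fra01",          OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="pureasm/dg_fra01"
KERNEL=="dm-*", ENV{DM_NAME}=="dg_control_redo01", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="pureasm/dg_control_redo01"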

Reload UDEV rules as the root user using ‘udevadm trigger’ to set ownership and create links to the ASM disk discovery directory.

ls -la /dev/pureasm

We are now ready to create the ASM disk groups using SQL*Plus or the Oracle ASM Configuration Assistant 'asmca' UI (a SQL*Plus sketch follows below).
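
As a minimal sketch of the SQL*Plus route (the disk path assumes the illustrative UDEV symlink above; external redundancy is used as the array provides the data protection):

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
       DISK '/dev/pureasm/dg_data01'
       ATTRIBUTE 'compatible.asm'='18.0', 'compatible.rdbms'='18.0';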

I repeated this three times, creating +DATA, +FRA and +CONTROL_REDO, which I will use in my next blog post, where I plan to share how to migrate from a file system to Oracle ASM.

asmca

Oracle 18c Grid Infrastructure installation

Preparation

[root@z-oracle1 ~]# cd /u01/app/18.0.0/grid/cv/rpm
[root@z-oracle1 rpm]# ls
cvuqdisk-1.0.10-1.rpm

[root@z-oracle1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing… ################################# [100%]
Updating / installing…
1:cvuqdisk-1.0.10-1 ################################# [100%]

18c Grid Infrastructure installation

Confirm ‘grid’ owner groups.

[grid@z-oracle1 grid]$ id grid

uid=54331(grid) gid=1001(dba) groups=1001(dba),54321(oinstall),54333(asmdba),54334(asmoper),54335(asmadmin),54336(racdba)

Create 18c Grid Infrastructure Oracle Base, Home and admin directories e.g.

[grid@z-oracle1 ~]$ mkdir -p /u01/app/18.0.0/grid
[grid@z-oracle1 ~]$ mkdir /u01/app/grid

Launch the installer from a terminal capable of starting an X session.

[grid@z-oracle1 grid]$ cd $GRID_HOME
[grid@z-oracle1 grid]$ ./gridSetup.sh

Select Clustered, non-clustered or software only installation

Step 1

Update the ASM disk discovery path (if required), and select disk path for your +DATA disk group.

Before I started the 18c installer I created three Pure Storage volumes and configured UDEV rules to help with identification. I can now select my 1TB +DATA volume and set the Redundancy to ‘External’ as the storage platform will provide the required data protection.

Step 2

Provide ASM passwords and click ‘Next’

Step 3

Register with Oracle Enterprise Manager; I will skip this for now and click ‘Next’.

Step 4

Confirm Oracle ASM ‘grid’ account groups, click ‘Next’

Step 5

Confirm pre-created Oracle Base location.

Step 6

Select ‘Next’ and run root.sh in a separate terminal window when prompted

Step 7

Review 18c Grid Infrastructure summary and click ‘Install’ to confirm installation

Step 9

Run root.sh from a root window.

root.sh

Wait for the script to start the 18c Grid Infrastructure services, once complete click ‘OK’.

Step 11

If all has gone well, you should be presented with a ‘successful’ installation screen.

Now, let's take a first look at an 18c ASM instance; if you are observant you will see I have created two additional ASM disk groups, +FRA and +CONTROL_REDO.

I will check my previous blog posts and, if missing, will share how you can create ASM disk groups using ASMCMD (CLI) or ASMCA (UI).

Come back in a few days to see my first 18c database Blog post.

Adding new devices (VMware) to Linux without re-booting server

 

Background

Today I found myself needing to add three new disks to one of my Oracle database server VMware VMs; unfortunately I was unable to reboot the server, so I followed the steps below to add the new devices.

Create the LUNs using your storage platform and add them to your virtual machine using your preferred vSphere client.

VMware vSphere Web Client – 3 new devices highlighted

Use yum to install sg3_utils

[root@z-oracle1 ~]# yum install sg3_utils

Use the Linux ‘fdisk’ command to list current devices

[root@z-oracle1 dev]# fdisk -l

Rescan the SCSI bus using the rescan-scsi-bus.sh script.

[root@z-oracle1 dev]# rescan-scsi-bus.sh
Scanning SCSI subsystem for new devices
Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 0 0 0 0 …
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: NECVMWar Model: VMware IDE CDR00 Rev: 1.00
Type: CD-ROM ANSI SCSI revision: 05
Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 2 0 0 0 …
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: VMware Model: Virtual disk Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 02
Scanning for device 2 0 1 0 …

Use ‘fdisk’ to identify new device names

[root@z-oracle1 dev]# fdisk -l

Disk /dev/sdi: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

Disk /dev/sdj: 3298.5 GB, 3298534883328 bytes, 6442450944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

Disk /dev/sdk: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes

We should now be able to see the new devices in /dev.

[root@z-oracle1 dev]# ls -l /dev/sd*
brw-rw—-. 1 root disk 8, 0 May 14 14:14 /dev/sda
brw-rw—-. 1 root disk 8, 1 May 14 14:14 /dev/sda1
brw-rw—-. 1 root disk 8, 2 May 14 14:14 /dev/sda2
brw-rw—-. 1 root disk 8, 16 May 14 14:14 /dev/sdb
brw-rw—-. 1 root disk 8, 32 Jul 23 14:31 /dev/sdc
brw-rw—-. 1 root disk 8, 48 Jul 23 14:30 /dev/sdd
brw-rw—-. 1 root disk 8, 64 Jul 23 14:31 /dev/sde
brw-rw—-. 1 root disk 8, 80 Jul 9 21:02 /dev/sdf
brw-rw—-. 1 root disk 8, 96 Jul 9 21:02 /dev/sdg
brw-rw—-. 1 root disk 8, 112 Jul 9 21:02 /dev/sdh
brw-rw—-. 1 root disk 8, 128 Jul 24 10:46 /dev/sdi
brw-rw—-. 1 root disk 8, 144 Jul 24 10:46 /dev/sdj
brw-rw—-. 1 root disk 8, 160 Jul 24 10:46 /dev/sdk

We are now ready to partition our new disks using ‘fdisk’ (n = new, p = print, w = write).

[root@z-oracle1 dev]# fdisk /dev/sdi
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x1fea2f04.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (8192-2147483647, default 8192):
Using default value 8192
Last sector, +sectors or +size{K,M,G} (8192-2147483647, default 2147483647):
Using default value 2147483647
Partition 1 of type Linux and of size 1024 GiB is set

Command (m for help): p

Disk /dev/sdi: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0x1fea2f04

Device Boot Start End Blocks Id System
/dev/sdi1 8192 2147483647 1073737728 83 Linux
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

I then repeated this for ‘/dev/sdj’ and ‘/dev/sdk’ and am now ready to format the new partitions, e.g. mkfs.ext4 /dev/sdi1 (a sketch follows below).
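
As a minimal sketch of formatting and mounting one of the new partitions (the /u05 mount point is a placeholder; a UUID= entry in /etc/fstab is safer than a /dev/sdX name as device names can change between reboots):

[root@z-oracle1 dev]# mkfs.ext4 /dev/sdi1
[root@z-oracle1 dev]# mkdir -p /u05
[root@z-oracle1 dev]# mount /dev/sdi1 /u05
[root@z-oracle1 dev]# blkid /dev/sdi1     # note the UUID for a persistent /etc/fstab entry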

Enterprise Manager Cloud Control 13c (13.3) deploy Plug-Ins

At the beginning of July Oracle released Enterprise Manager Cloud Control 13.3 so I thought it was a good time to see what has changed and also document how to import an OEM Plug-In.

To get started, I created a new Oracle 12.2 database and then installed OEM 13.3 following the steps produced by my good friend Tim Hall (aka Oracle-Base).

Download Plug-Ins

A good place to get Oracle Enterprise Manager Plug-Ins is the Oracle Extensibility Exchange; here you will find many OEM Plug-Ins which have been developed by Oracle Partners and uploaded for use by the Oracle Community.

Now search for the required OEM Plug-In

Once downloaded transfer the file to your OEM server ready for importing.

Import the Plug-In

Log on to Enterprise Manager from the command line:

[oracle@z-oem 13c]$ emcli login -username=sysman
Enter password 
Login successful

Use emcli to import the local opar file, e.g.

[oracle@z-oem 13c]$ emcli import_update -file="/u01/app/oracle/13c/13.2.0.1.0_vlss.pure.xvp1_2000_0.opar" -omslocal
Processing update: Plug-in – Enterprise Manager for Pure Storage Flash Array
Successfully uploaded the update to Enterprise Manager. Use the Self Update Console to manage this update.

Deploy Plug-In

From the main screen, select the Setup Cog, then select Extensibility and then Plug-ins.

From the Plug-Ins page, select the ‘Pure Storage FlashArray’ plug-in in the ‘Servers, Storage and Network’ section.

Then select Deploy On and choose Management Servers.

If the prerequisite checks pass you can continue with the Plug-In deployment workflow.

Provide sys credentials for the OEM repository, and then click ‘Next’

At step 4 ‘Review’, click ‘Deploy’ to initiate the deployment process.

Now click on Status to monitor the progress of the deployment.

If the Plug-In deploys successfully you will see the ‘Status’ change to a green tick.

You are now ready to configure and use the newly installed OEM 13c Plug-In.

Oracle Database Automation with Ansible AWX

In the video below I demonstrate how a simple Ansible playbook can be used to automate the cloning of multiple Oracle databases in parallel.

In the demo I use the AWX Web UI to run the playbook. The AWX project is an open source community project sponsored by Red Hat, and it can be considered the ‘bleeding edge’ of Red Hat Ansible Tower.

AWX is designed to be fast moving and is where development takes place before being hardened and making its way into the enterprise-ready, Red Hat supported Ansible Tower solution.

AWX is great for lab work, but for all production workloads I would strongly recommend that Ansible Tower be considered.

Pure Code Developer Community

Visit the Pure Code Developer Community and click on the ‘Ansible Playbook Examples’ card to be directed to the public GitHub repository, where you will find this and other Oracle playbook examples.

Getting started with Ansible and Windows

Background

I spend most of my time working with Linux but occasionally I come across Oracle on Windows, so I thought it’s about time that I tried my hand at using Ansible on Windows.

Ansible on Windows

You cannot currently run Ansible on Windows, but you can manage Windows servers from a Linux or Mac Ansible control machine.

Ansible uses SSH to manage Linux servers; on Windows, Ansible requires the WinRM (Windows Remote Management) service. The Ansible docs detail the prerequisites, which include PowerShell v3, .NET 4.0 and WinRM; the steps to set up WinRM can be found here.

If you read my previous blog post Getting started with Ansible and Oracle you will have seen me use the ‘ping’ module to check connectivity to a Linux server; to do this in a Windows environment we use the ‘win_ping’ module via the -m option.
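
As a minimal sketch (the inventory group, hostname and credentials below are placeholders), an ad-hoc win_ping test could look like this:

# hosts - illustrative inventory
[windows]
z-win1.example.com

[windows:vars]
ansible_user=Administrator
ansible_password=ChangeMe123
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore

$ ansible windows -i hosts -m win_ping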

You can see that the Ansible ‘ping’ and ‘win_ping’ both return the familiar ‘pong’ message, indicating good connectivity.

OK, that’s pretty cool; now let’s try running the Windows ‘whoami.exe’ using the ‘win_command’ module.

And a quick ‘Hello World’ test via PowerShell and the ‘win_shell’ module.
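
Using the same placeholder inventory as above, the equivalent ad-hoc commands for these two tests could look like this:

$ ansible windows -i hosts -m win_command -a "whoami.exe"
$ ansible windows -i hosts -m win_shell -a "Write-Host 'Hello World'"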

Next steps: create some Windows playbooks and test them, but that’s for another blog post.