Making my old HP OVM Server go a whole lot faster

I’m a massive believer in getting good use out of old hardware which, although maybe not great for production use, can make a fantastic piece of dev/test or training kit, especially when combined with Oracle VM. We had a couple of HP servers sitting around doing very little, but after combining them into one I had a nice little server with lots of cores, lots of RAM and lots of disks. Not cutting edge, but good enough and, more importantly, able to run 6 x 2-node RAC clusters for internal training purposes.

Setting up OVM Server doesn’t take long at all, and it recognised the RAID volume I built using the server’s RAID controller for the OVS repository with no issues. So I went about creating some templates for the training I had set up with some of the team. For this kind of environment, where having lots of space is more important than performance, I opt for RAID 5. However, the performance was really, really slow. Think about 2 hours to copy the Oracle home from node 1 to node 2. Even RAID 5 shouldn’t be this slow.

So, I found a working HP Array Configuration Utility, version 9.40.12.0, which installed into OVM Server with a simple ‘rpm -ivh’.
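A minimal sketch of the install and launch (the exact RPM filename is my assumption – use whichever build you download from HP):

rpm -ivh hpacucli-9.40-12.0.x86_64.rpm
hpacucli

Then, at the utility’s prompt: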

ctrl all show detail

Smart Array P410 in Slot 1
Bus Interface: PCI
Slot: 1
RAID 6 (ADG) Status: Disabled
Controller Status: OK
Hardware Revision: C
Firmware Version: 2.74
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 15 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60  min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 0 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 100% Read / 0% Write
Drive Write Cache: Disabled
Total Cache Size: 256 MB
Total Cache Memory Available: 144 MB
No-Battery Write Cache: Disabled
Battery/Capacitor Count: 0
SATA NCQ Supported: True

So I could see that all the cache was given over to reads, Drive Write Cache was disabled, and No-Battery Write Cache was also disabled (probably because I had no batteries installed). Basically, nothing there was going to help my write speed in the slightest. As it’s a server for test and training, I want as much performance and as much storage as possible. If I lose power and lose some data then, to be honest, I’m not really bothered. My templates will still be there, static and intact, and if the worst-case scenario came to pass, installing OVM Server again is not much of a hardship.

So, full steam ahead and off with the following commands to turn on all the caching possible.

ctrl slot=1 modify nbwc=enable
ctrl slot=1 modify dwc=enable forced
ctrl slot=1 modify cacheratio=25/75

Which led to the following:

ctrl all show detail

Smart Array P410 in Slot 1
Bus Interface: PCI
Slot: 1
RAID 6 (ADG) Status: Disabled
Controller Status: OK
Hardware Revision: C
Firmware Version: 2.74
Rebuild Priority: Medium
Expand Priority: Medium
Surface Scan Delay: 15 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60  min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 0 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 25% Read / 75% Write
Drive Write Cache: Enabled
Total Cache Size: 256 MB
Total Cache Memory Available: 144 MB
No-Battery Write Cache: Enabled
Battery/Capacitor Count: 0
SATA NCQ Supported: True

The outcome? Copying the DB home from node 1 to node 2 took a total of 9 minutes rather than hours. Quite a massive improvement, and it shows just how important caching is. So with a working version of the HP Array Configuration Utility for OVM Server, a couple of old servers and a little bit of time, I now have a super-fast training environment.

Using VNC Viewer with Oracle VM 3

The new console in OVM 3.3.1 is a bit faster and more reliable than the old one in 3.2.8, but if you want to access a console using a VNC viewer you can do the following:

xm list -l <vm id>
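(If you don’t know the VM id, a plain ‘xm list’ on the OVM Server first lists every running domain along with its id.)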

Look for the line that states:

            (location 127.0.0.1:5903)

The 59XX port changes depending on the VM and startup order; in the above it’s 5903, so forward port 5903 to your client over SSH, connect with VNC, and you have a console session.
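As a concrete sketch (server name and user are placeholders for your own environment):

ssh -L 5903:127.0.0.1:5903 root@ovm-server
vncviewer 127.0.0.1::5903

The double colon in TigerVNC-style viewers means a raw TCP port rather than a display number.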

Voting Disk Setup in a 12c Grid Infrastructure Extended RAC Environment

I was recently asked to configure a new 12c Grid Infrastructure environment for a customer. As part of this, the Grid Infrastructure Management Repository (the GIMR, which holds the Cluster Health Monitor data) would be configured, so the ASM disk group would need to be much bigger than previously, and I also wanted to find out how the quorum-based NFS voting disk would work in this new situation.

In summary, not much has changed apart from the sizing, and the quorum disk remains just that.

I started off with a Normal Redundancy ASM Disk Group (OCRVOTE) with 3 x 10 GB LUNs presented (OCRVOTE1, OCRVOTE2 and OCRVOTE3).

[root@ ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE 898a1b4a12e14fd0bfac44781420a9e2 (ORCL:OCRVOTE1) [OCRVOTE]
2. ONLINE ba5a0ec486d54f7cbfe4274d359fd7fb (ORCL:OCRVOTE2) [OCRVOTE]
3. ONLINE 7356bfa7d7844ffabf16d1c5e1d70200 (ORCL:OCRVOTE3) [OCRVOTE]
Located 3 voting disk(s).

I created a 500 MB ‘disk’ on my NFS mount point:
$ dd if=/dev/zero of=/voting_disk/vote_quorum bs=1M count=500
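For completeness, Clusterware wants the NFS mount itself to use hard-mount options; a sketch of an /etc/fstab entry (server and export path are placeholders, options along the lines of Oracle’s standard-NFS voting disk guidance):

nfs-server:/export/voting_disk  /voting_disk  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

The file also needs to be readable and writable by the GI owner, e.g. chown oracle:oinstall /voting_disk/vote_quorum.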

I then launch the ASM Configuration Assistant (ASMCA):

Right click on OCRVOTE and select 'Add Disks'. I had to change my Disk Discovery Path to 'ORCL:*,/voting_disk/*' for the NFS disk to show up.


I select the disk and tick the Quorum Box and press OK.
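If you prefer the command line to ASMCA, the same add can be done from SQL*Plus connected to the ASM instance as SYSASM – a sketch, with a failgroup name of my own choosing:

SQL> ALTER DISKGROUP OCRVOTE ADD QUORUM FAILGROUP nfs_quorum DISK '/voting_disk/vote_quorum';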

A check shows that the 3 voting disks are still on the original ASM LUNs, so we need to drop the temporary 3rd disk so that its voting file relocates to the NFS-based quorum disk.

[root@ voting_disk]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE 898a1b4a12e14fd0bfac44781420a9e2 (ORCL:OCRVOTE1) [OCRVOTE]
2. ONLINE ba5a0ec486d54f7cbfe4274d359fd7fb (ORCL:OCRVOTE2) [OCRVOTE]
3. ONLINE 7a1a99a6743d4f2abf1a061ee764bdb9 (ORCL:OCRVOTE3) [OCRVOTE]
Located 3 voting disk(s).

Back into ASMCA and this time, right click on the OCRVOTE disk group and select ‘Drop Disks’


Select the ORCL:OCRVOTE3 disk and click OK.
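The SQL*Plus equivalent of the drop is a one-liner, and v$asm_operation lets you watch the rebalance it kicks off (the rebalance power is just an example value):

SQL> ALTER DISKGROUP OCRVOTE DROP DISK OCRVOTE3 REBALANCE POWER 4;
SQL> SELECT operation, state, est_minutes FROM v$asm_operation;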


If we take a quick look at the new configuration in ASMCMD we can see a rebalance is currently in place.
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL Y 512 4096 1048576 20972 13501 10236 1400 0 Y OCRVOTE/

[root@ voting_disk]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE 898a1b4a12e14fd0bfac44781420a9e2 (ORCL:OCRVOTE1) [OCRVOTE]
2. ONLINE ba5a0ec486d54f7cbfe4274d359fd7fb (ORCL:OCRVOTE2) [OCRVOTE]
3. ONLINE efd0792521db4fedbf04ff59886fad17 (/voting_disk/vote_quorum) [OCRVOTE]
Located 3 voting disk(s).

We have relocated our 3rd voting disk to our 3rd site on NFS.

The main thing here is sizing: the ASM disks holding the OCR and voting files need to be much bigger than before (hence the 3 x 10 GB LUNs) to make room for the Management Repository, while the 3rd-site NFS quorum disk itself can stay small (500 MB here).

Oracle OpenWorld 2013 – Larry Ellison Keynote 22nd September 2013 – The Oracle Database Backup, Logging, Recovery Appliance

Designed for databases

An initial backup resides on the backup appliance and thereafter only logs are shipped to it (deltas only, to minimize network load).

Can go back to any point in time.

Available as a cloud service on the Oracle Public Cloud.

Backup Appliances can replicate to other Backup Appliances (cloud or not).

Oracle OpenWorld 2013 – Larry Ellison Keynote 22nd September 2013 – The M6-32 Big Memory Machine

The 2nd product announced was a machine for the In-Memory Database (the Big Memory Machine), and the following numbers impressed me greatly:

32 TB DRAM (1024 Memory DIMMS)

32 SPARC M6 Chips (12 cores per processor and 96 threads per processor)

3TB per second bandwidth (that’s the size of my desktop hard disk – at a typical ~150 MB/s spinning-disk rate, reading 3 TB from start to finish takes around 5.5 hours, while this machine moves that much data every second)

The list price was shown as $3 million (I’m not a salesman so I don’t really get too worked up about costs).

The M6-32 is also available as a SuperCluster by attaching it to the Exadata Storage Cells over InfiniBand.  This brings the benefits of the Exadata I/O Subsystem to the Big Memory Machine.


Oracle OpenWorld 2013 – Larry Ellison Keynote 22nd September 2013 – The In-Memory Database

The first topic spoken about was the ‘In Memory Option for 12c Database’.

A summary of the main points:

  • Get 100x faster queries and insert data twice as fast
  • Works on OLTP and DW systems by having Oracle 12c store data in a row and column format simultaneously (Dual Format In-Memory Database)
  • In Memory Columnar store is processed with no logging (the row store is logged). This means a near zero overhead on changes.
  • Data loaded into memory upon startup or first access (similar in a way I suppose to Exadata Storage Indexes in concept IMO).
  • Each CPU core can scan billions of rows per second.
  • Works with joins as well for 10x faster performance
  • For complex reports up to 20x faster can be achieved by in-memory technology.
  • Indexes for analytic queries can be replaced by the in-memory column store. This speeds up OLTP as there are fewer indexes to maintain, and it means less DBA tuning and administration.
  • Works with all applications unchanged and because it is all in-memory then all the disk based objects remain as they are currently.
  • Obviously, whole databases can already be stored completely in memory if you have enough RAM; the extra columnar format enabled by this new feature makes those queries run even faster.
  • Can be scaled out using RAC to any size, in-memory queries can be parallelized across servers to access local column data.  Optimized for Exadata with a new ‘Direct-to-wire’ InfiniBand protocol.
  • Can be scaled up with servers that have large amount of CPUs and/or Cores.

To enable the In-Memory Column Store (a sketch of the SQL follows this list):

  1. Configure how much memory it can use: inmemory_size = XXX GB
  2. Configure the objects to be in memory: ALTER TABLE | PARTITION … INMEMORY;
  3. Drop analytic indexes that are no longer required which will improve OLTP performance.
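Putting those steps together, a minimal sketch of the SQL (the table and index names and the size are examples, not from the keynote; inmemory_size is a static parameter, hence the restart):

SQL> ALTER SYSTEM SET inmemory_size = 16G SCOPE=SPFILE;
SQL> -- restart the instance, then mark objects for the column store:
SQL> ALTER TABLE sales INMEMORY;
SQL> DROP INDEX sales_analytics_ix;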

Creating an Oracle 12c RAC Cluster – PART 1 OS and GI

This guide shows how to create a test RAC cluster. It doesn’t necessarily cover all the steps but it does enough to give you a working environment.

I create 2 Oracle Linux 5 Update 9 64-bit Virtual Machines, each with 3072 MB RAM.

Each node has 2 NICs – 1 public and 1 private. DNS is not being used, so the following entries are used in the /etc/hosts file:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.0.121 lin-rac-2a.local lin-rac-2a
192.168.0.122 lin-rac-2b.local lin-rac-2b
192.168.0.123 lin-rac-2a-vip.local lin-rac-2a-vip
192.168.0.124 lin-rac-2b-vip.local lin-rac-2b-vip
192.168.0.129 scan-2.local scan-2
10.10.0.121 lin-rac-2a-priv
10.10.0.122 lin-rac-2b-priv

I ensure the oracleasmlib, oracleasm (kernel driver) and oracleasm-support RPMs are installed as well, along the lines of the sketch below.
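A sketch of that install (the version strings are my assumptions – grab the packages matching your running kernel, which under OVM is the Xen one):

rpm -ivh oracleasm-support-2.1.8-1.el5.x86_64.rpm \
         oracleasmlib-2.0.4-1.el5.x86_64.rpm \
         oracleasm-`uname -r`-2.0.5-1.el5.x86_64.rpm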

Storage consists of :

  • 1 x 2GB Virtual Disk for the OCR and VOTE disks.
  • 1 x 40GB Virtual Disk for the DATA ASM Disk Group.
  • 1 x 20GB Virtual Disk for the FRA ASM Disk Group.

If you are using Oracle VM then remember to tick the Shareable option when creating the virtual disk.

A 'fdisk -l' shows the new disks:

Disk /dev/xvdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdb doesn't contain a valid partition table

Disk /dev/xvdc: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdc doesn't contain a valid partition table

Disk /dev/xvdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdd doesn't contain a valid partition table

So, create a partition on them as such:

[root@lin-rac-2a network-scripts]# fdisk /dev/xvdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@lin-rac-2a network-scripts]# fdisk /dev/xvdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 5221.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-5221, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-5221, default 5221):
Using default value 5221

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@lin-rac-2a network-scripts]# fdisk /dev/xvdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 2610.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610):
Using default value 2610

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Role separation has its place, but for my test environment I am configuring GI and the DB to run under the oracle user, so check your oracle user looks something like this:

[root@lin-rac-2a ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(asmadmin),54324(asmdba),54325(asmoper)

[root@lin-rac-2a ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

And now let's label them using oracleasm:

[root@lin-rac-2a ~]# oracleasm createdisk OCRVOTE1 '/dev/xvdb1'
Writing disk header: done
Instantiating disk: done
[root@lin-rac-2a ~]# oracleasm createdisk DATA1 '/dev/xvdc1'
Writing disk header: done
Instantiating disk: done
[root@lin-rac-2a ~]# oracleasm createdisk FRA1 '/dev/xvdd1'
Writing disk header: done
Instantiating disk: done
[root@lin-rac-2a ~]# oracleasm listdisks
DATA1
FRA1
OCRVOTE1

On node 2:

[root@lin-rac-2b ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

[root@lin-rac-2b ~]# /etc/init.d/oracleasm listdisks
DATA1
FRA1
OCRVOTE1

So, the 2nd node can see the marked shared storage, and at this point we can start the installation of 12c GI.
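As the oracle user, from wherever you unzipped the 12c GI media (the staging path here is my assumption):

su - oracle
cd /u01/stage/grid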

./runInstaller

Run the scripts – once complete press OK.
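By 'the scripts' I mean the root scripts the installer prompts for near the end of the run; assuming default locations, run them as root on each node in turn:

/u01/app/oraInventory/orainstRoot.sh
/u01/app/12.1.0/grid/root.sh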

If we look at the log file we can see errors because the SCAN address could not be resolved using DNS. This is fine, as for the test environment we are using hosts file entries, so these errors can be ignored.

INFO: ERROR:
INFO: PRVG-1101 : SCAN name "scan-2.local" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scan-2.local" (IP address: 192.168.0.129) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-2.local"
INFO: Checking SCAN IP addresses...
INFO: Check of SCAN IP addresses passed
INFO: Verification of SCAN VIP and Listener setup failed
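A quick sanity check that the SCAN name resolves to the hosts file entry (getent consults /etc/hosts, unlike nslookup):

getent hosts scan-2.local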