Salisbury and Asquith, my ‘frameworks’ for automated, nearly-hands-free building of Oracle servers, are retiring. Which is to say, I’m not going to maintain them any more.

My attempts over the years to persuade my System Admin colleagues at work that RAC via NFS (as Salisbury uses) might be a good idea have all fallen on deaf ears, Kevin Closson’s fine articles on the subject notwithstanding. So Salisbury became a bit of a dead end after that, which is why I cooked up Asquith. Asquith uses real iSCSI (as real as anything a virtual environment can cook up, anyway!) and layers ASM on top of that, and thus provides me with a playground that much more faithfully reflects what we do in our production environment.

But it’s a pain having two frameworks doing pretty much the same job. So now I’m phasing them out and replacing them with Churchill. The Churchill framework uses NFS (because it’s much easier to automate the configuration of that than it is of iSCSI), but it then creates fake hard disks in the NFS shares and layers ASM on top of the fake hard disks. So you end up with a RAC that uses ASM, but without the convoluted configuration previously needed.
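The fake-disk trick is nothing exotic. Stripped of Churchill’s specifics, it amounts to something like this sketch (the paths and sizes here are illustrative only, not Churchill’s actual ones; real ASM candidate disks would be several GB each):

```shell
# Illustrative only: carve fixed-size 'disk' files out of a mounted NFS
# share, for ASM to later treat as candidate disks.
share=/tmp/nfsmount            # stands in for the real NFS mount point
mkdir -p "$share/asmdisks"
for i in 1 2 3; do
  # 8MB purely for demonstration; real ASM disks would be far larger
  dd if=/dev/zero of="$share/asmdisks/disk$i" bs=1M count=8 status=none
done
ls -l "$share/asmdisks"
```

The point is simply that ASM doesn’t much care what the “disks” really are, so the easy-to-automate NFS configuration can still deliver an ASM-based cluster.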

The other thing we do in production at work is split the ownership of the Grid Infrastructure and the Database (I don’t know why they decided to do that: it was before my time. The thing is all administered by one person -me!- anyway, so the split ownership is just an annoyance as far as I’m concerned). Since I’ve been bitten on at least one significant occasion where mono- and dual-ownership models do things differently, I thought I might as well make Churchill dual-ownership aware. You don’t have to do it that way: Churchill will let you build a RAC with everything owned by ‘oracle’ if you like. But it does it by default, so you end up with ‘grid’ and ‘oracle’ users owning different bits of the cluster, unless you demand otherwise.

Other minor changes from Asquith/Salisbury: Churchill drops support for the oldest Oracle version, since it’s now well past support. You can still build Churchill infrastructures with the newer versions; of those, only the last is freely available from OTN.

Additionally, the bootstrap lines have changed a little. You now invoke Alpher/Bethe installations by a reference to “ks.php” instead of “kickstart.php” (I don’t like typing much!). And there’s a new bootstrap parameter: “split=y” or “split=n”. That turns on or off the split ownership model I mentioned earlier. By default, “split” will be “y”.
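As a sketch of the defaulting behaviour just described (this is not Churchill’s actual code, which lives server-side in ks.php; the function name is my invention), handling a split=y/n parameter might look like:

```shell
# Hypothetical sketch of defaulting the split=y/n bootstrap parameter.
parse_split() {
  local value="${1:-y}"        # absent means 'y': split ownership by default
  case "$value" in
    y|n) echo "$value" ;;
    *)   echo "y" ;;           # anything unrecognised falls back to the default
  esac
}
parse_split n
```

Calling `parse_split` with no argument at all (or with junk) yields the default ‘y’, which is exactly the “split ownership unless you demand otherwise” behaviour.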

Finally, I’ve made the whole thing about 5 times smaller than before by the simple expedient of removing the Webmin web-based system administration tool from the ISO download. I thought it was a good idea at the time to include it for Asquith and Salisbury but, in fact, I’ve never subsequently used it and it made the framework ISO downloads much bigger than they needed to be. Cost/benefit wasn’t difficult to do: Webmin is gone (you can always obtain it yourself and add it to your servers by hand, of course).

The thing works and is ready for download right now. However, it will take me quite some time to write up the various articles and so on, so bear with me on that score. All the documentation, as it gets written, will be accessible from here.

The short version, though, is that you can build a 2-node RAC and a 2-node Active Data Guard setup with six basic commands:

  • ks=hd:sr1/churchill.ks (to build the Churchill Server)
  • ks= oraver=11203 ksdevice=eth0 (to build Alpher)
  • ks= oraver=11203 filecopy=n ksdevice=eth0 (to build Bethe)
  • ks= (to build Atlee, the file server for the Data Guard nodes)
  • ks= oraver=11203 ksdevice=eth0 (to build Gamow)
  • ks= oraver=11203 filecopy=n ksdevice=eth0 (to build Dalton)

With Churchill and the rest of the crew, I can now build a pretty faithful replica of my production environment in around 2 hours. Not bad.

Salisbury and Asquith will remain available from the front page until the Churchill documentation is complete; after that, they’ll disappear from the front page but remain available for download from the Downloads page, should anyone still want them.

A Miscellany

It’s been one of those periods when ‘nothing remarkable ever happens’. So, in desperation, I decided to try to find a blog post or two in the unremarkable instead.

Let’s start with my little HP Folio 13, my near-two-year-old notebook, of which I said in a recent blog piece, “the Folio only has 4GB RAM, so running multiple simultaneous VMs is not really an option: this Oracle will have to run on the physical machine or not at all”.

Absolutely accurate as it stands, in that the thing does indeed ship with only a 128GB hard disk and 4GB RAM, which is not enough to hold a decent party, let alone run a decent database.

However, I had reckoned without these guys. Their web site tools found me this:

It’s a 250GB mSATA hard drive (mSATA essentially being the innards of an ordinary solid state hard drive without the fancy external casing). At a stroke, and for relatively modest outlay, I was able to double my disk capacity and its speed. Virtualisation on such a storage platform becomes distinctly do-able.

My second purchase was this:

For a mere AU$100, that 8GB stick of laptop RAM doubles the laptop’s existing capacity -and, again at a stroke, makes it more than capable of hosting a 3-machine Oracle RAC.

Fitting these goodies was not a piece of cake, I have to say, what with me being blessed with fingers that are as dainty as a French boulangerie’s Baguette-Rex. For the most part, I followed the instructions provided by this kind Internet soul without incident, though I still managed to rip out the connector ribbons that make minor details like the keyboard and monitor work in my heavy-handed case-opening attempts. I’m pleased to report, however, that the relevant connectors appear to have been designed with complete Klutzes in mind, so I was able to reconnect them when required and the laptop is now operating normally once more.

So now I am blessed with a 16GB, 1.5TB SSHD monster of a Toshiba laptop for running anything serious (for example, a 2-node RAC and 2-node Data Guard setup, practising for patches, failovers and switchovers). It is technically portable, and so I can brace my neck and arms and lug it into work on the train if I have to.

But with the peanut-sized hardware upgrades mentioned here, however clumsily fitted by yours truly, I am now additionally blessed with an 8GB, 250GB SSD svelte, barely-noticeable HP ultrabook that I can carry around for hours and not mind… and it’s good enough to run a Windows virtual machine with SQL Server and a 2-node Oracle RAC, so practising patching, SQL Server→Oracle replication and such database-y things is trivially easy, without breaking my neck or upper arms.

It’s nice to have rescued a near-two-year-old ultrabook from oblivion, too, because the additional hardware has not only extended the original machine’s technical capacity, it’s just about doubled its useful lifetime.

Flushed with my new hardware capabilities, then, I recently decided to dry-rehearse updating an Oracle RAC by applying the January 2014 CPU patchset to it (which for Grid+RAC purposes is patch 17735354). It didn’t go awfully well, to be honest -and the reason it didn’t was instructive!

The basic process of applying a Grid+RAC patch to a node is:

  1. Copy the patchfile to an empty directory owned by the oracle user (I used /home/oracle/patches), and unzip it there
  2. Make sure the /u01/app/grid/OPatch and /u01/app/oracle/product/11.2.0/db_1/OPatch directories on all nodes are wiped and replaced with the latest unzipped p6880880 download (that gets your patching binaries right)
  3. Create an ‘ocm response file’ by issuing the command /u01/app/grid/OPatch/ocm/bin/emocmrsp -no_banner -output /home/oracle/ocm.rsp (on all nodes)
  4. Become the root user, set your PATH to include /u01/app/grid/OPatch and then launch opatch auto /home/oracle/patches -ocmrf /home/oracle/ocm.rsp

After you launch the patch application utility at Step 4, it’s all supposed to be smooth sailing. Unfortunately, whenever I did this on Gamow (the primary node of my standby site and thus the first node to be patched in a ‘standby first’ scenario), I got this result:

2014-02-17 12:56:45: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params

Stopping RAC /u01/app/oracle/product/11.2.0/db_1 ...
Stopped RAC /u01/app/oracle/product/11.2.0/db_1 successfully

patch /home/oracle/patches/17592127/custom/server/17592127  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
patch /home/oracle/patches/17540582  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 

Stopping CRS...
Stopped CRS successfully

patch /home/oracle/patches/17592127  apply failed  for home  /u01/app/grid

Starting CRS...
CRS-4123: Oracle High Availability Services has been started.
Failed to patch QoS users.

Starting RAC /u01/app/oracle/product/11.2.0/db_1 ...
Started RAC /u01/app/oracle/product/11.2.0/db_1 successfully

opatch auto succeeded.

If you read it fast enough, you might just glance at the last line there and think everything is tickety-boo: “opatch auto succeeded”, after all! You might even scan through some of the lines shown getting to that point which say happy things like, “17592127 apply successful for home /u01/app/oracle/product/11.2.0/db_1” and conclude that all’s well. But a keener eye is needed to notice that *one* line says “17592127 apply failed for home /u01/app/grid” and another mentions something about having “Failed to patch QoS users”. So what’s going on: is opatch being successful or not?

The answer lies in the log file which it tells you it’s created. Mine had this sort of stuff in it:

2014-02-17 13:06:51: Successfully removed file: /tmp/fileS5bCZV
2014-02-17 13:06:51: /bin/su exited with rc=1

2014-02-17 13:06:51: Error encountered in the command /u01/app/grid/bin/qosctl -autogenerate
>  Syntax Error: Invalid usage
>  Usage: qosctl <username> <command>
>    General
>      username - JAZN authenticated user. The users password will always be prompted for.
>    Command are:
>      -adduser <username> <password> |
>      -checkpasswd <username> <password> |
>      -listusers |
>      -listqosusers |
>      -remuser <username> |
>      -setpasswd <username> <old_password> <new_password> |
>      -help 
>  End Command output
2014-02-17 13:06:51: Running as user oracle: /u01/app/grid/bin/crsctl start resource ora.oc4j
2014-02-17 13:06:51: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/bin/crsctl start resource ora.oc4j '
2014-02-17 13:07:06: Removing file /tmp/file102UrG
2014-02-17 13:07:06: Successfully removed file: /tmp/file102UrG
2014-02-17 13:07:06: /bin/su successfully executed

Again, that last line shows opatch has a nasty habit of declaring success at the drop of a hat! It may distract you from seeing that there’s been a syntactical problem: the patch tool was trying to execute qosctl -autogenerate and encountered a syntax error instead. Clearly, the qosctl program didn’t like “autogenerate” as a command switch. Perhaps at this point you think, “Another fine Oracle stuff-up, but as I don’t use Quality of Service features anyway, this won’t be of significance to me”.

Unfortunately, it will -because the syntax error here is not really what you’re supposed to be looking at. The syntax error is the clue: the autogenerate command would be syntactically correct if the qosctl binaries had been patched to the newer version (the autogenerate switch was introduced at around that point). So it can only be a syntax error if the binaries haven’t been patched successfully. And if this particular qosctl binary wasn’t patched, there’s a very good chance that some other binaries that you do make use of will have been skipped too.

But to see evidence for whether that’s a problem or not, you have to look upwards in the patching log, and keep a sharp eye out for this:

2014-02-17 13:05:22: The apply patch output is Oracle Interim Patch Installer version
 Copyright (c) 2013, Oracle Corporation.  All rights reserved.

 Oracle Home       : /u01/app/grid
 Central Inventory : /u01/app/oraInventory
    from           : /u01/app/grid/oraInst.loc
 OPatch version    :
 OUI version       :
 Log file location : /u01/app/grid/cfgtoollogs/opatch/opatch2014-02-17_13-05-18PM_1.log

 Verifying environment and performing prerequisite checks...
 Prerequisite check "CheckSystemSpace" failed.
 The details are:
 Required amount of space(6601.28MB) is not available.
 UtilSession failed:
 Prerequisite check "CheckSystemSpace" failed.
 Log file location: /u01/app/grid/cfgtoollogs/opatch/opatch2014-02-17_13-05-18PM_1.log

 OPatch failed with error code 73

2014-02-17 13:05:22: patch /home/oracle/patches/17592127  apply failed  for home  /u01/app/grid

So this comes from about 1 minute before the qosctl syntax error report… and is clearly the source of the original ‘failed to apply’ error that was displayed as part of opatch’s screen output. And the cause for that error is now apparent: the patch failed because a ‘CheckSystemSpace’ prerequisite failed. Or, in plain English, I haven’t got enough free disk space to apply this patch.
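Given that, a pre-flight space check before launching opatch would have saved the whole round trip. A minimal sketch, with the threshold taken from the ~6601MB figure in the error above (round it up a little; the exact requirement varies by patch):

```shell
# Check free space on / before patching; 6601MB is what OPatch demanded
# in the log above, so insist on a little more than that.
required_mb=6700
avail_mb=$(df -Pm / | awk 'NR==2 {print $4}')
if [ "$avail_mb" -lt "$required_mb" ]; then
  echo "NOT ENOUGH SPACE: ${avail_mb}MB free, ${required_mb}MB wanted"
else
  echo "Space check passed: ${avail_mb}MB free"
fi
```

Two minutes spent on something like this beats twenty minutes of opatch rolling forward and back again.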

If you’re like me, that will surprise you. My file system has a reasonable amount of free space, after all:

[[email protected] db_1]$ df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/sda2           21G   15G  5.3G  74% /
tmpfs              1.9G  444M  1.5G  24% /dev/shm
balfour:/griddata   63G  3.1G   57G   6% /gdata
balfour:/dbdata     63G  3.1G   57G   6% /ddata

5.3GB of free space is not exactly generous, but it’s non-trivial, too… and yet it seems not to be enough for this patch to feel comfortable.

Anyway, to cut a long story short(er): never just focus on the bleeding obvious errors reported by OPatch. Dig deeper, look harder… you’ll probably find something which explains that the obscurely-worded “failed to patch QoS users” is actually just a plea for more disk space.
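One way to make that digging habitual is to grep the log for failure markers rather than trusting the final status line. A trivial demonstration, using a mocked-up log fragment (since the real log path varies per run):

```shell
# Mock up a log fragment like the one above, then scan it the sceptical way.
log=/tmp/opatch_demo.log
cat > "$log" <<'EOF'
patch /home/oracle/patches/17592127 apply failed for home /u01/app/grid
Prerequisite check "CheckSystemSpace" failed.
opatch auto succeeded.
EOF
# 'succeeded' at the bottom notwithstanding, the failures are one grep away:
grep -iE 'fail|error code' "$log"
```

Run against a real opatch log, that one-liner would have surfaced both the failed home and the CheckSystemSpace prerequisite immediately.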

I’ll wrap this blog piece up by saying that I deliberately create my RAC nodes with only 25GB hard disks (it says so in the instructions!). I wondered after this experience whether I’d need to modify my Salisbury and Asquith articles to specify a larger hard disk size than that… but actually, it turns out not to be necessary. Instead, make sure you delete the contents of the /osource directory before you start patching (that means wiping out the binaries needed for installing Oracle and Grid… by now, you need neither, of course). If you do this, therefore:

[[email protected] osource]$ cd grid
[[email protected] grid]$ rm -rf *
[[email protected] grid]$ cd ..
[[email protected] osource]$ cd database
[[email protected] database]$ rm -rf *
[[email protected] database]$ df -h
Filesystem         Size  Used Avail Use% Mounted on
/dev/sda2           21G   12G  8.2G  59% /
tmpfs              1.9G  444M  1.5G  24% /dev/shm
balfour:/griddata   63G  3.1G   57G   6% /gdata
balfour:/dbdata     63G  3.1G   57G   6% /ddata

…then I can promise you that 8.2GB of free space is adequate and the PSU will be applied without error, second time of asking.

Of course, you may prefer simply to increase the size of the hard disk you’re working on so that there’s loads of free space, regardless of whether you delete things or not. That’s the approach I first took, too… and I ran into all sorts of problems when I tried it. But that’s a story for another blog piece, I think!

Asquith and the new Red Hat

Whilst I was busy planning my Paris perambulations, Red Hat went and released version 6.5 of their Enterprise Server distro. Oracle swiftly followed …and, even more remarkably, CentOS managed to be almost as swift, releasing their 6.5 version on December 1st. Scientific Linux has not yet joined this particular party, but I assume it won’t be long before they do.

I had also assumed Asquith would work unchanged with the new distro -but I hadn’t banked on the clumsy way I originally determined the distro version number which actually meant it all fell into a nasty heap of broken. Happily, it only took a minute or so to work out which bit of my crumbly code was responsible and that’s now been fixed.

Asquith therefore has been bumped to a new 1.07 release, making it entirely compatible with any 6.5 Red Hat-a-like distro (and any future 6.x releases, come to that).

Another feature of this release is that the ‘speedkeys’ parameters have been altered so that they assume the use of version 6.5 of the relevant distro. That is, if you build your RAC nodes by using a bootstrap line that reads something like ks=…, then you’ll be assumed to be using a 6.5 distro and the source OS for the new server will be assumed to reside in a <distro>/65 directory.

If you want to continue using 6.4 or 6.3 versions, of course, you can still spell that out (ks=…). You just can’t use speedkeys to do it.

An equivalent update to Salisbury has also just been released.

Oracle 12c and NFS

Here’s a little something that can trip you up if you’re not expecting it. The standard NFS exports options (the ones used by, for example, Salisbury) expect to handle I/O requests on ports lower than 1024. However, the new Oracle 12c defaults to using Direct NFS -which uses ports above 1024.

The result is that if you are using the Oracle Universal Installer to create a starter database on an NFS mount, by default, the thing will fail with nasty-looking ORA-17500 and ORA-17503 errors (the message text will suggest that it’s not able to open various files).

Happily, the fix is to add the insecure option to the end of your various export options in the /etc/exports file on the NFS server itself. The next release of Salisbury will be doing this automatically. Secure NFS is obviously there for a reason, but when it trips up your laboratory-only 12c RAC installs, “insecuring” it is the kindest option!
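For reference, an /etc/exports entry with the option added looks like the following (the share names and the other export options here are illustrative; keep whatever options you already use and just append insecure):

```shell
# /etc/exports on the NFS server -- 'insecure' permits client source ports
# above 1024, which Direct NFS uses (paths and other options illustrative):
/griddata  *(rw,sync,no_root_squash,insecure)
/dbdata    *(rw,sync,no_root_squash,insecure)
```

After editing the file, `exportfs -ra` on the NFS server re-reads it without needing a service restart.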

Salisbury Fun and Games

Salisbury isn’t particularly clever in the way that it manages to combine an Oracle installation with an Operating System installation: the “magic” is in these few lines of code:

echo "#!/bin/bash" > /home/oracle/
echo "/osource/database/runInstaller -waitforcompletion -ignoreSysPrereqs -ignorePrereq -responseFile /osource/standalonedb.rsp" >> /home/oracle/
chmod 775 /home/oracle/

su oracle -c "/home/oracle/"

That’s to say, it creates a little shell script that, when called, runs Oracle’s runInstaller with a bunch of switches. And then it calls it. Not exactly difficult.

Except that it doesn’t work for 12c.

It starts well enough, but then just stops working, for no apparent reason:

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 13296 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 4095 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-07_08-32-58AM. Please wait ...
[[email protected] ~]$

…and that’s the last that’s ever heard from it, for it seemingly just dies shortly afterwards.

If you let the O/S installation finish and then execute exactly the same shell script, though, it works perfectly. So it’s clearly not a syntactical thing: the same commands work post-O/S installation but fail during it. My best guess is that it’s a runlevel thing. The Database Configuration Assistant (dbca) has long complained about needing to be in a certain runlevel before it can work; now it seems that the OUI feels the same way, though 11g’s OUI never did.

Anyway, as a result of this change in behaviour by Oracle’s software, I’ve had to rejig Salisbury quite a bit so that it doesn’t try launching the Oracle installation during the O/S install. Instead, it merely creates a set of scripts -which are then executed on first reboot. The O/S installation phase takes a lot less time than before, of course; the time taken to complete the first reboot commensurately shoots through the roof! But at least it all works, for both 11g and 12c.
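The ‘run on first reboot’ mechanism is a common pattern; in outline (the file names here are my illustration, not Salisbury’s actual ones) it’s a run-once hook of this sort:

```shell
# Sketch of a run-once first-boot hook: an rc.local-style script checks
# for a marker script, runs it, then deletes it so it never runs again.
# (Written to /tmp here for demonstration; the real hook lives in /etc.)
cat > /tmp/rc.local.demo <<'EOF'
#!/bin/bash
if [ -x /root/firstboot.sh ]; then
    /root/firstboot.sh && rm -f /root/firstboot.sh
fi
EOF
chmod +x /tmp/rc.local.demo
```

The delete-after-running step is what keeps the expensive Oracle installation from firing on every subsequent boot.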

So now, as a result of this rejigging, you can press ESC during the first reboot and see this sort of thing:

You will still need to manually invoke the shell script (as the oracle user) to have a single-instance database created post-install, however.

So, it’s all working as I’d expected, but I have now to test on all the other distros, make sure I haven’t accidentally broken anything… and that it also still works for creating RAC+Data Guard setups. I’ll have the 12c-enabled version of Salisbury uploaded just as soon as all that testing is completed. Watch this space…

No more disk re-initialization

It has long bugged me that my Kickstart scripts will quite happily build an entire virtual machine without you having to lift a finger …but not if it’s a virtual machine that’s using brand new virtual hard disks. If you’re installing onto virgin hard disks, you’ll likely get prompted with something like this:

Today, it annoyed me enough that I actually decided to do something about it. The fix turns out to be a simple one-word addition to your Kickstart script: zerombr. Stick that above the ‘clearpart’ line which actually partitions your hard drive, and it will have the effect of auto-initializing any drive it needs to.

In Salisbury Kickstart files, for example, you’ll currently find this code:

clearpart --all
part / --fstype=ext4 --size 20000 --grow
part swap --size 1024

…which means “clear all partitions, then create a root partition of at least 20GB, and a swap partition of 1GB”. This works fine unless there are no readable partitions to clear (such as when your disk has never previously been used). So the new code will read:

zerombr
clearpart --all
part / --fstype=ext4 --size 20000 --grow
part swap --size 1024

…and that means your Salisbury servers can now be built truly and completely without manual intervention, after the first bootstrap line has been typed.

The code change hasn’t made its way into the Salisbury ISO as yet: there are a couple of other changes I’ve wanted to make to be wrapped up first. But it will be there soon.

It would be a shame if something happened to it…

I have finally gotten around to documenting the Salisbury approach to building an Active Data Guard set-up (that is, 2-node RAC replicating to a 2-node RAC, with the standby in open read-only mode), thereby protecting your data from anything that might unfortunately befall your production RAC.

The article is here.

The article concludes with ARCH doing the log shipping, which isn’t actually the best way of going about things, though it does achieve a high-availability objective. I’ll follow up shortly with altering protection modes and configuring data guard broker… but the article was so long as it stands that I felt compelled to relegate those subjects to follow-up articles rather than the main billing itself.

Keen eyes will note that the screenshots in the latest article are distinctly different from those in the build-a-2-node-RAC one: it’s what happens when Fedora is wiped from your laptop and Windows 8 replaces it part-way through!

Hyper-V Hundone

Oh well, that’s that then. Had to remove Hyper-V from my desktop today, because it seems incapable of running a 4-node RAC without crashing one or more nodes at random. Ah, you say: that could just be because Oracle software is flakey and RAC is as stable as a pile of teflon-coated jellies at the best of times. To which I retort merely that the 4 nodes seem to have no problem staying up and working as expected on exactly the same PC when they are run as VMware Workstation VMs.

Bit of a shame: I had hoped to be a fan of Hyper-V. But I need to finish off my 2-node RAC + 2-node Standby before the mid-year Solstice and sticking with Hyper-V isn’t going to get me there. I wish I could provide eloquent diagnostics and explanations. But stuff it: uninstalling it just seems a whole lot simpler to me.

Salisbury, plain

Way back in October last year, I announced that I wouldn’t be developing my Gladstone pre-installation script for Oracle any further, although the script itself would remain available (and it still is).

Back then, I promised a “son of Gladstone” replacement, “soon enough”. Little did I think it would take me six months to honour that promise! Such is life, I fear…

But Gladstone’s successor is now here… and, in keeping with (near) historical fact, that successor is to be called “Salisbury”. (That’s him on the right, looking suitably Victorian and bushy-bearded).

So, what exactly is Salisbury and how does he work?

Well, it’s a slight extension of the work I’ve been documenting in the previous dozen or so posts here: the idea of using Kickstart to automate the construction, the correct configuration and the Oracle software installation of Oracle servers. Additionally, it’s the use of a Kickstart-built server to supply all necessary network and shared-storage capabilities that Oracle Servers might need -especially if they run RAC.

In terms of tangible ‘product’, Salisbury actually consists of a single ISO download, just 27MB in size.

You use that ISO to kickstart the building of a Salisbury Server -a small server running RCSL that I’ve referred to in previous weeks as a ‘Linux toolbox’. Once your Salisbury Server is up and running, you use it to build your Oracle Servers. Those Oracle servers can run Oracle versions 11201 or 11203, in standalone or RAC mode. If you choose a standalone build, the Salisbury Server will automatically install the Oracle software for you, and create a simple shell script that will create a database when run post-install. If you instead choose to create a RAC-capable server, Salisbury will copy across all necessary software (and get users, groups, kernel parameters and so on correctly configured), but it won’t attempt to install anything automatically (because working out whether all the component parts of a cluster are up and running is a bit tricky!)

I present Salisbury here as, more or less, a fait accompli -but how it works and why are all things I’ve discussed in considerable detail over recent weeks, so if you’ve been following along, there shouldn’t be too many surprises (and if you haven’t, you can always step back to this post, which started it all, and read forward from there). I will try to pull it all together into a single, long article before long, though.

Building a Salisbury Server

The quick version of getting the Salisbury Infrastructure™ to work for you is this:

  • Build a new server with at least 512MB RAM and 60GB free hard disk space. Ensure it has two DVD drives.
  • Load your distro-of-choice’s full installation disk into the primary drive
  • Load the Salisbury ISO into the secondary drive
  • Boot your server and hit <TAB> at the boot menu. Make your build process Kickstart-defined by adding ks=hd:sr1/salisbury.ks to the bootstrap line.
  • Sit back and let the installation process complete.

Your new Salisbury Server will have only a root user account (password: dizwell). You can change that password with the passwd command, of course.

The Salisbury Server will automatically be a web server, complete with all sorts of useful files and packages which it can distribute to client Oracle Servers. However, the really important stuff is Oracle’s database server software -and, much as I’d like to, licensing restrictions mean I can’t provide that for you. Instead, you have to source that yourself, either from OTN (for free, but only at version 11201), or from edelivery (if you have a proper, paid-for subscription and want to download version 11203 or better).

However you source it, you should obtain both database zips AND the grid zip and copy all three files to the /var/www/html directory of your Salisbury Server (FTP transfer with Filezilla or a similar tool is probably the easiest way of doing that).

In their wisdom, Oracle Corporation saw fit to name their software multiple different ways, depending on how you sourced it and what version you’re dealing with. This is a recipe for Salisbury Confusion™ -but it’s easily avoided by renaming whatever you download in a consistent way, as follows:


Replace the “x” in those names to reflect the actual version in use, of course. There is no flexibility about this: the Oracle software components must end up being named in this way if the Salisbury Server is to be of any future use to you in building Oracle Servers.

By renaming files in this way, it’s perfectly possible to have one Salisbury Server be able to create both versions of Oracle database: just download all 6 files (the 11201 three and the 11203 equivalents), and rename them all according to the above-mentioned scheme. When both versions are possibilities, you’ll be able to specify which one to use for any particular Oracle Server at build time, as I’ll explain shortly.

So, after building your Salisbury Server, you just have to copy Oracle software to it (and rename it as appropriate), just once. After that, it’s ready for duty.

Note that the Salisbury Server build involves copying its own installation media to disk. If you build your Server using OEL 6.3, for example, then a /var/www/html/oel/63 directory will be created and populated on it automatically. Such a server can then only help build other OEL 6.3 servers. If you want to be able to build CentOS or Scientific Linux Oracle Servers, maybe mixing up versions 6.3 and 6.4 as the mood takes you, you can do that provided you create /var/www/html/centos/63, /var/www/html/sl/64 and similar directories yourself. The directory names have to be of the form /centos, /sl or /oel and the version directories have to be either /63 or /64. After creating any additional directories in this way, you can then simply copy over the contents of the full install media for that distro/version combination. Make sure you use the full installation media, not the “Live CD” versions. There is, however, no need to copy the second DVD into the directories where one is available: disk 1 will suffice.
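In shell terms, adding (say) CentOS 6.3 and Scientific Linux 6.4 build capability comes down to little more than the following (I use /tmp here so as not to assume your web root; on a real Salisbury Server the root is /var/www/html):

```shell
# Create extra distro/version directories of the required form, then copy
# the contents of the relevant full install DVD (disk 1) into each.
webroot=/tmp/www/html          # substitute /var/www/html on a real server
mkdir -p "$webroot/centos/63" "$webroot/sl/64"
find "$webroot" -mindepth 2 -type d | sort
```

After the directories exist, copying in the install media contents is the only remaining step; the names just have to match the /centos, /sl or /oel and /63 or /64 scheme exactly.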

Build an Oracle Server with Salisbury

Once your Salisbury Server is up and running, you can use it to construct new Oracle servers. An Oracle Server must have at least 2GB RAM and 25GB of free hard disk space.

You boot a new Oracle Server with the netinstall boot disk of your distro of choice. At the boot menu, you invoke the Kickstart process by pressing <TAB> and then adding a ks=<URL> string to the bootstrap line. That <URL> element will be formed from the following elements:

  • Salisbury Server’s IP address
  • kickstart.php filename
  • seven possible URL variables
  • Two possible Kickstart parameters

It is assumed that your Salisbury Server has an IP address of (if not, you’ll have to edit various files on the Salisbury Server itself).

The Kickstart filename is simply kickstart.php

The seven possible URL variables are:

  • distro (one of either centos, sl or oel)
  • version (one of either 63 or 64)
  • hostname (pretty much anything you like, so long as it’s a valid host name)
  • domain (pretty much anything you like, so long as it works as a domain name)
  • rac (one of either y or n, depending on whether you expect to be running a RAC or standalone database on the finished server)
  • ip (the IP address of the server, in a.b.c.d form)
  • ic (the IP address of the cluster interconnect, in a.b.c.d form, assuming one exists)

The two possible Kickstart parameters are:

  • ksdevice=<name of network interface to use initially, if there are 2 or more network cards present, such as eth0 or eth1>
  • oraver=<11201 or 11203, depending on which version of the Oracle software you want to use; can also be set to none to mean ‘don’t copy any Oracle software at all’… useful for second and subsequent nodes of a cluster>

You must supply a distro and version, but if you miss out any of the other parameters or variables, defaults will kick in. If you fail to supply an “oraver”, for example, a default version will be assumed; if you don’t say whether “rac” should be ‘y’ or ‘n’, a standalone, non-RAC installation will ensue; and so on.

At a minimum, therefore, you will initiate your Oracle Server build by typing something like the following at the bootstrap line:


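As a sketch only (the server address shown here, 192.168.1.1, is purely hypothetical; substitute your own Salisbury Server’s actual IP address), a minimal bootstrap line supplying just the two mandatory variables might look like:

```text
ks=http://192.168.1.1/kickstart.php?distro=centos&version=64
```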
A complete bootstrap line, leaving nothing to chance, would look more like this:

ks= ksdevice=eth0 oraver=11203

Notice that the URL variables are supplied as one continuous string, introduced by a “?”, separated by “&” characters and containing no spaces. The Kickstart parameters, however, are supplied as space-separated keyword/value pairs at the end of the URL.
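If it helps to see the URL-assembly rule expressed programmatically, here is a small sketch that builds a kickstart URL from the seven variables. The server IP address and every variable value in it are hypothetical examples, not Salisbury defaults:

```python
# Sketch: assemble a Salisbury-style kickstart URL.
# The server IP (192.168.1.1) and all variable values are hypothetical.
from urllib.parse import urlencode

def kickstart_url(server_ip, **variables):
    """Build the ks=<URL> value: the variables form one continuous
    query string, introduced by '?', separated by '&', with no spaces."""
    query = urlencode(variables)  # joins key=value pairs with '&'
    return f"http://{server_ip}/kickstart.php?{query}"

url = kickstart_url("192.168.1.1",
                    distro="centos", version="64",
                    hostname="rac1", domain="dizwell.home",
                    rac="y", ip="192.168.1.101", ic="10.0.0.101")
print(url)
```

The Kickstart parameters (ksdevice, oraver) are deliberately left out of the function: they are appended to the bootstrap line as separate, space-delimited keyword/value pairs, not folded into the URL itself.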

Of course, if you specify variables which imply software choices that your Salisbury Server does not have available to it, you can expect the Oracle Server build to fail. If you say oraver=11203, for example, when you’ve only stored software on the Salisbury Server, then your finished server will have no Oracle software on it at all. If you’ve prepped your Salisbury Server to host all possible distro and Oracle versions, though, then you can specify any of the available options in whatever combination and expect a completely automated O/S and Oracle software installation accordingly.

Oracle Servers built via Salisbury will end up with a root user (password dizwell) and an oracle user (password oracle). You can change either or both of these passwords after installation, of course.

Non-RAC Oracle Servers will have a version of the Oracle software installed. No database will exist, but a shell script will have been created in the /home/oracle directory. Running that (as the oracle user) will result in the automatic creation of a database called orcl. SYS, SYSTEM and other administrative passwords will be set to oracle, but these can be changed using standard database commands at any time.

RAC Oracle Servers will have no software automatically installed, but an /osource directory will have been created, within which are database and grid directories containing the appropriate unpacked Oracle software. The software is therefore immediately ready for installation, whenever you’re satisfied that the entire cluster is up and running.

All Oracle Servers will be built with mounts of NFS shares made available by the Salisbury Server itself. There are two such mounts: /gdata and /ddata, which correspond to the Salisbury Server’s /griddata and /dbdata directories. Non-RAC Oracle Servers can just ignore the existence of these shares, but RAC Oracle Servers can make use of them during the Grid and Database software installs to store grid and database components on a shared file system. It is assumed that RAC Servers will use their own, local, non-shared file systems for storing the Oracle software components.
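For illustration, the resulting mounts would look something like the following /etc/fstab fragment. The server IP address and the mount options shown are assumptions, not Salisbury’s actual configuration:

```text
# NFS shares exported by the Salisbury Server (IP address hypothetical)
192.168.1.1:/griddata  /gdata  nfs  rw,hard  0 0
192.168.1.1:/dbdata    /ddata  nfs  rw,hard  0 0
```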

Both Salisbury and Oracle Servers can be managed remotely with Webmin (point a browser to it at port 10000). Both can also be monitored at the command line with nmon.

Oracle Servers will have rlwrap capability baked-in, so local SQL*Plus sessions will make use of it to provide a scroll-able command line history (that is, you can hit the up- and down-arrow keys in SQL*Plus to retrieve previously-typed SQL statements). Should anyone have ideas for what other software components would be useful to add to either the Salisbury or Oracle servers (or both), please feel free to drop me a line. If it’s useful and do-able, I’ll do it!
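The usual way rlwrap gets wired into SQL*Plus is via a shell alias; a typical (assumed, not necessarily Salisbury’s exact mechanism) arrangement in the oracle user’s profile would be:

```text
# e.g. in /home/oracle/.bash_profile
alias sqlplus='rlwrap sqlplus'
```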

Note that both the Salisbury and Oracle Server builds are fatal to anything that might already be sitting on the hard disk of the servers involved: Kickstart simply wipes all partitions it finds. Don’t point Salisbury at pre-loved servers that contain vitally-important data, therefore: you will lose it all if you do.


Salisbury is obviously a lot more complicated to describe than Gladstone! In practice, though, you should find it hands-free, highly automatic and, basically, a piece of cake to use.

The complexity arises because it’s an infrastructure, not a script -though it’s an infrastructure that bootstraps itself into existence courtesy of Kickstart scripts.

It depends on several version-dependent components, of course: Kickstart scripts designed for version 6.x RCSL distros won’t work with version 5.x RCSL distros, for example. Similarly, response files that perform perfect Oracle installs blow up spectacularly when confronted with software. I don’t expect Salisbury to cope with the arrival of Red Hat 7 and Oracle 12 without a degree of pain, therefore! I do believe, though, that its underlying techniques and technologies are flexible and extensible enough to cope as the future does its worst.

It’s taken quite some weeks to get it to this state: I hope someone out there finds it as useful as I have!