Fedora ZFS

Further to my recent post, the good news is that the ZFS team have released a version of ZFS which will work on the latest Fedora.

The 0.6.5.9 version supports kernels up to 4.10 (which is handy, as that’s the next kernel release my Fedora 25 will be upgraded to, so there’s some future-proofing going on there at last). And I can confirm that ZFS does, indeed, now install correctly.

I’m still a bit dubious about ZFS on a fast-moving distro such as Fedora, though, because just a single kernel update could potentially render your data inaccessible until the good folk at the ZFS development team decide to release new kernel drivers to match. But the situation is at least better than it was.
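
If you do decide to risk it, it’s worth checking what kernels the ZFS module has actually been built for before accepting a kernel update. A minimal sketch, assuming ZFS was installed via DKMS and that the dnf versionlock plugin is available:

# Which kernel is running, and which kernels has the zfs module been built for?
uname -r
dkms status zfs

# If the incoming kernel isn't yet supported, pin the kernel packages
# until the ZFS drivers catch up (requires the dnf versionlock plugin):
sudo dnf versionlock add kernel kernel-core kernel-modules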

With a move to the UK pending, I am disinclined to suddenly start wiping terabytes of hard disk and dabbling with a new file system… but give it a few weeks and who knows?!

Finished

Finished! And only three weeks late!

I’m referring to Churchill 1.7, a major overhaul of the Churchill framework and its accompanying documentation. It now works on RHCSL 6.8, targets 12c only, supports Enterprise Manager Cloud Control 13c, and completely overhauls the bootstrap options available when kickstarting O/S installations.

At some point toward the end of January, it morphed into practically a complete re-write… and I thought seriously about calling it quits and declaring it to be version 2.0. But I’ve stuck with the incremental versioning for now. (I’ve been saving version 2 for when I get round to making it work with RHCSL 7.x distros).

I’m finished in another sense, too: the contract to purchase a house in Nottingham is ready to sign, and it accordingly looks very much as though I’ll be becoming an ex-Aussie (or a re-Englishman, I suppose, depending on your point of view) on or around 6th March. I may not have much time to post here, given the packing, flight-booking, passport-checking, Internet-banking and other shenanigans that now ensue. If I can, I will; otherwise, I’ll be back online toward the end of March, live from Nottingham 🙂

What were the odds?

In modernising Churchill to work with Oracle 12c and the latest 6.x releases of RHCSL, I’ve encountered a bizarre bug (#19476913, if you’re able to check up on it), whereby startup of the cluster stack on a remote node fails if its hostname is longer than (or equal in length to) the local node’s.

That is, if you are running the Grid Infrastructure installer from Alpher (6 characters) and pushing to Bethe (5 characters), then the CRS starts on Bethe just fine: local 6 is greater than remote 5. But if you are running the GI installer on Gamow (5 characters) and pushing to Dalton (6 characters), then the installer’s attempt to restart the CRS on Dalton will fail, since now local 5 is less than remote 6. Alpher/Bethe managed to dodge this bullet, of course, but only by pure luck.
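
If you want to check whether a node pair of your own is at risk, the length comparison is trivial to script; here’s a throwaway sketch (the hostnames are just the examples from above):

# Bug 19476913: CRS startup on the remote node fails when its hostname
# is at least as long as the local node's.
local_node=gamow
remote_node=dalton
if [ ${#remote_node} -ge ${#local_node} ]; then
    echo "At risk: install from the longer-named node, or rename one"
else
    echo "OK: installing from $local_node should work"
fi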

The symptoms are that, during the installation of Grid Infrastructure, all works well until the root scripts are run, at which point (and after a long wait) an error dialog pops up. Poke around in the [Details] of that dialog and you’ll see this:

CRS-2676: Start of 'ora.cssdmonitor' on 'dalton' succeeded 
CRS-2672: Attempting to start 'ora.cssd' on 'dalton' 
CRS-2672: Attempting to start 'ora.diskmon' on 'dalton' 
CRS-2676: Start of 'ora.diskmon' on 'dalton' succeeded 
CRS-2676: Start of 'ora.cssd' on 'dalton' succeeded 
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'dalton' 
CRS-2672: Attempting to start 'ora.ctssd' on 'dalton' 
CRS-2883: Resource 'ora.ctssd' failed during Clusterware stack start. 
CRS-4406: Oracle High Availability Services synchronous start failed. 
CRS-4000: Command Start failed, or completed with errors. 
2017/02/18 10:21:41 CLSRSC-117: Failed to start Oracle Clusterware stack 
Died at /u01/app/12.1.0/grid/crs/install/crsinstall.pm line 914.

The installation log is not much more useful: it just documents everything starting nicely until it fails for no discernible reason when trying to start ora.ctssd.

Take exactly the same two nodes and do the installation from the Dalton node, though, and everything just works. So it’s not, as I first thought it might be, anything to do with networks, firewalls, DNS name resolution or the myriad other things that RAC depends on being ‘right’ before it will work. It’s purely and simply a matter of whether the local node’s name is longer or shorter than the remote node’s!

The problem is fixed in PSU 1 for 12.1.0.2, but it would be inappropriate to mandate that patch’s use in Churchill, since Churchill is supposed to work with the vanilla software available from OTN (I assume my readers lack support contracts, so everything has to work, for free, as-supplied from OTN).

The obvious fix for Churchill, therefore, is to: (a) make the ‘Gamow’ name one character longer (maybe spell it incorrectly as ‘gammow’?); (b) find a ‘D’ name that belongs to a physicist and is four characters or fewer; or (c) change both names, ensuring that the second is shorter than the first.

Largely due to the distinct lack of short-named, D-named physicists, I’ve gone for the (c) option: Churchill 1.7 therefore builds its Data Guard cluster using hosts geiger and dirac. Paul Dirac was an English theoretical physicist, greatly admired by Richard Feynman (which makes him something of a star in these parts), who formulated the relativistic equation of motion for the wave function of the electron. He used his equation to predict the existence of the positron, and of anti-matter in general, work for which he won a share of the 1933 Nobel Prize in Physics. Geiger is a frankly much less distinguished physicist, whose main claim to fame is that he invented (most of) the Geiger counter and wasn’t (apparently) a Nazi. He gets into the Churchill Pantheon by the skin of his initial letter and not much else, to be honest!

Short version then: Churchill 1.7 now uses Alpher/Bethe and Geiger/Dirac clusters, and both Gamow and Dalton are no more. Quite a bit of documentation needs updating to take account of this trivial change! Hopefully, I should have that sorted by the end of the day. And that will teach me to test all parts of Churchill before declaring that ‘it works with 12c’. (Oooops!)

A lick of paint…

Careful readers will note that I’ve re-jigged the look and feel of the place!

I am not entirely sure I like the results as yet, but it’s certainly a bit slicker and (I think) more functional.

If you spot any visual ‘anomalies’ arising, let me know in the comments… and if you can’t stand the new look (or feel it’s the best thing since sliced bread), let me know there, too. These things can always be reversed or tweaked 🙂

Churchill Changes

I’ve just published version 1.7 of Churchill. It contains a lot of changes.

The main one is that I’ve removed a few dependencies on .i686 packages. That means the O/S installations can now all take place off the first DVD alone. No second DVD is prompted for, in other words.
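
If you want to verify that on a freshly kickstarted box, a quick one-liner of my own (not part of Churchill) will list any stragglers:

# List any 32-bit packages that made it onto the system:
rpm -qa --qf '%{NAME}.%{ARCH}\n' | grep '\.i686$' || echo "No i686 packages installed"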

This in turn brings about the biggest single benefit of the new release: it works with CentOS 6.8 (and Scientific Linux 6.8, too).

In fact, Churchill 1.7 now reverts to making CentOS 6.8 the default O/S assumed to be in use. (You can still specify ‘redhat’, ‘sl’ or ‘oel’ if you prefer to use real Red Hat, Scientific Linux or Oracle Enterprise Linux, of course; and you can still specify an earlier distro version if you prefer to stick with, say, 6.3, though I can’t think why you’d particularly want to.)

The other big change is that the bootstrap lines are now trivially easy. Back in 2013 when I first released Churchill, it seemed like a good idea to make it as flexible as possible so that users could specify their own IP addresses and hostnames; but this just made for really lengthy bootstrap lines and confused the heck out of everybody!

So, the simplify brush has been daubed all over Churchill. You must now use the speed keys 1 to 4 as you build your nodes (or you can instead specify their corresponding hostnames). Specifying the speed key or hostname automatically defines all the pesky details of IP address and interconnect IP address. It makes things a lot simpler and less confusing, I think. It also makes things a bit less flexible… but that’s the price you pay for simplicity. Ask the Gnome developers!

Other changes flow from this: the filecopy=y/n parameter is no longer required. If you are building node 1 or 3, file copying is assumed to be ‘yes’; if you are building node 2 or 4, it’s assumed to be ‘no’. Likewise, there’s no dg=y/n parameter any more: if you are building node 1 or 2, it’s assumed to be ‘no’; if you are building node 3 or 4, it’s assumed to be ‘yes’.

It is still possible to say sk=1&rac=n (or hostname=alpher&rac=n), though: that’s if you are building node 1 but want to use it in standalone mode.
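
By way of illustration, a bootstrap line under the new scheme might look something like the following. The kickstart URL and server name are placeholders of my own invention; only the sk, hostname and rac parameters are Churchill’s own:

# Build node 2 (bethe) as a RAC node; filecopy=n and dg=n are now implied:
ks=http://server.example.com/churchill.ks?sk=2

# Build node 1 (alpher) as a standalone, non-RAC server:
ks=http://server.example.com/churchill.ks?hostname=alpher&rac=n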

As with the previous release (1.6), Churchill only works to build 12c standalone and RAC/Data Guard environments: there is no Oracle 11g support these days.

These are substantial changes and mean Churchill now works in ways quite different from before. That obviously affects the way it is documented. Those documentation changes are being made as I write and should be ‘live’ by the time you read this. Since the old versions of Churchill are kept available on the old site, I’ll keep the original documentation available on that old site too, at least for now.

On this site, however, only the 1.7 version will be available and documented.