Have you been paying attention to the rise-and-rise of systemd? First it came for Fedora, and I said nothing, because I didn’t use Fedora (or if I did, I didn’t use it long enough. I’ve swapped OSes about 14 times in the three years since systemd first appeared!). Then it came for Ubuntu (which I’ve never used, so I wouldn’t have said anything about it then, either). And this year it came for Red Hat 7 (and CentOS 7), rendering all my automation scripts useless at a stroke, and I did have a few things to say about it which weren’t terribly complimentary.
Debian also signed up to be absorbed by the systemd borg earlier this year, and last month a bunch of Debian developers decided to fork Debian in response. Whether their proposed “devuan” distro ever makes it beyond concept stage is going to be interesting to watch, but even if it doesn’t, it shows that there’s something of a civil war about systemd. A lot of people (including me, for whatever that’s worth) don’t like it.
Why? Because it started out as a way of initializing Linux (replacing ye ancient System V init, with its myriad shell scripts), but has rapidly morphed into a suite of 69 closely-coupled binaries that do everything from handling logins to detecting new hardware as it’s plugged in and assigning names dynamically to network devices which vary depending on your hardware vendor. It does all its work through variously cryptic or obscure commands and logs its efforts in binary logs (so no awking or grepping them for you!) The lead developers tend to think any bugs discovered are problems in other people’s code, not theirs; which has earned them an expletive-laden dressing down from Linus Torvalds himself in the past.
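To make the binary-log gripe concrete: an old-style syslog file is plain text that any tool can chew on, while the journal has to be rendered by journalctl before you can do anything with it. A small sketch (the log line, file path and unit name are all made up for illustration):

```shell
# Plain-text logging: the log is just a file, so grep/awk work directly on it.
printf 'Nov 22 10:00:01 host sshd[123]: Accepted publickey for fred\n' > /tmp/demo-syslog.log
grep -c 'sshd' /tmp/demo-syslog.log    # prints 1

# The systemd journal is binary, so the rough equivalent has to go via journalctl:
#   journalctl -u sshd --since today --no-pager | grep 'Accepted'
```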
For many, the big issue with systemd is that it wraps so much functionality into itself, it’s morphing into an operating system all on its own that just happens to use Linux as its kernel. The old days of plugging together components from many disparate sources and wrapping them all up as a distro are passing in the pursuit of ‘coherence’ and consistency. This has immediate impacts on other operating systems out there: the systemd developers have explicitly declared they are not going to make systemd portable to OSes like the BSDs, for example. Since Gnome has made systemd an external dependency, too, this makes the desktop environment situation on the BSDs and other non-systemd OSes more problematic than it should be.
The analogy I use is Solaris, where to change IP address you used to edit /etc/hosts, various files in /etc/inet and a dollop of ifconfig to be going on with… and now everything is done by calling ipadm, which does all the text file editing for you in the background. It’s a slicker interface for sure, but I’m not so sure it makes for a nicer experience of Unix, for you feel as if you’ve lost control (and understanding of what’s doing what and why).
The other analogy to use is Windows: by wrapping every core piece of functionality into opaque executables with massive inter-dependencies that log in obscure binary formats, the systemd guys have essentially re-invented svchost and the Microsoft approach to system management!
I suspect Linux will get slicker and more consistent to manage as systemd takes over, but it won’t be the Linux of yore -which, of course, makes me an old fuddy-duddy who’s resistant to change. But I’m not convinced this particular sweeping change is required -and it’s being handled by a bunch of developers who haven’t inspired much confidence in precisely the sort of people whose confidence you need if you’re going to attempt such profound changes to an OS.
It’s made me rethink my choice of operating systems again, anyway: I shall definitely be sticking with Debian on my file servers, but I shan’t be upgrading to Jessie when it’s released in a month or two’s time. On my desktop, I don’t know: currently (and since August) it’s Linux Mint Debian Edition, but LMDE is due to switch to using Jessie sources once they are stable, and at that point, systemd comes to a PC near me unless I take avoidance action. Weirdly, there’s not a lot of choice: with most of the mainstream distros succumbing to systemd, it’s looking more and more as if Windows 8.1 (and eventually 10) will be making an appearance on my hard disk once more. Or maybe I’ll have to buy a Mac, if ToH permits (which is unlikely!)
Watch the systemd space, anyway: there are revolutionary times ahead, and a lot of them won’t be pleasant, I think.
I hesitate to draw comparisons between me and Michelangelo doing the Sistine Chapel, but I am reminded of Rex Harrison (ok, the Pope) forever asking Charlton Heston (ok, Michelangelo): “When will you make an end of it!” And Heston/Michelangelo tartly replying, “When I’m done!”
And that completes the Churchill framework. Use it to the full and you end up with two two-node clusters, one of which becomes a primary 2-node RAC, leaving the other to become a 2-node RAC running a standby copy of the primary; one network services server and its backup (Churchill and Attlee), and a beefy Enterprise Manager server to keep an eye on everything else (Wilson).
It’s quite a nice environment in which to try out things like patching, failover and switchover, configuring Cloud Control to send meaningful alerts, and so on. Happily, I’ve used 22.214.171.124, 126.96.36.199 and 188.8.131.52 with equally good results, and on top of CentOS 6.5 and 6.6, with an OEL 6.5 in there somewhere too.
My main desktop runs this complete infrastructure very nicely; my Toshiba (16GB RAM, 1.5TB solid state HDD) runs it just as nicely, though I have to dial down the memory numbers for my database VMs. My poor, ageing HP Folio 13 (8GB RAM, 256GB Solid State) has no problem with a 2-node RAC and an Enterprise Manager, but getting it to do an additional 2 nodes for Active Data Guard practice is pushing it a bit. Maybe I should buy more solid-state storage for it?! There’s a thought…
It’s everything I’ve ever wanted in a ‘build Oracle properly, routinely, accurately, automatically’ tool.
As the Ferryman in Benjamin Britten’s Opera “Curlew River” puts it, “Today is an important day”.
For today would have been Britten’s 101st birthday. Exactly one year ago today, I was settling down at the back of the Maltings Concert Hall, Snape for the Centenary concert (and a good one it was, too!) Twenty-six years ago, I was settling down in my seat at the Wigmore Hall for his 75th anniversary concert. And thus it has often been for more than half my life: today is spent playing pretty much nothing but Britten from dawn to dusk, and we pray that ToH thinks to do the vacuuming tomorrow rather than today!
Birthdays are for giving, of course (as I constantly have to remind ToH!) In this case, I’ve decided to release version 1.3 of Churchill, which has now been tested for 184.108.40.206, 220.127.116.11 and 18.104.22.168, for standalone, RAC, RAC+Data Guard and 12c Cloud Control installations. I’ve also taken the opportunity to tidy things up a lot, so necessary files are housed more appropriately, rather than all being plonked into a single directory. There are some more documentation issues that arise as a result of the clean-up, but those are relatively minor and should be done by tomorrow. Assuming I am not made to do the vacuuming tomorrow as penance…
Update 25th November: Beware of birthday gifts bought in a hurry! The new 1.3 ISO of Churchill was missing a key file (the ksh RPM), without which all attempts to run the root scripts at the end of a Grid Infrastructure install would fail. Oops. Now corrected (without incrementing the version number again: call it “1.3 Update 1” if you like… Microsoft can be such an inspiration!).
As promised, Salisbury and Asquith have been “retired” and have accordingly disappeared from the front page. They can still be reached from the Downloads page, though, should anyone still need them.
Churchill is now very nearly completely documented and replaces both. The only thing still missing is the description of how to create a Wilson server to act as an Enterprise Manager Cloud Control, and that should be finished by the end of the week.
I’ve also set up my own “ownCloud” hosting service and am hosting the Churchill ISO from there rather than from Dropbox. I think it’s all working and the necessary files are available to all, but if you run into any problems, just drop me a line and I’ll get it sorted ASAP.
I’ve been running with dual 24" monitors for many years, but decided recently that a single 27" might be more useful to me. So I ordered the monitor you see on my left (Acer, B276HL) from Auspcmarket.com.au.
They delivered it a day late and, as a result, I didn’t get it home until about 5 days later. Having excitedly unpacked it, I then discovered this:
At first, I thought it was a chunk of glue sticking to the screen. But a moment of trying to scratch it off having proved fruitless, I took a closer look. It then appeared to be more of an indent you’d cause by hitting the screen with a hammer: all the pixels under the ‘dent’ worked, but displayed the wrong sort of colour.
Anyway: no worries. I was very disappointed, and it quite spoiled my evening. But I’ve been doing business with AusPCMarkets since 2007 or thereabouts, at a rate of about $1500 a year. My UPS, 4TB NAS hard disks, a couple of PCs and God-knows what else have all come from them. They’re a bit pricier than other suppliers, but delivery in the central business district is free and their quality is good. So: make a note on their website that I want to return the monitor and get a replacement and all will be well, right?
Well, no, as it turns out. First email reply: “First step is to contact Acer on 1300 723 926 to obtain an Acer NCC reference number.”
Actually, that’s in breach of New South Wales consumer law, which clearly states that “The retailer cannot refuse to help you by sending you to the manufacturer or importer”. (See here for details!)
So I pointed that minor detail out to them and explained that my relationship was between me and AusPC, not with Acer. Would AusPC please authorise a return? And they indeed did so and sent me a returns form to fill in. Problem solved!
Except the next day, I got an email saying, “The Acer people are asking for the following information so that they can organise Pick-up. (1). SNID number located on the unit. (2). Full contact name, phone number and pickup address of where the unit is located for Acer to pickup.”
So I again replied, “I don’t deal with Acer. And I will not return my monitor to Acer. Please reply acknowledging the same”.
And their reply, in full, was: “Thanks for your quick reply.”
I kind of knew we were on a downward spiral at this point, but persisted with “And your quick reply hasn’t acknowledged what I asked you to. Are you going to accept that the monitor will be returned to you or not? Can I please have a yes/no answer?”.
To which they replied, “Awaiting your call for AusPC Pick-up.”.
So that seemed to be a ‘yes’ and AusPC was now undertaking to do the pickup. Things were back on track, and the monitor was indeed picked up that afternoon. Excellent.
Until this morning, when I emailed an innocent question: “Did it arrive with you OK?”. And they replied, “the AusPC Driver picked up the monitor. The monitor is currently with the supplier.” Again with the supplier nonsense! I asked when I’d be getting my replacement part, and their reply was “Your monitor will have to be inspected and approved for replacement by a qualified Technician from Acer. Once a manufacturing fault has been substantiated and physical damage ruled out – AusPC will promptly despatch a new replacement.”
Or, to put it in plain English, ‘Although we were the suppliers of defective goods to you, we are waiting to see what the manufacturer says before we offer you a replacement’. Which is not what New South Wales consumer law says they are allowed to do.
At this point, I informed them I wanted to cancel the sale altogether and wanted a refund. They said, “only when the Acer technicians agree it’s a manufacturing issue” (illegal). And I replied that, given I was without a monitor, I was within my rights to just ring the credit card company and have the entire transaction disputed, resulting in a near-instantaneous refund to me. Their reply came about 3 minutes later: “The Supplier has just informed me that Acer has approved the Return to be processed”.
So, I get my refund, and the transaction is as if it had never been. And I get to spend my $360 somewhere else. No harm done, right?
Wrong. AusPCMarket are, in my view, in flagrant breach of NSW consumer law. They’re even in breach of the terms posted on their own website, which talk eloquently about “For parts tested to have manufacturing defects a short period after their invoice date (DOA), we will replace them from our stock where possible” and “Customers must arrange to return the goods to us”. No hint at all there that “customers will be asked to arrange to return the goods to whoever we think they should be returned to” or “For parts tested BY THE MANUFACTURER to have defects…”.
When preparing my inevitable complaint to the NSW Fair Trading officers, I happened to notice that not once, in an exchange of over 12 emails, did they mention ‘sorry’ or ‘apology’ or ‘shame about the inconvenience’. Never mind their flagrant breach of NSW consumer law: these people are shysters of the first order.
Remember people: if Bloggs and Co sell you faulty products, no matter how innocently, then your course of complaint and restitution lies with Bloggs and Co, not with whomever Bloggs and Co do business. This is true in NSW and England (Sale of Goods Act), at least.
And if it’s not true in your particular jurisdiction, then all I can say is: make sure you don’t try doing business with AusPCMarket. Responsibility-dodging bastards that they are.
CentOS released the 6th update to their version 6 distro at the end of October, just two weeks on from Red Hat’s original release. Clearly, the new(ish) relationship between Red Hat and CentOS is paying dividends.
The usual round of framework and documentation updates now follows at chez Dizwell, of course. Wouldn’t want my Churchill articles to suggest that 6.5 is the latest version it works on, for example!
Suppose that about six weeks ago you, as a proactive kind of DBA, had noticed that your 2TB database was running at about 80% disk usage and had accordingly asked the SysAdmin to provision an additional 2TB slice of the SAN so that you could then add a new ASM disk to your database.
Imagine that the SysAdmin had provisioned as requested, and you as the DBA had applied the change in the form of adding a new ASM disk to your production instance -and that, in consequence, you’d been running at a much healthier 50% disk usage ever since. You’d probably feel pretty good at having been so proactive and helpful in avoiding space problems, right?
Suppose that weeks pass and it is now late October…
Now imagine that for some reason or other that made sense at the time, you kick off a new metadata-only Data Pump export which, workplace distractions being commonplace, you lose sight of, until 6 hours after you started it, you get told there’s a Sev 1 because the non-ASM, standard file system to which your data_pump_dir points has hit 100% usage and there’s no more free space. Foolish DBA!
But no matter, right? You just kill the export job, clear up the relevant hard disk… suddenly the OS is happy there’s space once more on its main hard disk.
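The clean-up itself is mundane, but worth sketching; the directory and file names below are invented stand-ins for wherever your DATA_PUMP_DIR points (and you’d kill the Data Pump job before deleting its files):

```shell
# Simulate a dump directory filled by an abandoned metadata export (paths illustrative).
dp=/tmp/demo_dpump
mkdir -p "$dp"
dd if=/dev/zero of="$dp/expdp_meta01.dmp" bs=1024 count=256 2>/dev/null
du -sk "$dp"              # the space the abandoned export is eating
rm -f "$dp"/*.dmp         # job killed first; now reclaim the space
ls "$dp" | wc -l          # prints 0: the OS has its disk back
```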
But pile up the hypotheticals: the OS reports itself happy, but suppose you nevertheless discover that as a result of the space problems caused by your export, none of the nodeapps are listed as running on Node 1 and any attempt to start them with srvctl on that node ends with an error message to the effect that it can’t contact the OHASD/CRSD on that node.
Suppose GV$INSTANCE still returns a count of 2: Node 1 is therefore still up, but no-one can connect to it, because no services are able to run on it. Basically, your Node 1 has kind-of left the building and the only possibility of getting it back, you might reasonably think, would be a whole node reboot. Thank God Node 2 is still up and has no difficulty working alone for a few hours! It’s good enough to cope with the rest of the day’s workload anyway.
So, in this hypothetical house of horrors, suppose that you arrange a scheduled outage in which you will reboot Node 1 and wait for it to come back up as a fully-fledged cluster member once more. It should only be a matter of moments before Node 1 is back to its normal happy state, noticing that the non-ASM disk has loads of space once more, right?
Only, imagine that it doesn’t. Imagine instead that it takes at least 10 minutes to restart and, in fact, it’s still unresponsive at that point and looking like it might take another 10 minutes more. Imagine, indeed, that after another 10 minutes on top of that lot, maybe you look at the ASM alert log for Node 1 and find these entries:
ORA-15032: not all alterations performed
ORA-15040: diskgroup is incomplete
ORA-15042: ASM disk "1" is missing from group number "2"
At this point, hypothetically… you might start adding 2 and 2 together and getting an approximation of 4: for you would know that disk 1 is the new 2TB one that you added to the database way back in September.
But why would that new disk, which has been in daily and heavy use ever since, be posing a problem now, rather than before now? You might start idly wondering whether, potentially, when it was provisioned, it was provisioned incorrectly somehow. This being the first reboot since that time, tonight (for it is now past midnight) is maybe the first opportunity that mis-provisioning has had to reveal itself?
You might at this point very well make a big mental note: on no account reboot node 2, because if it loses the ability to read ASM disks too the entire primary site will have been destroyed.
It would make for an interesting night, wouldn’t it? Especially if the SysAdmin who did the disk provisioning back in September was no longer available for consultation because he was on paternity leave. In New Zealand.
What might you as the DBA do about this state of affairs? Apart from panic, I mean?!
Well, first I think you might very well get your manager to call the SysAdmin and get him off paternity leave in a hurry -and he might take a quick look over the disks and confirm that he’d partitioned the disk back in September to start from cylinder 0… which is, er… a big no-no.
It is, in fact, perhaps the biggest no-no you can do when provisioning disk space for Oracle ASM. This is because doing so means your physical partition table starts at cylinder 0… but, unfortunately, Oracle’s ASM-specific information gets written at the beginning of the disk you give it, so it over-writes the partition table information with its own ASM-specific data. When ASM data replaces disk partition data… you don’t have any disk partitions anymore. Though you won’t know about it yet, because the disk partition information was read into memory at the time the disk was added and has thus been readable ever since.
To stop that happening, you’re supposed to make sure you start your partitions at something other than cylinder 0. Then Solaris can write partition info literally at cylinder 0, and Oracle’s ASM data can start… NOT at cylinder 0!
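It’s easy to demonstrate the overwrite with an ordinary file standing in for the raw disk. Everything below is simulated: ‘VTOC’ is just Solaris’s name for its partition label, and the strings are placeholders:

```shell
# A 1 MiB file stands in for the disk; its 'partition table' lives at offset 0.
disk=/tmp/fake_disk.img
dd if=/dev/zero of="$disk" bs=1024 count=1024 2>/dev/null
printf 'FAKE-VTOC-LABEL' | dd of="$disk" conv=notrunc 2>/dev/null

# ASM stamps its own metadata at the start of whatever device it is given...
printf 'FAKE-ASM-HEADER' | dd of="$disk" conv=notrunc 2>/dev/null

# ...so the 'partition table' is simply gone:
dd if="$disk" bs=15 count=1 2>/dev/null    # prints FAKE-ASM-HEADER
```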
Apparently, the only operating system that even allows you to add cylinder-0-partitioned disks is Solaris: Oracle on other operating systems spots the potential for disaster and prevents you from adding it in the first place. Tough luck if, in this hypothetical situation, you’re stuck using Solaris, then!
Until you try and re-mount a disk after a reboot, you don’t know the partition table has been destroyed by Oracle’s ASM shenanigans. The partition information is in memory and the operating system is therefore happy. You can run like this forever… until you reboot the affected server, at which point the ASM over-write of the disk partition information proves fatal.
The second thing you might do is raise a severity 1 SR with Oracle to see if there’s any possible way of fixing the partition table on this disk without destroying its ASM-ness. However, Oracle support being what it is, chances are good that they will simply hum-and-haw and make dark noises about checking your backups. (Have you ever restored a 2TB database from tape? I imagine it might take one or two days… or weeks…)
So then you might start thinking: we have a Data Guard set up. Let’s declare a disaster, switch over to the secondary site, and thus free up the primary’s disks for being re-partitioned correctly. And at this point, hypothetically of course, you might then realise that when we added a disk to the ASM groups back in September on primary… er… we probably also did exactly the same on the standby!
This means (or would mean, because this is just hypothetical, right?!) that our disaster recovery site would be just as vulnerable to an inadvertent reboot or power outage as our primary is. And then you’d probably get the sysadmin who’s been contacted by phone to check the standby site and confirm your worst suspicions: the standby site is vulnerable.
At this point, you would have a single primary node running, provided it didn’t reboot for any reason. And a Data Guard site running, so long as it didn’t need to reboot. That warm glow of ‘my data is protected’ you would have been feeling about 12 hours ago would have long since disappeared.
Hypothetically speaking, you’ve just broken your primary and the disaster recovery site you were relying on to get you out of that fix is itself one power failure away from total loss. In which case, your multi-terabyte database that runs the entire city’s public transport system would cease to exist for at least several days whilst a restore from tape took place.
If only they had decided to use ‘normal redundancy’ on their ASM disk groups! For then you would be able to drop the bad disk forcibly and know that other copies of data stored on the remaining good disks would suffice. But alas, they (hypothetically) had adopted external redundancy, for it runs on a SAN and SANs never go wrong…
At this point, you’ve been up in the wee small hours of the night for over 12 hours, but you might nevertheless come up with a cunning plan: use the fact that node 2 is still up (just!) and get it to add a new, good disk to the disk group and re-balance. The data is distributed off the badly-configured disk onto the new one (which you’ve made triply sure was not partitioned at cylinder 0!)
You could then drop the badly-configured disk, using standard ASM ‘drop disk’ commands. The data would then be moved off the bad disks onto the good ones. You could then remove the bad disk from the ASM array and your Data Guard site would, at least, be protected from complete failure once more.
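Spelled out, that rescue amounts to three statements fed to the surviving node’s ASM instance. The disk group, device path and disk names below are illustrative, not the real ones from this (entirely hypothetical) site:

```shell
# Print the SQL you would feed to the +ASM instance on the surviving node.
cat <<'SQL'
ALTER DISKGROUP data ADD DISK '/dev/rdsk/c3t5d0s6' NAME good_disk;
-- wait until V$ASM_OPERATION shows the rebalance has finished, then:
ALTER DISKGROUP data DROP DISK bad_disk;
-- the drop triggers a second rebalance; detach the old disk only once it completes
SQL
```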
Of course, Oracle support might tell you that it won’t work, because you can’t drop a disk from a disk group that uses external redundancy… because they seem to have forgotten that the second node is still running. And you’ve certainly never tried this before, so you’re basically testing really critical stuff out on your production standby site first. But what choice do you have, realistically?!
So, hypothetically of course, you’d do it. You’d add a disk and wait for a rebalance to complete (noticing along the way that ASM’s ability to predict when a rebalance operation will finish is pretty hopeless: if it tells you 25 minutes, it means an hour and a half). Then you’d drop a disk and wait for another rebalance to complete. And then you’d reboot one of the Data Guard nodes… and when it failed to come back up, you might slump into the Slough of Despond and declare failure. Managers being by this time very supportive, they might propose abandoning in-house efforts to achieve a fix and calling in Oracle technical staff for on-site help. And that decision having been taken in an emergency meeting, you might idly re-glance at your Data Guard site and discover that not only is the +ASM1 instance up and running after all, but so is database instance #1. It had actually all come up fine; you had simply lacked the patience to wait for it to sort itself out and had declared failure prematurely. Impatient DBA!
Flushed with the (eventual) success of getting the Data Guard site running on all-known-good-disks, you might want to hurry up and get the primary site repaired in like manner. Only this is a production environment under heavy change management control, so you’ll likely be told it can only be fiddled with at 11pm. So you would be looking at having worked 45 hours non-stop before the fix is in.
Nevertheless, hypothetically, you might manage to stay up until 11pm, perform the same add/rebalance/drop/rebalance/reboot trick on the primary’s node 2… and, at around 3am, discover yourself the proud owner of a fully-functioning 2-node RAC cluster once again.
(The point being here that Node 2 on the primary was never rebooted, though that reboot had been scheduled to happen and the SysAdmin sometimes reboots both nodes at the same time, to ‘speed things up’ a bit! Had it been rebooted, it too would have failed to come back up and the entire primary site would have been lost, requiring a failover from the now-re-protected standby. But since Node 2 is still available, it can still do ASM re-structuring, using the ‘add-good-disk; rebalance; drop bad-disk; rebalance’ technique.)
There might be a little bit of pride at having been able to calmly and methodically work out a solution to a problem that seemed initially intractable. A bit of pleasure that you managed to save a database from having to be restored from tape (with an associated outage measured in days that would have cost the company millions). There might even be a bit of relief that it wasn’t you letting an export consume too much disk space that was the root cause, but a sysadmin partitioning a disk incorrectly weeks ago.
It would make for an interesting couple of days, I think. If it was not, obviously and entirely, hypothetical. Wouldn’t it??!
Salisbury and Asquith, my ‘frameworks’ for automated, nearly-hands-free building of Oracle servers, are retiring. Which is to say, I’m not going to maintain them any more.
My attempts over the years to persuade my System Admin colleagues at work that RAC via NFS (as Salisbury uses) might be a good idea have all fallen on deaf ears, Kevin Closson’s fine articles on the subject notwithstanding. So Salisbury became a bit of a dead end after that, which is why I cooked up Asquith. Asquith uses real iSCSI (as real as anything a virtual environment can cook up, anyway!) and layers ASM on top of that and thus provided me with a playground that much more faithfully reflects what we do in our production environment.
But it’s a pain having two frameworks doing pretty much the same job. So now I’m phasing them out and replacing them with Churchill. The Churchill framework uses NFS (because it’s much easier to automate the configuration of that than it is of iSCSI), but it then creates fake hard disks in the NFS shares and layers ASM on top of the fake hard disks. So you end up with a RAC that uses ASM, but without the convoluted configuration previously needed.
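A hedged sketch of what those ‘fake hard disks’ amount to: zero-filled (here, sparse) files on the share, which ASM is later pointed at as candidate disks. Paths and sizes are illustrative, not Churchill’s actual ones:

```shell
# Create a 100 MiB sparse file on a stand-in for the NFS share, to act as an ASM 'disk'.
share=/tmp/demo_nfs_share
mkdir -p "$share"
dd if=/dev/zero of="$share/asm_disk01.img" bs=1048576 count=0 seek=100 2>/dev/null
stat -c %s "$share/asm_disk01.img"    # prints 104857600 (100 MiB), yet occupies almost no real space
```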
The other thing we do in production at work is split the ownership of the Grid Infrastructure and the Database (I don’t know why they decided to do that: it was before my time. The thing is all administered by one person -me!- anyway, so the split ownership is just an annoyance as far as I’m concerned). Since I’ve been bitten on at least one significant occasion where mono- and dual-ownership models do things differently, I thought I might as well make Churchill dual-ownership aware. You don’t have to do it that way: Churchill will let you build a RAC with everything owned by ‘oracle’ if you like. But split ownership is the default, so you end up with ‘grid’ and ‘oracle’ users owning different bits of the cluster unless you demand otherwise.
Other minor changes from Asquith/Salisbury: Churchill doesn’t do Oracle 22.214.171.124 installations, since that version’s now well past support. You can build Churchill infrastructures with 126.96.36.199, 188.8.131.52 and 184.108.40.206. Of those, only the last is freely available from otn.oracle.com.
Additionally, the bootstrap lines have changed a little. You now invoke Alpher/Bethe installations by a reference to “ks.php” instead of “kickstart.php” (I don’t like typing much!). And there’s a new bootstrap parameter: “split=y” or “split=n”. That turns on or off the split ownership model I mentioned earlier. By default, “split” will be “y”.
Finally, I’ve made the whole thing about 5 times smaller than before by the simple expedient of removing the Webmin web-based system administration tool from the ISO download. I thought it was a good idea at the time to include it for Asquith and Salisbury but, in fact, I’ve never subsequently used it and it made the framework ISO downloads much bigger than they needed to be. Cost/benefit wasn’t difficult to do: Webmin is gone (you can always obtain it yourself and add it to your servers by hand, of course).
The thing works and is ready for download right now. However, it will take me quite some time to write up the various articles and so on, so bear with me on that score. All the documentation, as it gets written, will be accessible from here.
The short version, though, is that you can build a 2-node RAC and a 2-node Active Data Guard setup with six basic commands:
ks=hd:sr1/churchill.ks (to build the Churchill Server)
With Churchill and the rest of the crew, I can now build a pretty faithful replica of my production environment in around 2 hours. Not bad.
Salisbury and Asquith will remain available from the front page until the Churchill documentation is complete; after that, they’ll disappear from the front page but remain available for download from the Downloads page, should anyone still want them.
My family had an ancient upright piano in the front room of our terraced house in Kent. I guess most people did back then, though even by the 1960s, I’d say it was becoming a little old-fashioned to do so. I distinctly remember as a three year-old suffering from insomnia and sneaking down to the front room at the earliest possible opportunity and banging away on the keyboard, waking the house in the process (and the next door neighbours too, I’ve no doubt). Some parents might have seen in this keenness to play some hints of musical genius and thus encouraged it by all means possible. Benjamin Britten’s mum did, for example, so that he was writing symphonies and tone poems aged four and five.
My father was cut from somewhat different cloth, however, and decided instead to sell the piano to Dr. Shaw, the family GP. Thus was my music career abruptly terminated, aged 3.
Well, I’ve finally wrought the revenge I’d been planning for 47 years:
It is an el-cheapo, second-hand job from a music teacher up in Gosford (so it’s seen a few ham-fisted students, I expect). But it looks the part, sounds OK, and finally gives me something I can learn on. Two hours a month of paid lessons coming up shortly. Apparently, shoes will be compulsory, too.
My piano ambitions remain modest: a rollicking rendition of “Roll Out the Barrel” will do me nicely. I have a deal with ToH, though: the day I manage to play (most of) Rachmaninov’s third piano concerto is the day we go out and buy a nice, concert-grade harpsichord.
Funnily enough, the few exercises I’ve already been practising on the new instrument have helped my computer keyboard work immensely. My left hand has been sitting idle all these years and I never really noticed. I think all DBAs should probably learn the piano in consequence.
I’ve decided: There will be no Asquith 2.0 that runs on Red Hat/CentOS 7.
There are a lot of stumbling blocks, some of which I’ve documented here recently -including things like iSCSI target configurations no longer being easily scriptable, the use of systemd and the use of dynamic names for network devices. No doubt, all of these problems will be resolved over time by the upstream developers, but they currently make it practically impossible to construct a highly-automated, self-building Asquith framework. (Salisbury, needing only NFS, is a much better proposition, but even there the network device naming issue presents automation difficulties).
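As it happens, the device-naming problem at least has a well-known off-switch on EL7: add two parameters to the kernel command line (in /etc/default/grub, then regenerate the grub config and reboot) and the old ethN names come back. A sketch showing only the two relevant parameters, to be merged into whatever is already on that line:

```
# /etc/default/grub: append to the existing GRUB_CMDLINE_LINUX value
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
```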
Since Red Hat 6.x is supported until 2020, I’ll pass on Red Hat 7 and its assorted clones. I rather imagine quite a lot of real-life enterprises might do the same!