In the context of this blog, however, his is the name that will be attached to a new way of auto-building Oracle servers, of the standalone, RAC and RAC+Data Guard variety.
Salisbury, of course, has been doing that job for several months now, so why the need for Asquith? Well… Salisbury works fine… but is maybe not very realistic, in the sense that Salisbury’s use of NFS for shared storage has put some people off. So Asquith is effectively the same as Salisbury -except that he uses ASM for his shared storage, not NFS.
In my view, that perhaps makes him a little more ‘realistic’ than the Salisbury approach, but definitely results in a more useful learning environment (because now you can get to play with the delights of ASM disk groups and so forth, which is an important part of managing many production environments these days).
1. Asquith v. Salisbury
Other than his choice of storage, however, Asquith is pretty much identical to Salisbury: an Asquith server, just like a Salisbury server, provides NTP, DNS and other network services to the ‘client servers’, which can be standalone Oracle servers, part of a multi-node RAC or even part of a multi-node, multi-site Data Guard setup. If you’re doing RAC, the shared storage needed by each RAC node is provided by Asquith acting as an iSCSI target. The clients act in their turn as iSCSI initiators.
The only other significant difference between Salisbury and Asquith is that Asquith never auto-builds a database for you, not even in standalone mode. I figured that if you’re going to go to the trouble of using ASM, you’re doing ‘advanced stuff’, and don’t need databases auto-created for you. If automatic-everything is what you’re after, therefore, stick to using Salisbury. For this reason, too, Asquith does not provide an auto-start script for databases: since it uses ASM, it’s assumed you’ll install Oracle’s Grid software -and that provides the Oracle Restart utility which automates database restarts anyway. A home-brew script is therefore neither needed nor desirable.
All-in-all, Asquith is so similar to Salisbury that I’ve decided that the first release of Asquith should be called version 1.04, because that’s the release number of the current version of Salisbury. They will continue to be kept in lock-step for all future releases.
And this hopefully also makes it clear that Asquith doesn’t make Salisbury redundant: both will continue to be developed and updated, and each complements the other. It’s simply a question of which shared storage technology you prefer to use. If you like the simplicity of NFS and traditional-looking file systems, use Salisbury. If you want to learn and get familiar with ASM technology, then use Asquith. Each has its place, in other words, and both are useful.
2. Building an Asquith Server
In true Salisbury fashion, the job of building the Asquith server itself is completely automated, apart from you pointing to the asquith.ks kickstart file when first building it.
Your Asquith server can run OEL 6.x, Scientific Linux 6.x or CentOS 6.x -where x is either 3 or 4. In all cases, only 64-bit OSes are allowed. The Oracle versions it supports, like Salisbury, are 11.2.0.1, 11.2.0.3 or 12.1.0.1. The Asquith server needs a minimum of 60GB disk space, 512MB RAM, one network card and two DVD drives. The O/S installation disk goes in the first one; the Asquith ISO goes in the second.
The server is built by hitting <Tab> when the installation menu appears, and typing this on the bootstrap line:
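The exact device name depends on your hardware, but assuming the Asquith ISO sits in the second optical drive (which typically appears as sr1), the addition to the bootstrap line would be something like:

```
ks=hd:sr1:/asquith.ks
```

This uses the standard Anaconda `ks=hd:<device>:/<path>` syntax to fetch the asquith.ks kickstart file from that second drive; adjust the device name if your virtualisation platform presents the drives differently.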
Once built, you need to copy your Oracle software to the /var/www/html directory of the new Asquith server, using file names of a specific and precise format. Depending on which version you intend to install on other client servers, you need to end up with files called:
You can, of course, have all 10 files present in the same /var/www/html directory if you intend to build a variety of Oracle servers running assorted different Oracle versions.
You can additionally (but entirely optionally) copy extra O/S installation media to the /var/www/html directory if you want future ‘client’ servers to use an O/S different to that used to build Asquith itself. Asquith automatically copies its own installation media to the correct sub-directories off that /var/www/html folder -so if you used CentOS 6.4 to build Asquith, you’ll already have a /var/www/html/centos/64 directory from which clients can pull their installation media. You would need to copy the DVD1 installation media for OEL and Scientific Linux to corresponding “oel/xx” and “sl/xx” sub-directories if you wanted to use all three Red Hat clones for the ‘client’ servers (where ‘xx’ can be either 63 or 64).
3. Building Asquith Clients
When building Asquith clients, you need to boot them with appropriate, locally-attached installation media. The netinstall disks for each distro are suitable, for example. The distro/version you boot with will be the distro/version your Asquith client will end up running. You cannot, for example, boot with a Scientific Linux netinstall disk, point it at Asquith and hope to complete a CentOS kickstart installation. As a consequence, what you boot your clients with must match something you’ve already copied to Asquith in full. If you boot a client with an OEL 6.4 netinstall disk, the DVD 1 media for Oracle Enterprise Linux 6.4 must already have been copied to Asquith’s own /var/www/html/oel/64 directory, in other words.
4. Asquith Bootstrap Parameters
You build an Asquith client by again pressing <Tab> on the boot menu at initial startup and then passing various parameters to the bootstrap line that’s then revealed. All bootstrap lines must start:
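As the complete examples later in this article show, every client bootstrap line begins by pointing at the kickstart.php script on the Asquith server (whose address is 192.168.8.250 throughout this article):

```
ks=http://192.168.8.250/kickstart.php?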
You then add additional parameters as follows:
| Parameter | Compulsory? | Possible Values (case sensitive) |
|-----------|-------------|----------------------------------|
| distro | Yes | centos, oel or sl |
| version | Yes | 63 or 64 |
| hostname | No | any valid name for the server being built |
| domain | No | any valid domain name of which the server is a part |
| rac | No | y. Is this server to be part of a RAC? If so, it will find its shared storage on the Asquith server. If not, no shared storage will be configured (any future database would be stored on the local server’s disk). |
| ip | No | IP of the server (the public IP if a RAC) |
| ic | No | IP of the server’s interconnect (if it’s to be part of a RAC) |
| dg | No | y. Is this server to be part of a Data Guard site? If so, it will find its shared storage on a Rosebery server, not on Asquith. |
The parameters can come in any order, separated by ampersands (i.e., by the & character), and there must be no spaces between them. For example:
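A typical line -the hostname and IP values here are purely illustrative- might therefore be:

```
ks=http://192.168.8.250/kickstart.php?distro=centos&version=64&hostname=godric&domain=dizwell.com&ip=192.168.8.50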
(That example might wrap here, but is in fact typed continuously, without any line breaks or spaces).
Note that “rac=” and “dg=” are mutually exclusive. One causes the built server to use Asquith as its source of shared storage; the other directs the server to use Rosebery for its shared storage (I’ll talk more about Rosebery in Section 7 below). If your Data Guard servers are themselves to be part of a cluster, therefore, you just say “dg=y”, not “rac=y&dg=y”.
After you construct an appropriate bootstrap line, you must additionally add three space-separated Kickstart constants, as follows:
| Constant | Compulsory? | Possible Values |
|----------|-------------|-----------------|
| ksdevice= | No | eth0, eth1 or any other valid name for a network interface |
| oraver= | Yes | 11201, 11203, 12101 or NONE |
| filecopy= | No | y or n |
ksdevice and filecopy are only relevant if you’re building a RAC: a RAC node must have two network cards, and you use ksdevice to say which of them should be used for installation purposes. The usual answer is eth0. If you miss this constant off, the O/S installer itself will prompt you for the answer, so you only need to supply one now if you want a fully-automated O/S install.
The second node of a RAC needs to have paths and environment variables set up in anticipation of Oracle software being ‘pushed’ to it from the primary node -but it doesn’t itself need a direct copy of the Oracle installation software. Hence ‘filecopy=n’ will suppress the copying of the oradb…zip files from Asquith to the node. If you miss this constant off, the answer defaults to ‘y’, which means about 4GB of disk space may be consumed unnecessarily. It’s not the end of the world if it happens, though.
The oraver constant is required, though. It lets the server build process create appropriate environment variables and directories, suitable for running Oracle eventually. You can only specify 11201, 11203 or 12101 depending on which version of Oracle you intend, ultimately, to run on the new server. If you don’t ever intend to run Oracle on your new server, you can say “oraver=none”, and after a basic O/S install, nothing else will be configured on the new server.
A complete bootstrap line, suitable for the first node of an intended 2-node RAC, might therefore look like this:
ks=http://192.168.8.250/kickstart.php?distro=centos&version=64&hostname=my_racnode1&domain=dizwell.com&rac=y&ip=192.168.8.1&ic=10.0.0.1 oraver=12101 filecopy=y ksdevice=eth0
Notice there are spaces between the three constants, and between them and the original part of the bootstrap line. Here’s another example, this time for the second node of a Data Guard RAC:
ks=http://192.168.8.250/kickstart.php?distro=oel&version=63&hostname=my_dgnode2&domain=dizwell.com&dg=y&ip=192.168.8.6&ic=10.0.0.6 oraver=12101 filecopy=n ksdevice=eth0
5. Asquith Speed Keys
It’s not really that much typing when you come to do it, but if you want to make things even quicker, there are four ‘speed keys’ available to you:
| Speed Key | Effect |
|-----------|--------|
| sk=1 | The server will be called alpher.dizwell.home, with IP 192.168.8.101 and Interconnect IP of 10.0.0.101. It will run as the first node of a RAC and is configured to look to Asquith as its shared storage source. |
| sk=2 | The server will be called bethe.dizwell.home, with IP 192.168.8.102 and Interconnect IP of 10.0.0.102. It will run as the second node of a RAC and is configured to look to Asquith as its shared storage source. |
| sk=3 | The server will be called gamow.dizwell.home, with IP 192.168.8.103 and Interconnect IP of 10.0.0.103. It will run as the first node of a RAC but is configured to look to Rosebery as its shared storage source. |
| sk=4 | The server will be called dalton.dizwell.home, with IP 192.168.8.104 and Interconnect IP of 10.0.0.104. It will run as the second node of a RAC and is configured to look to Rosebery as its shared storage source. |
If you want to use one of these speed keys, your bootstrap line becomes:
ks=http://192.168.8.250/kickstart.php?sk=2 oraver=11203 filecopy=n ksdevice=eth0
Note that you still have to supply the three Kickstart constants -but at least you don’t have to supply any of the normal parameters. In fact, since only the oraver constant is compulsory, the line could be even shorter than this, if you’d prefer.
6. Creating Databases and Clusters
All Asquith client servers end up being created with a root user, whose password is dizwell, and an oracle user, whose password is oracle. Use the operating system’s own passwd command to alter those after the O/S installation is complete, if you like.
All Asquith client servers are also built with an appropriate set of Oracle software (if requested), stored in the /osource directory. Grid/Clusterware will be in the /osource/grid directory and the main Oracle RDBMS software will be in the /osource/database directory. Your job is therefore simply to launch the relevant installer, like so:
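That is, as the oracle user, you run one (or, in turn, both) of the following -the paths follow directly from the /osource layout just described:

```
/osource/grid/runInstaller       # Grid Infrastructure / Clusterware (for ASM and RAC)
/osource/database/runInstaller   # the Oracle RDBMS software itself
```

Run the grid installer first if you’re doing RAC or ASM, then the database installer once the Clusterware is in place.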
If you don’t want to run a RAC or use ASM, just pretend the grid software’s not there! If you do, standard operating procedures apply:
- Run the /osource/grid/runInstaller
- Do an advanced installation
- Select to use ASM, keeping the default DATA diskgroup name
- Change the Disk Discovery Path to /dev/asm*
- Use External redundancy (at this stage, Asquith doesn’t do redundancy)
- Click ‘Ignore All’ if any ‘issues’ are discovered
- Run the root scripts on the various nodes when prompted
Once the Clusterware is installed, you can install the database in the usual way:
- Do a typical installation
- Select to use Automatic Storage Management -the DATA disk group should be automatically available
- Supply passwords where appropriate
- Ignore any prerequisite failures
- Run the root script when prompted
It’s all pretty painless, really -which is precisely the point!
7. Building a Rosebery Server
Just as a Salisbury server is accompanied by a Balfour server when building a Data Guard environment, so Asquith has his Rosebery. (Archibald Primrose, 5th Earl of Rosebery, Prime Minister of Great Britain 1894-1895). A Rosebery server is built in the same way as an Asquith server (that is, a 60GB hard disk minimum, 512MB RAM minimum and one NIC), but doesn’t need a second DVD drive from which to find its kickstart file: for that, you simply point it at Asquith.
The bootstrap line to build a Rosebery server is thus:
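Since a Rosebery server fetches its kickstart file over the network from Asquith rather than from a second DVD, the line presumably takes this form (I’m assuming here that the file is published as rosebery.ks in Asquith’s web root, with the Asquith server at its usual 192.168.8.250 address):

```
ks=http://192.168.8.250/rosebery.ks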
After that, the Rosebery server builds automatically. It then provides a new iSCSI target for client servers built with the dg=y parameter in their bootstrap lines to connect to. In short, Rosebery provides shared storage to clients, just as Asquith does -and therefore provides a secondary, independent storage sub-system for Data Guard clients to make use of.
Between them, Asquith and Rosebery provide a conveniently-built infrastructure in which standalone, RAC and Data Guard Oracle servers can be constructed with ease. They automate away a lot of the network and storage “magic” that is usually the preserve of the professional Systems Administrator, leaving the would-be Oracle Database Administrator to concentrate on actual databases! By employing ASM as their shared storage technology, Asquith and Rosebery also allow the DBA to explore and learn an important aspect of modern Oracle database management.
I’ll be putting up a section of the site for Asquith to match the one that already exists for Salisbury. Until then, the only place for any Asquith documentation (and the only link to download the all-important Asquith ISO) is this article itself.