Build a Two-Node RAC

1.0 Introduction

Your next venture in the Churchill framework is to construct a two-node Oracle 12c Real Application Clusters (RAC) installation. We refer to the first of the nodes as Alpher and to the second as Bethe.

For the steps in this article to work you must (a) have built a Churchill server and have it running in the background; and (b) not have a standalone Alpher server running. If you built Alpher as described in my earlier standalone article, it needs to be shut down now -because having two different machines with the same hostname and IP address on the host-only network that Churchill uses is just a recipe for confusion and disaster!

2.0 Hardware Configuration

You will need to construct two new virtual machines, as follows:

  • 2 x CPUs
  • 5120MB RAM
  • 40GB hard disk
  • 2 x host-only network interfaces
  • 1 x CD-ROM drive

The two machines should be identical. Perhaps the trickiest part to get right is the requirement for two different host-only network interfaces -because they need to be built using different virtual networks.
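
If you're a command-line sort of person, by the way, VirtualBox lets you script the whole VM build with its VBoxManage tool. Here's a sketch of the idea, using reasonably recent VirtualBox syntax (the VM name, disk name and ISO path are just my choices, and the two host-only networks it references are the ones we're about to create in the next section):

VBoxManage createvm --name Alpher --ostype RedHat_64 --register
VBoxManage modifyvm Alpher --cpus 2 --memory 5120 --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 hostonly --hostonlyadapter2 vboxnet1
VBoxManage createmedium disk --filename Alpher.vdi --size 40960
VBoxManage storagectl Alpher --name SATA --add sata
VBoxManage storageattach Alpher --storagectl SATA --port 0 --device 0 --type hdd --medium Alpher.vdi
VBoxManage storageattach Alpher --storagectl SATA --port 1 --device 0 --type dvddrive --medium /path/to/CentOS-6.8.iso

The GUI approach described below achieves exactly the same thing, of course.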

3.0 Virtual Networking

When you built Churchill in the first place, you used your virtualization software’s capabilities to create a ‘vboxnet0’ or ‘vmnet1’ network, operating on the 192.168.8.x subnet. This is the interface that Alpher uses to talk to Churchill; it’s also the interface that Bethe will use to do the same thing. From your physical desktop, too, you can communicate with all three machines using this same interface -so we’ll call it the public interface, because just about every machine can communicate with every other machine on it.

But now you’ll need to create an entirely separate second network. In VirtualBox it will probably get called vboxnet1; in VMware, you can call it anything you like -but vmnet2 seems like a sensible name to start with. This network will be used only by Alpher to talk to Bethe. No-one else can see it or use it. So we’ll call this the private interface, and it should be constructed to use the 10.0.0.x subnet.

In VirtualBox, for example, you’d click File -> Preferences, select the Network item and click the Host-only Networks tab. You’d see this sort of thing:

That’s the single host-only interface you created earlier. You need to click the ‘+’ button now and have a second host-only network created for you. The screen will display it with its new auto-generated name:

Click the screwdriver icon to configure it:

The IPv4 address will be filled in automatically for you -but probably not with a 10.0.0.x address. So you replace the auto-generated address with ‘10.0.0.1’ as you see here. Note the new address ends in ‘1’. You might want to click the DHCP Server tab, too, to make sure that DHCP is not switched on for this new interface.
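
Incidentally, the same job can be done from the command line with VBoxManage, if you prefer. A sketch, assuming the newly-created interface comes up with the name vboxnet1:

VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.0.0.1 --netmask 255.255.255.0
VBoxManage dhcpserver modify --ifname vboxnet1 --disable

The last command is the command-line way of making sure no DHCP server is left switched on for the new interface.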

In VMware Workstation, the process is quite similar. You run the Virtual Network Editor tool as you did when building Churchill and you’ll see your existing host-only network interface:

So all you do now is click the [Add Network] button:

You get to choose the name of your new interface (or at least, to choose its numeric component) and then say whether it’s a bridged, NAT or host-only interface. For Churchill purposes, all network interfaces are ‘host-only’. Once you click the [Add] button, you return to the earlier screen:

The new interface exists -but it’s configured incorrectly. At the moment, it’s built to acquire DHCP addresses, and that’s not what we need. So, whilst the new interface is highlighted at the top of the screen, un-check the ‘Use local DHCP service…’ option and type instead a ‘10.0.0.0’ entry in the Subnet IP field:

Note how, in VMware, you end your new IP address in a zero. In VirtualBox, you end it with a ‘1’.

Anyway, once you click [Save], you now have the two host-only networks we need, one public, one private.

You can then build your new Alpher and Bethe servers as previously described. Neither VMware nor VirtualBox builds new VMs with more than one network interface by default, so after each VM has been created with one, open its settings and add the second interface manually, taking care to ensure that one interface is assigned to the 192.168.8.x host-only network and the other to the new 10.0.0.x one.

For example, in VirtualBox, Alpher starts off by looking like this:

So you need to click on the Adapter 2 tab, switch on the ‘Enable Network Adapter’ option, select the ‘Host-only Adapter’ drop-down option and then, in the ‘Name’ field, be sure to select the ‘vboxnet1’ option:
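
The command-line equivalent, should you want it (the VM must be powered off first):

VBoxManage modifyvm Alpher --nic2 hostonly --hostonlyadapter2 vboxnet1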

If you’re using VMware, you’d do something similar. When you first build Alpher, you only get a fairly basic choice of networking:

But when you subsequently customize the completed virtual machine, you get to see more details:

So here, you just click [Add], select Network Adapter, and then complete the process by selecting a custom interface and then the new vmnet2 interface you created earlier:

So in VMware, you end up with a VM that uses 1 ‘host-only’ and 1 ‘custom’ interface. In VirtualBox, you use 2 host-only interfaces, distinguished from each other by name.

Other virtualization platforms will have their own unique ways of achieving this sort of network configuration. I can’t document them all here, of course, but be clear about your intended end-result and you should be able to work out how to achieve it.

Here’s my completed Alpher build:

Note the 2 processors (CPUs); the 5120MB RAM; the ‘optical drive’ that has CentOS 6.8 loaded into it; and the two distinct network cards. If you can end up with something like that, we’re in business.

Of course, having built Alpher looking like that, you can clone it and call the clone ‘Bethe’ (or build Bethe manually and independently, but to the same specifications -it’s entirely up to you how you go about getting your second node built).
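
In VirtualBox, for instance, the clone can be taken with one command (check afterwards that the clone’s network adapters get their own MAC addresses):

VBoxManage clonevm Alpher --name Bethe --register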

I should also just mention before we go on: your choice of operating system is entirely dictated by what you built Churchill with originally. If you created Churchill as a Scientific Linux 6.7 machine, for example, then that’s what Alpher and Bethe must be, too. You can’t mix-and-match your RHCSL distros, nor their versions.

4.0 Booting Alpher

The way we boot Alpher in this 2-node configuration is very similar to the way we booted it when building Alpher as a standalone server, so I shall borrow heavily from the previous article’s instructions at this point!

When you are ready (and with Churchill running in the background), boot your new Alpher server. You’ll first see something resembling this boot menu screen:

The specific menu screen you see will depend on your choice of RHCSL distro: if you’re using OEL, for example, your eyes will be bleeding right about now as you deal with the pulsating red mess that is Oracle’s choice of boot menu colour scheme:

But blue or red, you want to make sure you are sitting on the ‘Install or upgrade an existing system‘ menu option… and then press <TAB>. This will reveal a “bootstrap” line:

On to the end of its existing contents, you’re now going to type the command which will tell this new server to build itself using an auto-configuration that can be found and fetched from the Churchill server.

The basic form of this new bootstrap command will be:

ks=http://churchill/ks.php?sk=1 ksdevice=eth0

That is, use Churchill as a web server to ‘feed’ the new VM the contents of ks.php as a Kickstart file, filtered by the fact that we are using “speed key 1”. Speed key 1 is what we use to define “Alpher”, the first of the Churchill framework servers. We also add a note (space-separated from the rest of the bootstrap line) to tell the O/S installer that the “eth0” network interface is our primary interface (after all, there is now a choice of two to confuse things!)

Note that everything you type on the bootstrap line must be in lower case.

If “sk=1” bothers you as a bit too “code-y”, you can modify the bootstrap line to mention Alpher by name:

ks=http://churchill/ks.php?hostname=alpher ksdevice=eth0

The ‘sk=1’ and ‘hostname=alpher’ parameters are, essentially, identical and it’s up to you which version you prefer to use. Some like to type less; others prefer ‘meaning’ in what they type: Churchill caters to all tastes!

So one or other of those bootstrap variants is your starting point. If you are booting with CentOS 6.8 (and Churchill was built with CentOS 6.8), it’s also your ending point, because then the default O/S Churchill expects will match what you’re actually using. But if you are using another distro or version, you must specify that now. You do so by bolting on suitable “distro=X&version=Y” parameters to the first part of the bootstrap line, before the ‘ksdevice’ bit. For example:

ks=http://churchill/ks.php?sk=1&distro=oel&version=67 ksdevice=eth0

Or:

ks=http://churchill/ks.php?hostname=alpher&distro=sl&version=64 ksdevice=eth0

Or:

ks=http://churchill/ks.php?hostname=alpher&distro=rhel&version=69 ksdevice=eth0

And so on.

Basically, you can choose from centos, sl, oel and rhel as possible distros, and anything from 6.3 to 6.9 as possible versions, stripped of their decimal points (so 6.5 becomes 65 and so on).
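
Incidentally, since ks.php is served over plain HTTP, you can sanity-check a bootstrap line before ever booting a VM by fetching it from your physical desktop. Assuming Churchill is on its usual 192.168.8.250 address, something like this should spit back the start of a generated Kickstart file (the quotes stop your shell eating the ampersands):

curl -s "http://192.168.8.250/ks.php?sk=1&distro=oel&version=67" | head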

You can additionally choose one other characteristic of this framework build: are you going to use the ‘split ownership’ model for the various bits of Oracle software? Or are you instead going to use the ‘unitary ownership’ model? The difference is that in split ownership, someone called “grid” owns the Grid Infrastructure install, whilst someone called “oracle” owns the Oracle database software install. In unitary ownership, someone called “oracle” owns everything.

Practically, your choice of ownership model makes quite a difference: if you want to add a new ASM disk or perform some ASM re-balancing, you’d first have to remember to log on as ‘grid’ instead of ‘oracle’ in the split model, for example. If you were to try patching things in the split ownership environment, you’d find you would fail if you were trying to do it whilst sitting in the /home/oracle directory -because user ‘grid’ has no rights there. Little niggles like that make split ownership a bit more challenging to work with -but it also happens to be the way most production environments in my experience are created, so to that extent it’s the more ‘realistic’ way of doing things.

By default, the Churchill framework assumes that you will operate in split ownership mode. That is, the bootstrap parameter split defaults to a value of y. If you don’t want split ownership, therefore, you must say so by adding an additional &split=n onto your bootstrap line, somewhere before the ‘ksdevice’ bit. For example:

ks=http://churchill/ks.php?hostname=alpher&distro=rhel&version=69&split=n ksdevice=eth0

The ordering of the parameters is not important. You could successfully boot with this, for example:

ks=http://churchill/ks.php?distro=rhel&hostname=alpher&version=69&split=n ksdevice=eth0

When you type your decided-upon bootstrap line, it may be long enough to wrap unpleasantly onto the next line of the screen:

It doesn’t matter if it wraps: just keep typing and do it without adding any spaces (until you get to the ‘ksdevice’ bit which must have a space between it and the rest of the line). Your bootstrap line must be continuous and syntactically correct, but any wrapping that takes place won’t affect those qualities.

5.0 Booting Bethe

Bethe is the second node of your soon-to-be cluster. It’s built identically to Alpher (in terms of memory, disk space, CPUs and networking, etc). It is also booted in a very similar way: as before, you press [Tab] when the boot menu appears and then append a suitable bootstrap line.

Bethe’s bootstrap line will be identical to Alpher’s except for the sk= or hostname= component.

Being the second node, Bethe gets an ‘sk=2’ or ‘hostname=bethe’ bootstrap parameter, but everything else you used to boot Alpher with remains the same.

So, if you booted Alpher with:

ks=http://churchill/ks.php?sk=1&split=n ksdevice=eth0

…then you’d boot Bethe with:

ks=http://churchill/ks.php?sk=2&split=n ksdevice=eth0

If you mentioned distro, version and split parameters when building Alpher, you need to mention them again when building Bethe –and specify exactly the same values for those parameters, too. If you built Alpher with split=n, for example, you cannot build Bethe with split=y (not if you want anything related to Oracle to end up working, anyway!)

Once Alpher and Bethe have constructed themselves, you’ll be asked to reboot each server in turn. When they come back up, you’ll find each is a minimalist, command-line only server with no graphical capabilities. You can log on to each as user root, password oracle. As root, type the commands:

cd /osource
ls -l

On Alpher, you’ll see this sort of output:

[root@alpher ~]# cd /osource
[root@alpher osource]# ls -l
total 8
drwxrwxr-x. 7 oracle oinstall 4096 Jul 8 2014 database
drwxrwxr-x. 7 oracle oinstall 4096 Jul 8 2014 grid
-rwxrwxr-x. 1 oracle oinstall 0 Feb 8 01:49 scripts.zip

On Bethe, you’ll see this instead:

[root@bethe ~]# cd /osource
[root@bethe osource]# ls -l
total 0

This tells you that the Oracle software has been copied onto Alpher but not onto Bethe. That in turn means that you can perform Oracle software installations on Alpher but not Bethe. In a sense, it makes Alpher the ‘prime node’, and Bethe a bit of a subsidiary one -though, in fact, as we saw earlier, they are necessarily built identically and are thus really as capable as each other.

But this is the way we build RACs: the Oracle installer is run on only one node of the cluster -and then, as part of the installation process itself, the necessary software is ‘pushed’ onto other nodes of the cluster in turn. The node you run the Oracle installer on is sometimes referred to as ‘the local node’; every other server is called ‘a remote node’. In the Churchill framework, Alpher is local and Bethe will be the remote node.

6.0 Before Installing Grid Infrastructure…

Once both nodes have built themselves, you are almost ready to perform the first of the two Oracle software installs: the one that provides the ‘glue’ which binds multiple nodes into a single cluster, called the Grid Infrastructure.

We do that by connecting to Alpher from a PC that is capable of displaying Oracle’s GUI software, using a remote X connection with X forwarding enabled. I discussed how to do this in some detail in the standalone Alpher article. The short version is: if your physical desktop is running Windows, use MobaXterm; if it’s running Linux, use ssh -X connections.

Who you connect to Alpher as depends on whether you went for split=n on your bootstrap line whilst building Alpher and Bethe. If you did, then you connect as ‘oracle’; if you didn’t, then you connect as ‘grid’. The password is oracle in both cases.

There is one other preparatory point to mention: if you are building this Alpher/Bethe cluster after having built Alpher as a standalone node as per my earlier article, and you are using the same Churchill server as before, then that earlier build of Alpher will have ‘stolen’ all the fake hard disks that Churchill makes available to databases. That would mean that your new installation wouldn’t find suitable storage for a RAC database.

You can fix this problem by first logging on as root to your newly-built Alpher server. In the /root directory, you’ll find an asm-initialize.sh shell script. Run that and all the fake hard disks on Churchill will be wiped clean, thus making them available for your new RAC:

[root@alpher ~]# pwd
/root
[root@alpher ~]# ls asm*
asm-initialize.sh
[root@alpher ~]# ./asm-initialize.sh

When you run the script, you’ll see output like this:

[root@alpher ~]# ./asm-initialize.sh 
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00423389 s, 121 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.001968 s, 260 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00172754 s, 296 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00161023 s, 318 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0018454 s, 277 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00177159 s, 289 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00181617 s, 282 kB/s
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00153771 s, 333 kB/s

I wouldn’t worry about the specifics of that output for now; so long as you see 8 sets of ‘512 bytes…copied…’ statements, it’s done its job.
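
For the curious, though: each of those stanzas is the tell-tale signature of a dd command zeroing the first sector of one of the eight fake disks. Conceptually, it’s something like this -though I’m paraphrasing the script rather than quoting it, and the device name here is made up:

dd if=/dev/zero of=/dev/sdb bs=512 count=1

Wiping that first sector destroys any existing ASM disk header, which is what makes the disk ‘available’ again.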

As I say, you only need to worry about running this asm-initialize script if your Churchill server is not freshly-built but has been previously used for another Oracle build. If it is freshly built, then your fake hard disks are already initialized and you needn’t re-initialize them now.

7.0 Installing Grid Infrastructure

To perform the Grid Infrastructure installation, therefore, you now connect to Alpher from your physical desktop like so:

ssh -X grid@192.168.8.101

That’s the ‘public’ IP address which Churchill assigns to Alpher on our behalf. I’m connecting there as the user grid, because I let the split-ownership model apply (it’s the default and I didn’t make a point of switching it off). If you said split=n when building Alpher (and Bethe), you’d connect as the user oracle instead. In all cases, you specify the ‘-X’ switch when connecting so that you can have applications running on Alpher ‘paint’ their graphical screens onto your physical desktop. That way, Alpher can run a graphical software installer despite not having any graphics capabilities of its own!
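
If you want to prove the remote-X plumbing works before launching anything as heavyweight as an Oracle installer, you can check for a forwarded display once connected -though I can’t promise every minimal build has the X utilities installed:

echo $DISPLAY
xdpyinfo | head -3

A DISPLAY value along the lines of ‘localhost:10.0’ means the forwarding is in place.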

All the software has already been placed on Alpher for you by the automated build process we saw back in Section 5 above. All you have to do, therefore, is invoke the installer as follows:

/osource/grid/runInstaller

Alpher, and your desktop, will respond accordingly:

You’ll get some messages displayed in your original terminal session as Oracle checks that your system can display its wares, then you’ll see the first screen of the Grid Infrastructure installation wizard appear. For the most part, your job is now to click [Next] quite a lot of times!

Here’s a walk-through of the complete wizard:

One of the critical points to highlight in that lot is the need for the cluster and SCAN names to be set to racscan. You have no choice in that in the Churchill framework -since the SCAN name in particular has to be resolvable via DNS lookup. The automation bits of Churchill mean that’s true for the name ‘racscan’, but for no other.

Additionally, make sure to change the ‘disk discovery path’ to /u01/app/oradata when prompted: that’s the location Churchill uses to ‘publish’ the existence of its fake ASM hardware. When selecting disks to use for the database, I’ve suggested you only pick 5 of the 8 disks. There’s no absolute reason why you can’t choose more or fewer -but 5 definitely provides the space needed for a 12c database and gives you three ‘spares’ to add to the database later (which is a skill always worth practising).
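
If you want to reassure yourself that there’s something there to discover before you start, list that directory as the user you connected as:

ls -l /u01/app/oradata

You should see the eight fake disks mentioned earlier.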

8.0 Installing Database Software

Once your Grid Infrastructure installation is complete, close down the wizard, log out as grid, and log back on to Alpher from your physical PC as the user oracle instead (assuming you are using the split ownership model; if not, then you will already be logged on as oracle in any case!). The oracle user’s password is (surprise!): oracle. Since you’re going to be running a graphical software installer, your connection to Alpher needs to be made with the “-X” switch, just as it was for the grid user’s connection:

ssh -X oracle@192.168.8.101

You then launch the database software installation by typing the command:

/osource/database/runInstaller

Here’s me doing exactly that, my remote-X-enabled connection in the background, the Oracle software response in the foreground:

As usual, for the most part, navigating your way through this wizard consists of clicking [Next] many times, but here’s a walk-through of all the screens you’ll meet:

It’s important during that process that when the ‘run root scripts’ prompt pops up, you make fresh connections as root to each of the cluster nodes in turn. You’re only running a command-line program, so you don’t need GUI capabilities. Therefore, something like:

ssh root@192.168.8.101

and

ssh root@192.168.8.102

…are the commands you are after. Run the script shown in the Oracle pop-up box and when prompted for the location of the local bin directory, just press [Enter] to accept its default suggestion.

Make sure you run the root script on both nodes, one at a time, doing Alpher first and Bethe second.
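
To give you a concrete (if approximate) idea of what that involves: given the ORACLE_HOME this build uses, the pop-up’s instruction will boil down to something like this on each node in turn -but do use whatever path the dialog actually displays:

ssh root@192.168.8.101
/u01/app/oracle/product/12.1.0/db_1/root.sh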

9.0 Some Post-Install Work

Congratulations are in order at this point: you have a cluster of virtual machines and you’ve built a RAC database on top of it. But there are a couple of minor things to tweak before you declare the job completely finished.

9.1 Correct the ORACLE_SID

For a start, make a fresh connect to Alpher from your physical PC as oracle and try to connect to the database:

oracle@alpher:~ [orcl]$ sql

SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 9 10:45:45 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

Notice I’m using the command sql here; that’s an alias for the proper command, which is “sqlplus / as sysdba”. The short-form alias is just quicker to type -and it’s something Churchill sets up for you. If you prefer not to use the alias, the full sqlplus command works just as well.

But that’s not the issue here. The problem that set of commands and responses reveals is that you’ve just ‘connected to an idle instance’ -and the instance definitely isn’t idle, because you’ve only just created it! Something is wrong.

And the thing which is wrong is this:

SQL> exit
Disconnected
oracle@alpher:~ [orcl]$ export ORACLE_SID=orcl1
oracle@alpher:~ [orcl1]$ sql

SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 9 10:48:20 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>

See how I exit out of the original SQL*Plus session, set my ORACLE_SID to ‘orcl1’ and then re-launch SQL*Plus: this time it connects to something and does NOT say it’s ‘idle’.

The issue, in other words, is simply that Churchill sets your initial ORACLE_SID to a generic ‘orcl’, rather than to an ‘orcl1’ (or, as in Bethe’s case, ‘orcl2’). We need to correct that for future use, so as the oracle user on each node in turn, issue these commands:

cd
nano .bashrc

In the file that is now opened, find these lines:

# -------------------------------------------------
# Added for fresh Oracle Installation - Oracle User
# -------------------------------------------------
export ENVTYPE=NONPROD
export ORACLE_HOSTNAME=$(hostname)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_SID=orcl
export ORACLE_UNQNAME=orcl
export PATH=$ORACLE_HOME/bin:$PATH:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

…and change them as follows:

# -------------------------------------------------
# Added for fresh Oracle Installation - Oracle User
# -------------------------------------------------
export ENVTYPE=NONPROD
export ORACLE_HOSTNAME=$(hostname)
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_SID=orcl1
export ORACLE_UNQNAME=orcl
export PATH=$ORACLE_HOME/bin:$PATH:.
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Don’t alter the ORACLE_UNQNAME -that’s correct without a numeric suffix. But the ORACLE_SID needs to say “orcl1” on Alpher, and “orcl2” on Bethe. Setting them correctly just means that in future, you can log on as the oracle user and connect to your instance without first having to set ORACLE_SID manually.
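
If you’d rather not edit the file by hand, a one-liner makes the same change; run it as the oracle user on Alpher (and with orcl2 in place of orcl1 on Bethe):

sed -i 's/^export ORACLE_SID=orcl$/export ORACLE_SID=orcl1/' ~/.bashrc
source ~/.bashrc
echo $ORACLE_SID

The anchored pattern makes sure the ORACLE_UNQNAME line is left untouched.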

Whilst we’re here, we could also make sure that the grid user can do the same thing to his ASM instance. Connect to Alpher and Bethe in turn as the grid user and make the same sort of alteration:

ssh grid@192.168.8.101   (and subsequently, @192.168.8.102 for Bethe)
nano .bashrc

Change:

export ORACLE_SID=+ASM

…to…

export ORACLE_SID=+ASM1  (and to +ASM2 on Bethe)
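
Again, if a one-liner appeals more than an editor does, this achieves the same thing (with +ASM2, of course, on Bethe):

sed -i 's/^export ORACLE_SID=+ASM$/export ORACLE_SID=+ASM1/' ~/.bashrc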

9.2 Prettify SQL*Plus

The last bit of post-install tidying up I’d consider doing (though it’s entirely optional) is a consequence of this:

That’s me doing a perfectly ordinary select * from scott.emp in a nice large terminal window, connected as the oracle user. Notice how the results only appear in the left-hand part of the window; that the right-hand part of the window is just wasted space; that the column headings appear multiple times, breaking up the data; and that each row of data wraps around to take up two lines (with the DEPTNO column being forced onto the second line each time).

That all happens because Oracle’s default settings for SQL*Plus are mind-bendingly bonkers. Therefore, Churchill has arranged for you to improve them!

As the oracle user, quit SQL*Plus and issue this command:

/home/oracle/Documents/churchill-postinstall.sh

Now re-launch SQL*Plus as before and try that query again:

That’s the same sized terminal as before, but this time the results of the query are not wrapped, are not interspersed with repeated column headings, and no row breaks over two lines.

It’s just a minor thing -and you are not obliged to run this postinstall shell script, but it’s there to set more sensible line- and page-size defaults if you’d like them.
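
I haven’t reproduced Churchill’s script here, but the standard way of making such settings stick for every session is to append them to the site-wide login script -so, purely as an illustration of the technique, with values of my own choosing:

echo "set linesize 32767" >> $ORACLE_HOME/sqlplus/admin/glogin.sql
echo "set pagesize 50000" >> $ORACLE_HOME/sqlplus/admin/glogin.sql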

9.3 Proving you are a clustered database

One final thing I like to do at this point is to prove that I’m actually running on a multi-instance RAC database -because there’s nothing very obvious in any previous screenshots to say that you are, I think! So here’s how I test that:
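
The test is just a count of the rows in GV$INSTANCE. With both instances running on this two-node build, you should see something like this:

SQL> select count(*) from gv$instance;

  COUNT(*)
----------
         2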

Run that query of GV$INSTANCE on any ‘normal’ Oracle database and you’ll always get a count of 1. If you ever get a count greater than that, it means your database is currently opened by more than 1 instance at a time -and that is only true in (and indeed is the very definition of) a clustered database environment.

Another fun thing to try:

So I’ve connected to Alpher, run SQL*Plus, proved I’m connected to the “orcl1” instance and create a little table there, populating it with a single row containing some text.

Now I connect to Bethe:

I prove I’m now connected to the “orcl2” instance -and yet the “ractest” table is there and query-able …and, indeed, the very same text I just inserted on the other node is visible. Proof that what you do on one instance is visible in the other instance -because both are managing the one database. That again is the very definition of a RAC.
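
If you’d like to repeat the experiment yourself, it boils down to something like the following (the column definition and the exact text are just my choices). First, on Alpher:

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl1

SQL> create table ractest (message varchar2(40));

Table created.

SQL> insert into ractest values ('Hello from Alpher!');

1 row created.

SQL> commit;

Commit complete.

…and then, on Bethe:

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
orcl2

SQL> select * from ractest;

MESSAGE
----------------------------------------
Hello from Alpher!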

10.0 Administration via GUI

I like administering my RACs via SQL*Plus, but some people prefer to do it with GUI (or, strictly speaking, web-based) administration tools. Oracle provides one such tool, out of the box, called “Database Express” (or DB Express). You were told about it at the end of the earlier database creation process:

That pop-up told you that the Database Express URL is “https://racscan:5500/em”. That’s fine as far as it goes: ‘racscan’ is the name of the cluster, and Churchill, Alpher and Bethe can all resolve it happily. Chances are, however, that your physical PC (where you run browsers such as Chrome or Firefox) cannot resolve that name -so you need to know the IP address that ‘racscan’ resolves to.

That is easily done from Alpher, Bethe or (indeed) Churchill:

oracle@alpher:~ [orcl1]$ nslookup racscan
Server: 192.168.8.250
Address: 192.168.8.250#53

Name: racscan.dizwell.home
Address: 192.168.8.202
Name: racscan.dizwell.home
Address: 192.168.8.200
Name: racscan.dizwell.home
Address: 192.168.8.201

That tells you that, in fact, the ‘racscan’ name resolves to three different IP addresses! This is Oracle’s way of load-balancing and making RAC connections highly-available: the ‘thing’ called ‘racscan’ is actually a “Scan Listener” and is accessible from three different IP addresses, any one of which can be running on any of the cluster nodes. Should Alpher die, therefore, there would still be scan addresses available on Bethe -and therefore you can still connect to some part of your cluster, even if the other part collapses in a heap.

Anyway, for GUI administration purposes, it means that you need to open a browser on your physical PC and type the URL: https://192.168.8.200:5500/em (I’m just using the first of the three IP addresses; it doesn’t matter which of the three you decide to use, since each will connect to the same database as the others):

Don’t forget, either, that you’re connecting to an https address, not an http one: Oracle uses encryption on its administration tools, not plain-text communications.

Unfortunately, as you can see, even a correctly-typed URL doesn’t get you straight in. Browsers these days do not like Oracle’s habit of self-signing its encryption certificates, so you are warned that your connection won’t be private. That doesn’t really matter in a self-learning environment, so click that ‘Advanced’ option:

That gives you an option to proceed to the site regardless. In Firefox, the dialogs are a bit different, but the same basic process is followed: click [Advanced], add a security exception, confirm a security exception.

At this point, you may unfortunately see this:

…which tells you that the DB Express application is heavily dependent on Adobe’s Flash plugin. I won’t document here how to go about installing Flash into your browser if you are prompted to do so: the options vary too much depending on the O/S you’re using and, indeed, on the specific browser in use.

Let’s assume you do manage to get Flash installed, however. This is what you’d eventually see once you get through the security warnings:

You’re logging on to your database at this point, so you need to use one of those administrative accounts whose passwords you set during the database software install. I’m using the SYS account -and that means I have to check that ‘as sysdba’ option. You could also use the SYSTEM account, in which case that option does not need to be checked. Click the [Login] button when you’re ready:

…and now you have access to a rich, graphical environment in which to do clustered database administration. I won’t detail how to use this tool here. All I will say is that there’s a lot you can do with it -and most of the fun starts by poking around the menu options along the top of the screen: Configuration lets you configure the database’s memory and initialization parameter values. Storage lets you create and resize tablespaces. Security lets you create new users or drop existing ones, change their passwords and alter their privilege profiles. Perhaps the most useful menu of all, however, is the Performance one, which really gives you insight into what is running inside the database and how it could be tuned to run better.

11.0 Conclusion

In this article, I’ve shown you how to build a pair of new servers using automated Kickstart technology which leverages the fact that Churchill exists and is running in the background. With just a couple of boot-time commands (“sk=1 ksdevice=eth0” and “sk=2 ksdevice=eth0”), all the correct bits and pieces are seamlessly stitched together to create Alpher and Bethe, which can then work together successfully as a cluster.

I’ve shown you how to perform a Grid Infrastructure install to stitch Alpher and Bethe together into a single cluster (called ‘racscan’), and then a Database software install that creates a demonstrably clustered database, managed by two distinct instances.

And I finished off by showing you how you can use various command-line and web-based tools to interact with your new database and begin to manage it.

For the rest, it’s down to you: you have a database and the tools to administer it. Using SQL*Plus, ASMCMD, DB Express and similar tools, you can now learn for yourself how to interact with and administer an ASM-based Oracle RAC database. Good luck playing around with your new clustered infrastructure!