Debian 6 (“Squeeze”) only uses two virtual desktops by default -which means, if you enable the Desktop Cube, you actually end up enabling the Desktop Plane!
In the old days, you’d solve this problem by right-clicking the virtual desktops applet in the bottom panel, clicking Preferences, and changing the number of virtual desktops there (to four, if you want a proper cube back!):
As you can see here, however, you now only get an option to adjust the number of rows the virtual desktops are displayed in, which isn’t the same thing at all, and which has no effect on the number of virtual desktops which exist in the first place. So that option’s not on these days. Where else can you do the deed, then?
Well, if you want desktop cubes, you are presumably running Compiz desktop effects. And, if you have any sense, that means you’ll have installed the CompizConfig Settings Manager (System → Preferences → CompizConfig Settings Manager menu options if so). At the very top of that application is an item for General Options -and that, in turn, has a tab for Desktop Size. Click that, and bump the Horizontal Virtual Size up to 4:
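If you prefer the command line, the same change can (probably) be made with gconftool-2 -assuming, that is, that Compiz is using its GConf settings backend, which is the usual arrangement but by no means guaranteed on every setup:

```shell
# A sketch only: assumes Compiz stores its settings in GConf,
# under the default "screen0" profile.
gconftool-2 --type int --set /apps/compiz/general/screen0/options/hsize 4
```

Either way, the effect is the same as dragging the Horizontal Virtual Size slider in CCSM.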
Not, I think, that anyone’s really paying attention, but I’m back from London, having had a great time there. Must have walked about 60 miles in the 10 days; fell back in love with The Tube; was present to see a High Court judge sworn in; enjoyed a Latin mass at the Brompton Oratory and a lovely Evensong at Westminster Abbey (I try to be ecumenical in all things!); had a wonderful time with my extended family after all this time; and much, much more.
I mention it now only because I’ve just deleted my Twitter and Facebook accounts (two of the biggest, most pointless wastes of time on the planet, I think!), and I wouldn’t want anyone thinking I was dead or something!
Have a few tourist memories on me, anyway:
(Battersea Power Station. Still un-re-developed after all these years!)
(St. Paul’s Cathedral main entrance. Interestingly processed by my ‘panorama stitching’ software!)
(King’s Cross Station, with St. Pancras Hotel in background)
For the first time in 16 years (nearly 17, in fact), I am travelling back to London for a 10-day stay. I shall be doing lots of tourist stuff in London, with side-trips to Dublin and my old home town of Gillingham (plus Chatham/Rochester).
I’m looking forward to it very much -hope the cats and wallabies won’t mind putting up with the house-guest animal-sitters we’ve arranged for the interim, though!
The flight leaves in about 10 hours (doesn’t time drag when you really don’t want it to!), and I’ll be touching down in London at around 7AM (GMT) on October 1st. I won’t see these sunny shores again for a further three weeks after that.
I will confess that I’ve tended to stick with ye olde exp and imp (export and import) utilities because I know them reasonably well and they do the job. However, the writing has been on the wall for both of them for quite a few years and you’re supposed to have been progressively switching over to using the all-new, all-singing, all-dancing expdp and impdp replacements. For the stuff I use these utilities for, however, I’ve managed not to need the new versions… until today. We recently upgraded all our databases to 11.2 from 10.2.0.4 -except one, which got converted to 11.1 months ago- and when you try to do a traditional export against the 11.1 database, it terminates with an error:
Hunting around on the subject, it seems there’s a bug in late versions of exp that makes this expected behaviour, which is a bit of a show-stopper. However, those same bug reports all unanimously declare that the new data pump version of export does not suffer from the same problem. So, finally, it’s time to switch to using the new “dp” utilities after all!
The trouble is, however, that we’ve used the traditional utilities to pull production data over to development environments. Whilst logged onto the dev server, for example, I’d do something like:
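It would have looked something like this (the account details and the “prod” tns alias are placeholders, of course):

```shell
# Run on the dev server; 'prod' is a tnsnames alias pointing
# at the production database
exp system/password@prod file=prod_exp.dmp full=y
```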
…and the export file, called “prod_exp.dmp” would be produced in whatever directory I happened to be sitting in when I typed the command, on the dev server. Production data was thus dumped out to a dev server directory, thanks to the use of a tns alias in the connection string bit of the command. Put another way: the export utility runs locally on my development server, but knows to connect to my production database to fetch its data.
Unfortunately, this is not how the Data Pump utilities work. They always run on the server and never on the client. Which means that, other things being equal, the output file is always on the server, too -even if the utility is invoked remotely!
So, for example, you could issue this command whilst logged into the development server:
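Something along these lines, say (credentials and tns alias hypothetical, as before):

```shell
# Invoked on the dev server, but connecting straight to
# the production database via the 'prod' tns alias
expdp system/password@prod directory=dumpdir dumpfile=prod_exp.dmp full=y
```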
…and the thing would work just fine. But when you then navigate the file system of your development server, you will not be able to find a file called “prod_exp.dmp” anywhere! And that’s because the file has been written on the production server, in whatever location the “dumpdir” directory object is pointing to on that server.
You can invoke data pump export remotely, in other words, but all the action and all the output will only ever happen on the remote database. If you want the dumpfile to be sitting on your development server’s hard disk at the end of the exercise, this sort of syntax won’t achieve that, I’m afraid!
But this, happily, will:
expdp \"sys/password as sysdba\" network_link=prodserver directory=dumpdir dumpfile=prod_exp.dmp full=y
With this syntax, typed on the dev box, you’re now connecting locally -that is, to the development database. That therefore means you’re going to be outputting to whatever part of the development server’s file system is referenced by the dev box’s “dumpdir” directory object. But, the new parameter network_link means that the development database doing all the work will know to connect to whatever database is at the end of the “prodserver” database link in order to get its data (so, obviously, for this syntax to work, I must first have issued a command such as create database link prodserver connect to system identified by password using ‘dbprod’). This syntax -a local connection string, but an extra network_link parameter- therefore achieves what the old export command did: production data “pulled” across to my development server and ending up as a dump file on my dev server’s hard disk.
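For completeness, the one-off setup on the development database might look something like this (the names and the directory path are hypothetical; ‘dbprod’ is assumed to be a valid tnsnames alias on the dev server):

```sql
-- Run as a suitably-privileged user on the development database
CREATE DIRECTORY dumpdir AS '/u01/app/oracle/dumps';
CREATE DATABASE LINK prodserver CONNECT TO system IDENTIFIED BY password USING 'dbprod';
```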
In short, the trick is not to invoke the data pump utility with an old-fashioned “@somewhere” to achieve a ‘remote connection’, but to use one of the new parameters the new utility makes available to you to have the data ‘pumped’ from a remote location. Easy -though not perhaps entirely obvious!
When I have to use Windows 7, the first thing that really puts me off is the dolls-house-y login screen. It’s so klutzy as to be annoying, and I didn’t realise until today that it can be gotten rid of. Which makes this an essential thing to do on any Windows 7 system you are unfortunate enough to have to use. You can do it by poking around in the registry, or by running this svelte utility.
It’s a trivial one, really, but if you disable Fedora’s Network Manager in order to be able to create KVM’s bridged network connections as mentioned in the last post, you will probably find that every time you now launch Firefox, it will start in ‘Work Offline’ mode. This is a pain if you want it to automatically re-open tabs you had open when you last shut down!
The fix is quite simple, however. Type about:config in Firefox’s URL bar and agree to be careful! Filter for the word toolkit. Find the item called toolkit.networkmanager.disable and double-click it so that it becomes set to ‘true’. Shutdown Firefox and then re-start it: this time, it will correctly identify that you are not working in offline mode.
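If you’d rather set the preference by hand, the same thing can be done with a one-line user.js file in your Firefox profile directory (the profile folder name varies from installation to installation, so the path below is only indicative):

```
// In ~/.mozilla/firefox/<profile>/user.js
user_pref("toolkit.networkmanager.disable", true);
```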
No matter how much I’d rather not run Windows, there are times I have to -principally because work insists on using Checkpoint’s VPN software for which no Linux client exists. So, when I want to work from home, I have to connect to the office in a Windows 7 VM and use tools like Putty or NX Client to manage the various work PCs and servers (all of which are now, ironically enough, Linux boxes). It’s a pain, and if anyone knows how to use openssl or openvpn to connect to a Checkpoint VPN1 SecuRemote VPN, I’d love to be let in on the secret!
Anyway, a Windows VM is essential -and for years I’ve been using VMware Workstation to run one. I paid my US$189 several years ago (interesting to see that price hasn’t budged a cent since!), and I’ve always found it just a fraction more intuitive and well-behaved than, say, Parallels or VirtualBox. VirtualBox has the distinct advantage of being free, of course -and is now owned by Oracle, which seems to be continuing development efforts quite nicely. But the fact remains, I’ve never really warmed to it: I’m just a VMware Workstation fanboy, I guess! (I stress the Workstation in that product name, however: I’ve never liked the zero-cost VMware Server product, since it seems to require klunky web-based interfaces to achieve anything much.) On the other hand, I got VMware’s ESXi bare metal virtualisation installed at work and it’s never missed a beat, running all of our Oracle dev and test environments extremely well. (Though I will point out the irony that ESXi lacks a native Linux client and I am therefore forced to use a VMware Workstation VM running Windows 7 on my Linux-running work PC just so I can manage the ESXi box, which is running a Linux kernel! Go figure!!)
Anyway, I have dabbled in various virtualization technologies in my time, both hypervisors and host-based ones. Citrix Xen Server, for example, was a good hypervisor, but a little inflexible to manage as compared to VMware’s ESXi similar offering. Microsoft’s Hyper-V was certainly slick, but I had terrible performance issues in the presence of an Nvidia graphics card -and I wasn’t the only one. See, for example, this page of complaints. It’s been a year since I ran any Windows OS natively, either at home or at work, so I’ve not tried Hyper-V since -but according to this Wikipedia article -see the Graphics issues on the host paragraph-, the graphics problems persist (but who trusts Wikipedia?!). Funnily enough, using the Xen virtualization features in Red Hat Enterprise Linux 5.5 is very similar to using Hyper-V: both installations slot ‘underneath’ your physical host’s OS install, turning it, effectively, into a virtualized guest (albeit a “parent” one). The moment Xen goes in, for example, a uname -a command in a terminal will reveal that you’re no longer running a standard linux kernel, but a special “xenified” one (which poses all sorts of problems when you are running proprietary graphics drivers which expect only ever to have to compile against ‘standard’ kernels, for example).
But there’s been one virtualization technology I’ve not used before now: KVM (stands for ‘kernel-based virtual machine’, not ‘keyboard, video, mouse’ as in a KVM switch!). As its name suggests, it’s built into the Linux kernel -and has thus been shipping as a standard part of Red Hat Enterprise Linux since 5.4 days (around about this time last year, basically). Fedora 13, too, includes KVM ‘out of the box’ (as do a lot of other distros, including Ubuntu). It’s not installed or enabled by default, but it’s right there, in the repositories, just waiting for a simple one-line installation command. What’s more, when you do install those KVM packages, unlike when you install Xen, you don’t end up altering the host OS’s status: uname -a still outputs exactly the same as it always did, in other words. This is simply because (the clue is in the name!) the hypervisor is already built into your existing kernel, so you don’t need a special kernel to make use of it. Not disturbing the host’s kernel in this way makes installing things like Nvidia graphics cards (see posts passim!) not a drama, and is thus a Very Good Thing™.
Installing KVM on Fedora 13 is simple:
su - root
yum install qemu-kvm virt-manager virt-viewer python-virtinst
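With the packages in place, the libvirtd daemon needs to be running before the graphical tools will do anything useful. A sketch, using the SysV service tools as found on Fedora 13:

```shell
# Start the libvirt daemon for the current session (run as root)
service libvirtd start
```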
Once the libvirtd daemon is running, you can fire up Applications→System Tools→Virtual Machine Manager. Click the ‘new virtual machine’ icon in the top-left and then, basically, follow the prompts of the ensuing wizard to build your first virtual machine. And that’s about it! It’s really incredibly simple.
The only tricky bit comes if you want your new VM to look like an independent host on your network. That requires “bridged” networking, which doesn’t exist until you manually create it (it would be nice if someone were to develop a graphical tool for achieving this!). Worse, bridged connections don’t work with the fancy new ‘network manager’ way of doing networking that Fedora (and Ubuntu, actually) has adopted. So, if you want bridged connections for your VMs on those distros, here’s what you have to do:
As root, issue the command
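The command in question is the old graphical network configuration tool -on Fedora, that’s system-config-network, assuming the standard package names (install it first if it isn’t present):

```shell
# Launches the traditional graphical network configuration tool (run as root)
system-config-network
```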
Find the eth0 item and click the Edit button. Switch ‘Controlled by NetworkManager’ off, ‘Activate device when computer starts’ on and ‘Allow all users to enable and disable the device’ to on. Click OK and then File→Save to preserve the changes.
Now you’ve just disabled the new-fangled Network Manager, so you have to make sure the old-fashioned network control starts at each reboot:
chkconfig network on
You now create a new bridge network interface by issuing the command:
Add the following lines to the new text file thus created:
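On a Red Hat-style system, the file in question would be /etc/sysconfig/network-scripts/ifcfg-br0, and the standard entries for a bridge definition look like this (I’m assuming DHCP address assignment here; adjust BOOTPROTO and add IP details to suit your own network):

```shell
# Contents of /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
```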
The typing here has to be precise -it’s very case-sensitive, for example, so ‘bridge’ as a TYPE entry won’t work, whereas ‘Bridge’ will!
You now tell the eth0 interface that it is to be bridged. Do that by issuing the command:
Add the following line to the file’s existing contents:
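The line in question simply ties eth0 to the new bridge; on a Red Hat-style system, the file being edited would be /etc/sysconfig/network-scripts/ifcfg-eth0:

```shell
# Appended to /etc/sysconfig/network-scripts/ifcfg-eth0
BRIDGE=br0
```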
Now you can re-start the network so the new configuration is activated:
service network restart
Note that your physical PC now connects to the rest of the world via the br0 interface, which happens to know (thanks to the edits above) that the physical eth0 is responsible for handling its traffic. But, as far as your physical PC is concerned, eth0 is actually a non-active interface in its own right; br0 takes over that role, though functionally it all amounts to the same thing.
Finally, the trouble with this setup is that br0 is a physical network interface, seen and used by your physical PC. But that’s not much use to a virtual guest machine! So now we have to add a virtual interface to our physical interface -and that’s a job for a utility called tunctl. That utility probably needs to be installed to start with, so the relevant command is:
yum install tunctl
Next, issue these commands in sequence:
tunctl -t tap0
brctl addif br0 tap0
The first command creates a virtual interface called “tap0”; the second attaches it to the “br0” bridge, so that the virtual interface’s traffic flows out through the bridge.
Once all that’s done, you can go back to virtual machines you’ve already created and add new network hardware -this time, a bridged interface will be available to you. You can remove the previous NAT one, if you like (or simply disable it within the guest OS). New guests can be created, obviously, that use the right sort of ‘let me at the world!’ interface from the get-go.
One final bit of advice as far as KVM experiments are concerned: having to start libvirtd manually before you begin is a bit of a pain. If you want to ensure libvirtd is started automatically whenever your PC reboots (and thus avoid the need to run it manually in a terminal session), just go to System→Administration→Services and click the libvirtd item, then the [Enable] button. Once it has a green check mark next to it, it’s scheduled to auto-start.
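If you prefer a terminal, the equivalent (again using the SysV tools current on Fedora 13) would be:

```shell
# Schedule libvirtd to auto-start at boot, and start it now
# for the current session (run as root)
chkconfig libvirtd on
service libvirtd start
```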
Apart from the bridged network issue, however, KVM is an absolute doddle to install, configure and run. Performance in the Windows 7 virtual machine I use is excellent -the only drawback is that the virtualized graphics hardware isn’t up to displaying the fancy, semi-transparent Aero interface. But that’s not much of a problem for me. I miss only two other things from my VMware Workstation days: movie capture and snapshots. KVM provides a menu option to take a still screen capture of your guest, which is fine. But it doesn’t have the option to capture screen motion/activity as a movie (this is something the freebie VMware Server product also lacks). There are workarounds, of course (yum install recordmydesktop puts a movie-capturing application at your disposal which will more-or-less do the job), but it would be nice to have the functionality built-in.
The lack of snapshots is a bit more of a drama, to be honest. There are snapshot capabilities that can (probably!) be used, thanks to the use of the qcow2 virtual hard disk format -and you’re supposed to be able to drop into a terminal and issue a qemu-img command that will do the necessary. But I haven’t tried it, I believe it only works for a VM that’s been shut down… and in any case, it all sounds a bit tricky at this stage. I’m really more after a ‘take snapshot’ button in the Virtual Machine Manager window, to be honest! Meanwhile, there is a simple button to do VM cloning (though, again, the VM has to be shut down for the duration), which will do me well enough in the meantime. But this is certainly an area of VM management that it would be nice to see some development on in the next year or two!
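For what it’s worth, the qemu-img incantations would look something like this -untested by me, note, and assuming a shut-down guest whose disk image is the hypothetical win7.qcow2:

```shell
# Create a named snapshot inside the qcow2 image (guest must be shut down)
qemu-img snapshot -c before_patching win7.qcow2

# List the snapshots stored in the image
qemu-img snapshot -l win7.qcow2

# Roll the image back to a named snapshot
qemu-img snapshot -a before_patching win7.qcow2
```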
Other than those slight niggles (oh, there’s one more: no drag-and-drop between host and guest), I think KVM is an excellent virtualization platform, and my trusty copy of VMware Workstation has remained firmly on the bookshelf for this PC’s recent rebuild.
Installing “proper” ATI drivers is impossible, because ATI don’t support the xorg version used by Fedora.
Because K3B is such a good CD ripper (and burner), I investigate KDE-based distros, but can’t stand any of them for long. Discover, however, that living with a mix of KDE and Gnome apps isn’t actually a bad thing but rather gives you the best of both worlds.
CD ripping resolved, therefore, by running a Gnome distro but with some KDE apps installed (like K3B). Still leaves the Stellarium/ATI Graphics problem…
So I install OpenSuse 11.3 -and hate every moment of it! Stellarium works and the ATI drivers install, but the distro sucks in lots of little ways (in addition to the litany mentioned last time, I should add that discovering sshd is not enabled by default was a bit of a surprise!)
After two days with OpenSuse, I reverted to Ubuntu 10.04: Stellarium worked, and K3B still does CD ripping. But I’m still bored with Ubuntu. Worse, I now find some of its control changes annoying: I want my Log Out option to be under my System menu, thanks all the same, not a little button tucked over on the right-hand side of the top panel. Also, I know I can switch the window close/minimise/maximise buttons back to the right-hand side of the window, but I don’t see why I should have to -and I know that the developers are cooking things up for 10.10 that will expect the controls to be on the left, where they put them without much consultation. All a bit Microsoft-ish if you ask me.
So, having been through just about every vaguely-plausible distro and desktop environment out there and received only disappointment for his pains, what’s a boy to do??
Buy an Nvidia graphics card is the answer!
Actually, I happened to have one sitting around in a cupboard, so I whipped out the ATI monster and slipped it into the PCI Express slot in its stead. It sounds a tad drastic but it means I’ve been able to re-install Fedora, have no graphics problems, Stellarium works, K3B works, menus are where I expect them to be, ditto windows controls …and it all behaves very like a Red Hat Enterprise distro, so I feel at home at work, if you get my drift.
Everything is thus tickety-boo …if you overlook the minor matter of having to trash more than $300-worth of ATI graphics card to get there. I have said it before, but it bears mentioning once again: ATI (i.e., AMD) should be ashamed of themselves. The quality of their Linux drivers, the convoluted installation process and the tendency of any system fitted with them to crash or otherwise have “interesting” graphical glitches happen at random moments -it all adds up to an abysmal way of doing business. I won’t say that Nvidia are completely blameless (their installation procedure isn’t exactly brilliant -on Fedora, at least, you have to manually disable the open source drivers by editing the grub.conf file before the installation will succeed, which isn’t what I’d call terribly user-friendly), but they make ATI look like a bunch of incompetent amateurs by comparison.
Funnily enough, my PC at work -which I try to keep more-or-less in synch, distro-wise, with my home PC- experienced exactly the same grief with ATI drivers, even though it was running a copy of Centos 5.5 (which is a clone of Red Hat Enterprise Linux, which ATI claims to be a fully-supported distro). I gave in there, too, and did actually go out and buy a $50 Nvidia Geforce 8400GT… which also immediately made all my graphical and stability problems melt away. So it seems to me to be a generic “feature” of ATI cards that they screw up most Linux distros!
I never had that problem with Ubuntu with the ATI card, I will admit -but then I never tried to install my own ATI drivers in that distro, either. Clicking the ‘activate proprietary drivers’ button is all it takes in Ubuntu (and is all it really ought to take anywhere else, Nvidia and ATI included), but I have no idea which drivers it actually causes to be installed. Had such an ‘automated installation’ feature been available in Fedora, I guess none of this saga would have arisen -but ATI haven’t exactly come to the party in terms of supporting Fedora nearly six months after its release, so I still say it’s more ATI’s fault than anyone else’s.
Anyway, I’m a happy Fedora man again -and discovering the joys of KVM virtualisation for the first time (very impressive, is the short version). And if anyone wants a $300 ATI graphics card, feel free to ask.
OpenSuse 11.3, I mean. It’s quite possibly the nastiest distro I’ve used in a very, very long time. Let me count the ways!
The default ‘start’ menu is horrible. Novell in its wisdom decided that the standard Gnome Apps/Places/System menu is not good enough for their distro and thus replaced it with something that more resembles the giant thing you get in Vista/Windows 7 when you click the ‘Start Orb’. It’s also at the bottom of the screen, not the top. Clearly, a lot of design thought has gone into this change to ‘standard’ Gnome layouts -but I hate it. Happily, by adding a new panel here, adding ‘Main Menu’ items to those panels there, and generally buggering about for long enough, you can get things back to the way Gnome usually is -but it’s effort that shouldn’t be required.
Assuming you’ve added back the traditional “Applications/Places/System” Gnome menu, you may think you’re on the home straight. But alas, the menu structure revealed ‘underneath’ those three menu headings is completely non-standard and utterly bizarre. When you install the VLC media player, for example, in every other distro I’ve seen, it gets added as an item under ‘Audio/Video’ or ‘Multimedia’ off the main Applications menu. Not in OpenSuse, however. There, it gets added as an item under another menu, so you end up having to click Applications → Multimedia → Video Player → VLC. Similarly, Handbrake doesn’t appear as its own item, but gets rolled onto a new ‘Media Editing’ submenu. I hate extra mouse clicks for no reason, and that’s two of them too many! I won’t even get into the business of why one menu sports a noun (“Video Player“) and one a present participle (“Video Editing” …why not “video editor”?). The same sort of thing happens under the Games menu: we get “Board Games” and “Card Games”, which is all well and good… but then an item called “Puzzle”. Not even plural puzzles, note. Let alone “Puzzle Games”. Trivia, I suppose, but annoying all the same: a bit of grammatical consistency wouldn’t go amiss.
How many different ways are there to skin a cat? OpenSuse lets you install software at the command line with Zypper. Then there’s System → System → Install/Remove Software (and I just love the double-up on ‘System’ in the menu structure at this point!) But there’s also System → System → Yast → Software → Software Management. And, just in case you didn’t think that was enough, there’s Applications → System → Yast → Software → Software Management, too. How many menus called “System” do you need in, er, a system, anyway? (It makes writing directions/guides a pain in the neck, if you really wanted to know). And how many menu items pointing to Yast is overkill? Whatever the answer to that, OpenSuse has too many. One more example, then: to update your system, you could do System → System → Software Update. Or you can do Applications → System → Configuration → Software Update. Exactly the same option in two completely different places! OpenSuse basically renders the System menu completely pointless, in fact.
Chromium is broken. I don’t know if this is an OpenSuse thing or a Google thing: I’ve seen reports of it mentioning Ubuntu, for example. But it was all working just fine for me in Fedora. The problem is the Sync tool that allows you to have one set of bookmarks, themes, extensions, preferences and autofill details shared amongst all the desktops on all the PCs you happen to have installed Chrome onto. It’s a great feature -and it’s broken in OpenSuse. The thing authenticates well enough. Then it asks you which bits of data you want to sync. And then it sits there, rotating its hourglass-equivalent thing for ever and ever. It’s bug 51829, if you’re interested.
ATI graphics drivers work. Eventually. Sort of. One of my major issues with Fedora is that there are no official ATI graphics drivers available for it, because Fedora uses a very recent xorg version (as I mentioned last time). The good news is that ATI drivers are available for OpenSuse. The bad news is that their installation procedure is Byzantine, prone to failure (resulting in no X session at all, but unceremonious dumping at a command line), and liable to break at the drop of a hat. This morning, for example, I booted a VMware virtual machine that had virtual accelerated graphics and got a warning saying the drivers had crashed and would therefore be disabled for the duration. It was only a virtual machine affected, and it’s probably ATI’s fault, not OpenSuse’s, but it’s the sort of thing that leaves a nasty taste in the mouth. Or, again, take the fact that as I’m writing this post, my cursor has simply disappeared. Only to re-appear at a time and place of its choosing. Graphical weirdness like that I can do without, frankly. When the drivers are installed, however, I will admit: Stellarium displays and functions flawlessly, which is more than can be said of what’s possible on Fedora.
It all looks a bit weird. Yup, I agree that one’s a bit vague… but it’s the best I can do! The whole thing looks a bit ‘spidery’ for my tastes: the menu fonts are a bit thin and weedy, for example. In fairness, it could be said that the fonts were ‘precise’ and ‘sharp’… but they just look a bit thin and weedy to me!
Well, I could go on, but I don’t think I need to. It’s not that OpenSuse is a bad distro, you understand. Just that it’s peculiarly different in lots of niggly little ways from ‘standard’ distros -and I can’t see any real justification for the departures decided upon by the developers. Aside from the fiddly, niggly differences, there are quite a lot of just plain badly thought-out things (like the bazillion different ways to launch the same program) that really annoy me. I can tell I’m never really going to feel entirely at home with it, to be frank… so two days after installing it, it has to go.
Which leaves me in a bit of a quandary, I guess. With Fedora I can have sensible, default Gnome with a Stellarium that won’t work at all and a CD ripper that’s fundamentally broken. With Ubuntu, I can have everything apparently working, but in a “kiddies distro” kind of way. Or I can endure the peculiarities of OpenSuse and have an adult system with a broken menu structure, no Chrome synchronisation but a functional CD ripper and Stellarium. As they say, Linux certainly gives you lots of choices!