No matter how much I’d rather not run Windows, there are times I have to, principally because work insists on using Checkpoint’s VPN software, for which no Linux client exists. So, when I want to work from home, I have to connect to the office in a Windows 7 VM and use tools like PuTTY or NX Client to manage the various work PCs and servers (all of which are now, ironically enough, Linux boxes). It’s a pain, and if anyone knows how to use openssl or openvpn to connect to a Checkpoint VPN-1 SecuRemote VPN, I’d love to be let in on the secret!
Anyway, a Windows VM is essential, and for years I’ve been using VMware Workstation to run one. I paid my US$189 several years ago (interesting to see that price hasn’t budged a cent since!), and I’ve always found it just a fraction more intuitive and well-behaved than, say, Parallels or VirtualBox. VirtualBox has the distinct advantage of being free, of course, and is now owned by Oracle, which seems to be continuing development efforts quite nicely. But the fact remains, I’ve never really warmed to it: I’m just a VMware Workstation fanboy, I guess! (I stress the Workstation in that product name, however: I’ve never liked the zero-cost VMware Server product, since it seems to require clunky web-based interfaces to achieve anything much.) On the other hand, I got VMware’s ESXi bare-metal virtualisation installed at work and it’s never missed a beat, running all of our Oracle dev and test environments extremely well. (Though I will point out the irony that ESXi lacks a native Linux client, so I’m forced to use a VMware Workstation VM running Windows 7 on my Linux-running work PC just so I can manage the ESXi box, which is running a Linux kernel! Go figure!)
Anyway, I have dabbled in various virtualisation technologies in my time, both bare-metal hypervisors and host-based ones. Citrix XenServer, for example, was a good hypervisor, but a little inflexible to manage compared to VMware’s similar ESXi offering. Microsoft’s Hyper-V was certainly slick, but I had terrible performance issues in the presence of an Nvidia graphics card, and I wasn’t the only one: see, for example, this page of complaints. It’s been a year since I ran any Windows OS natively, either at home or at work, so I’ve not tried Hyper-V since, but according to this Wikipedia article (see the ‘Graphics issues on the host’ paragraph), the graphics problems persist (but who trusts Wikipedia?!). Funnily enough, using the Xen virtualisation features in Red Hat Enterprise Linux 5.5 is very similar to using Hyper-V: both installations slot ‘underneath’ your physical host’s OS install, turning it, effectively, into a virtualised guest (albeit a “parent” one). The moment Xen goes in, for example, a uname -a command in a terminal will reveal that you’re no longer running a standard Linux kernel, but a special “xenified” one (which poses all sorts of problems when you’re running proprietary graphics drivers that only ever expect to compile against ‘standard’ kernels, for example).
But there’s been one virtualisation technology I’d not used before now: KVM (which stands for ‘Kernel-based Virtual Machine’, not ‘keyboard, video, mouse’ as in a KVM switch!). As its name suggests, it’s built into the Linux kernel, and has thus been shipping as a standard part of Red Hat Enterprise Linux since 5.4 days (around about this time last year, basically). Fedora 13, too, includes KVM ‘out of the box’ (as do a lot of other distros, including Ubuntu). It’s not installed or enabled by default, but it’s right there, in the repositories, just waiting for a simple one-line installation command. What’s more, when you do install those KVM packages, unlike when you install Xen, you don’t end up altering the host OS’s status: uname -a still outputs exactly what it always did, in other words. This is simply because (the clue is in the name!) the hypervisor is already built into your existing kernel, so you don’t need a special kernel to make use of it. Not disturbing the host’s kernel in this way makes installing things like Nvidia graphics cards (see posts passim!) no drama at all, and is thus a Very Good Thing™.
Installing KVM on Fedora 13 is simple:
su - root
yum install qemu-kvm virt-manager virt-viewer python-virtinst
Once the libvirtd daemon is running, you can fire up Applications→System Tools→Virtual Machine Manager. Click the ‘new virtual machine’ icon in the top-left and then, basically, follow the prompts of the ensuing wizard to build your first virtual machine. And that’s about it! It’s really incredibly simple.
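If you prefer the terminal, the python-virtinst package installed above also provides a virt-install command that does the same job as the wizard. A sketch follows; the VM name, memory size, disk path and ISO path are all just example values, and the --os-variant only works if your version of virtinst knows about ‘win7’:

```shell
# Create a Windows 7 guest from an install ISO (run as root, with
# libvirtd already running). All names, sizes and paths are examples.
virt-install \
  --name win7 \
  --ram 2048 \
  --vcpus 2 \
  --os-variant win7 \
  --disk path=/var/lib/libvirt/images/win7.img,size=20 \
  --cdrom /path/to/windows7.iso
```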
The only tricky bit comes if you want your new VM to look like an independent host on your network. That requires “bridged” networking, which doesn’t exist until you manually create it (it would be nice if someone were to develop a graphical tool for achieving this!). Worse, bridged connections don’t work with the fancy new NetworkManager way of doing networking that Fedora (and Ubuntu, actually) has adopted. So, if you want bridged connections for your VMs on those distros, here’s what you have to do:
As root, issue the command
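I’m assuming the tool being described here is Fedora’s graphical network configurator, in which case the command is:

```shell
# Launch the graphical network configuration tool (run as root).
# Assumption: this is the tool the eth0-editing steps below refer to.
system-config-network
```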
Find the eth0 item and click the Edit button. Switch ‘Controlled by NetworkManager’ off, and switch both ‘Activate device when computer starts’ and ‘Allow all users to enable and disable the device’ on. Click OK and then File→Save to preserve the changes.
You’ve now disabled the new-fangled NetworkManager, so you have to make sure the old-fashioned network service starts at each reboot instead:
chkconfig network on
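It can’t hurt to also stop the NetworkManager service itself and keep it from starting at boot, so the two don’t fight over eth0 (an extra precaution on my part, not strictly required by anything above):

```shell
# Stop NetworkManager now and prevent it starting at the next boot
service NetworkManager stop
chkconfig NetworkManager off
```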
You now create a new bridge network interface by issuing the command:
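Assuming Fedora’s standard network-scripts layout (the exact path is my assumption), that command would be something along these lines:

```shell
# Create the bridge definition file in the usual Fedora location
vi /etc/sysconfig/network-scripts/ifcfg-br0
```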
Add the following lines to the new text file thus created:
The typing here has to be precise: it’s very case-sensitive, for example, so ‘bridge’ as a TYPE entry won’t work, where ‘Bridge’ will!
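For reference, a minimal ifcfg-br0 consistent with the description above might look like this (I’m assuming DHCP; substitute your usual static-IP directives otherwise, and note the capital ‘B’ in Bridge):

```
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
```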
You now tell the eth0 interface that it is to be bridged. Do that by issuing the command:
Add the following line to the file’s existing contents:
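Again assuming the standard location, that means editing /etc/sysconfig/network-scripts/ifcfg-eth0 and appending one directive:

```
# Appended to ifcfg-eth0: hand this interface over to the br0 bridge
BRIDGE=br0
```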
Now you can re-start the network so the new configuration is activated:
service network restart
Note that your physical PC now connects to the rest of the world via the br0 interface, which happens to know (thanks to the edits above) that the physical eth0 is responsible for carrying its traffic. As far as your physical PC is concerned, though, eth0 is no longer an active interface in its own right: br0 takes over that role, though functionally it all amounts to the same thing.
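You can see this for yourself after the restart: the machine’s IP address should now be attached to br0, with eth0 showing no address of its own:

```shell
# Expect the IP address on br0, and none on eth0
ip addr show br0
ip addr show eth0
```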
Finally, the trouble with this setup is that br0 is an interface seen and used by your physical PC, and that’s not much use to a virtual guest machine! So now we have to add a virtual interface and attach it to the bridge, and that’s a job for a utility called tunctl. That utility probably needs to be installed to start with, so the relevant command is:
yum install tunctl
Next, issue these commands in sequence:
tunctl -t tap0
brctl addif br0 tap0
The first command creates a virtual ‘tap’ interface called tap0; the second attaches tap0 to the br0 bridge, alongside eth0, so that guests plugged into it can reach the outside world.
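A quick brctl show will confirm the plumbing: you should see br0 listed as a bridge, with both eth0 and tap0 among its attached interfaces:

```shell
# List all bridges and the interfaces attached to each
brctl show
```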
Once all that’s done, you can go back to virtual machines you’ve already created and add new network hardware; this time, a bridged interface will be available to you. You can remove the previous NAT one, if you like (or simply disable it within the guest OS). New guests, obviously, can be created to use the right sort of ‘let me at the world!’ interface from the get-go.
One final bit of advice as far as KVM experiments are concerned: having to start libvirtd manually before you begin is a bit of a pain. If you want to ensure libvirtd is started automatically whenever your PC reboots (and thus avoid the need to run it manually in a terminal session), just go to System→Administration→Services and click the libvirtd item, then the [Enable] button. Once it has a green check mark next to it, it’s scheduled to auto-start.
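The same thing can be done from a terminal, if you prefer:

```shell
# Start libvirtd now, and have it start automatically at every boot
service libvirtd start
chkconfig libvirtd on
```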
Apart from the bridged networking issue, however, KVM is an absolute doddle to install, configure and run. Performance in the Windows 7 virtual machine I use is excellent; the only drawback is that the virtualised graphics hardware isn’t up to displaying the fancy, semi-transparent Aero interface. But that’s not much of a problem for me. I miss only two other things from my VMware Workstation days: movie capture and snapshots. KVM provides a menu option to take a still screen capture of your guest, which is fine, but it doesn’t have the option to capture screen motion/activity as a movie (something the freebie VMware Server product also lacks). There are workarounds, of course (yum install recordmydesktop puts a movie-capturing application at your disposal which will more or less do the job), but it would be nice to have the functionality built in.
The lack of snapshots is a bit more of a drama, to be honest. There are snapshot capabilities that can (probably!) be used, thanks to the qcow2 virtual hard disk format: you’re supposed to be able to drop into a terminal and issue a qemu-img command that will do the necessary. But I haven’t tried it, I believe it only works for a VM that’s been shut down… and in any case, it all sounds a bit tricky at this stage. I’m really after a ‘take snapshot’ button in the Virtual Machine Manager window, to be honest! Meanwhile, there is a simple button to do VM cloning (though, again, the VM has to be shut down for the duration), which will do me well enough. But this is certainly an area of VM management where it would be nice to see some development in the next year or two!
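For the brave, the qemu-img incantations look like this (the disk path is just an example, and, as noted, the VM should be shut down first):

```shell
# Create, list, revert to, and delete a snapshot of a qcow2 disk image
qemu-img snapshot -c before-sp1 /var/lib/libvirt/images/win7.qcow2
qemu-img snapshot -l /var/lib/libvirt/images/win7.qcow2
qemu-img snapshot -a before-sp1 /var/lib/libvirt/images/win7.qcow2
qemu-img snapshot -d before-sp1 /var/lib/libvirt/images/win7.qcow2
```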
Other than those slight niggles (oh, there’s one more: no drag-and-drop between host and guest), I think KVM is an excellent virtualisation platform, and my trusty copy of VMware Workstation has remained firmly on the bookshelf throughout this PC’s recent rebuild.