Bulk Image Resizing in Linux

This is even easier than adding a watermark! If a directory is filled with, say, 30 PNG images and you want them all resized so that they’re all the same width (let’s say 600 pixels wide), then all you do is:

cd /name/of/directory/with/original/images
mkdir resizedones
mogrify -path ./resizedones -resize 600 *.png

It doesn’t get much easier than that! Mogrify is another of the utilities provided by ImageMagick and it works on any image format ImageMagick itself supports (so doing the above with a final *.jpg will work just as well… assuming your source images are actually JPEGs, of course!). If you want the originals replaced by the resized versions (rather than, as above, being written out to a new directory), just lose the -path bit. This command, in other words, does an “in place” resizing, such that the originals are irretrievably lost:

cd /name/of/directory/with/original/images
mogrify -resize 600 *.png

The mogrify command takes loads more arguments and can be used to do stacks more transformations, but that’s all I need it for right now!
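
A few of the variations I know mogrify also understands, should you need them (check man mogrify for the full list; the resizedones directory is just the one created above):

# fit images to a height of 600 pixels instead (width scales to match)
mogrify -path ./resizedones -resize x600 *.png

# shrink everything to half its original dimensions
mogrify -path ./resizedones -resize 50% *.png

# resize and convert to JPEG at a given quality in one pass
mogrify -path ./resizedones -format jpg -quality 85 -resize 600 *.png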

Bulk Watermarking of Images in Linux

Here’s a simple requirement: take a directory of 30 or so screen captures (and therefore all about the same size -in this case, 600×499 pixels) and slap a copyright notice on them. That’s painful fiddling, repeated thirty times over, if you’re using a GUI image editor (like GIMP) -and you’ll likely end up with the notice in slightly different places on each image, thanks to the ‘hand drawn’ nature of the process. But it all becomes a doddle if you’ve got access to ImageMagick’s convert utility! It’s do-able with the following simple command (all on one line):

convert -pointsize 10 -fill grey80 -draw 'text 240,485 "Copyright © 2010 Diznix.com"' screenshot01.png screenshot01.png

The basic command is convert.

The pointsize switch says how big the text of the ‘watermark’ will be.

The fill grey80 switch specifies the colour of the text that will be placed on the image. Grey80 is a very light shade of grey; grey10 is practically solid black. A complete list of colour names that can be used here can be obtained by issuing the command showrgb.

The draw switch takes a single-quoted argument that says what text will be drawn on the image. Within the pair of single quotes, you get three different elements. First, you say you’re drawing text. Next, you say where the text should be added to the image, counting from the left edge rightwards and top edge downwards, measured in pixels. So, my text in this example will be written 240 pixels in from the left, and 485 pixels down from the top. Since my image is 600 pixels wide and 499 pixels tall, my text will be written in the middle-ish of the very bottom of the picture. Finally, you write the text you actually want added to the image, enclosed in double quotation marks.

Two file name arguments complete the command. The first is the ‘input image’ name, the second is the ‘output image’ name. If the two are the same, as they are here, you’ll end up adding the text to the source image (a process which cannot therefore be reversed). If the second name is different from the first, you get the image-plus-text output as a new file, leaving the text-less original untouched.
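
Incidentally, if your images aren’t all the same size, you needn’t hard-code the coordinates: ImageMagick’s identify utility will tell you each image’s dimensions, and a little arithmetic does the rest. A rough sketch (the 40% and 14-pixel offsets are just my example’s proportions generalised):

# work out 'bottom middle-ish' from the image's actual dimensions
IMAGE="screenshot01.png"
WIDTH=$(identify -format '%w' "$IMAGE")
HEIGHT=$(identify -format '%h' "$IMAGE")
X=$(( WIDTH * 40 / 100 ))   # 240 out of 600 = 40% in from the left
Y=$(( HEIGHT - 14 ))        # 485 on a 499-pixel-tall image = 14 pixels up
convert -pointsize 10 -fill grey80 \
  -draw "text $X,$Y \"Copyright © 2010 Diznix.com\"" "$IMAGE" "$IMAGE"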

And here’s the outcome of all of that. This is the original:


And here’s the processed version, with a discreet copyright ‘watermark’ added:

Job done! Not only is it relatively simple to do, it’s precise, to the pixel, as to where the text notice is placed on the image. So, repeat as many times as you like on different images, and so long as that “240,485” bit (in my example) doesn’t change, the notice will always end up in exactly the right spot, image after image. All that remains is to wrap the thing up into a bash script loop so that all 30 images can be processed in one hit. The following is a trivial way of doing that, neither refined nor particularly original:

#!/bin/bash
echo -e "All images in this directory are about to be over-written."n"Are you sure want to continue? {Y/n}"
read REPLY
if
  [[ $REPLY = "Y" || $REPLY = "y" ]]; then
  file -i * | grep image | awk -F':' '{ print $1 }' | while read IMAGE
    do
      echo "Watermarking $IMAGE"
      convert -pointsize 10 -fill grey80 -draw 'text 240,485 "Copyright © 2010 Diznix.com"' "$IMAGE" "$IMAGE"
    done
else
  echo exiting
  exit 0
fi
exit 0

Save that somewhere appropriate (like /usr/bin) with a name like watermarkme.sh and change its permissions to allow execution (chmod 775 /usr/bin/watermarkme.sh, for example), and then you can invoke the script at will:

cd /home/hjr/Desktop/screengrabs
/usr/bin/watermarkme.sh

Thank heavens for ImageMagick (and the command line)!

Drobo No-No

I wrote a few days ago that I’d had trouble getting a second Drobo storage device that actually worked. Well, it was eventually replaced and now works fine (except that one of its LEDs is blown, but at least the storage side of things seems functional at last).

I also mentioned back then that I’ve owned my original Drobo for getting on for a year, and that it’s never put a foot wrong -which is absolutely true, and is the reason the malfunctioning second one was such a disappointment.

I would like to take this opportunity, then, to mention that even that original Drobo is behaving really badly when it comes to a disk re-organization. On Monday, I swapped out one of my 1TB drives for a 2TB one. This causes the Drobo to shuffle around the 2.3TB of data currently stored there until it’s all protected once more. No problems with that: it’s an online affair (i.e., the data stays accessible throughout) and is precisely what the Drobo is supposed to be good at.

It’s just a little disturbing that it’s now Wednesday night and the thing is still re-organizing itself. What’s more, the Dashboard application that tells you where it’s up to is showing 0% progress and warning me that it expects to have finished in about 98 hours’ time. If that’s true (and quite frankly, I have no idea if it is or isn’t), it will mean the thing has spent the best part of a week shuffling around, internally, just 2TB of data. Asthmatic snails have been known to move faster, I fear. My friend Google has drawn to my attention the fact that this sort of behaviour isn’t new, either!

Again, I stress (as do the Drobo website articles that mention it might take “a few” hours to reorganize!) that my data is accessible while all this is going on. But to have one’s data unprotected for that length of time (it’s only fully protected once the thing has finished its reorganization) is, I think, unacceptable. One hard disk failure now, and I lose everything.

Worse, the Drobo website emphasizes that you should not power down the Drobo during a reorganization… and we have a giant thunderstorm predicted to come our way tomorrow evening. They quite often knock out the power, so you can imagine the amount of nervousness here at the moment!

Couple this technical deficiency (as I see it) with their mostly-hopeless technical support and I would have to say that, despite a year’s unproblematic storage duties, I would never touch a Drobo again. It’s lovely at just sitting there storing stuff, true enough. But when it comes to doing the one thing you actually need it to do without drama (i.e., deal with a disk failure or upgrade), it doesn’t cope at all elegantly or efficiently.

Since Drobo Number 2 was purchased for a friend of ours and has thus left the building, I am still in the market for a storage device that can handle 4TB and up, extensibly and dynamically, because I want a backup for my Drobo, dumb as that might sound given that the Drobo itself was purchased to be our secure and safe storage device! Such a storage device needs to dynamically extend its storage ‘pool’, but hopefully without days of unprotection whilst doing so.

I have thought about building my own PC and sticking ZFS on it (which does all of that sort of thing very nicely), but it’s a little too ‘low level’ for my tastes! So now I’m looking at this. The Thecus has all the right specs (for me at least) and an attractive price (about AU$390… compared to the Drobo’s AU$550 and up). It, of course, doesn’t have the looks (Drobo is to Mac as Thecus is to mid-1990s beige PCs, I think!), but it looks like it might have the functionality I require. And, best news of all, it’s nearly Christmas, so I don’t have to put up a watertight business case to get one… I just have to ask!

Should Santa do the honours this year, I’ll be sure to document how I get on with it.

Gmail Consolidation

I appear to have acquired quite a few email accounts over the years, most of them Gmail-based ones. It would be nice to have them consolidated into the account I use currently… and, happily for me, it’s not only nice but easy to do. It basically involves two simple pieces of configuration, one on the original account and one on the new, like so:

Log onto the old Gmail account.

Click Settings and select the Forwarding and POP/IMAP tab:

Ensure that POP is enabled for all messages. Since I’m effectively killing this old email account off completely, I also make sure that when messages are accessed with POP, the delete Gmail’s copy option kicks in. The default option is to ‘archive’ them instead, which is fine (it will still look like the old account’s been cleared out when you casually glance at it), but deleting them completely makes more sense to me.

That completes the setup for the old account.

Now you switch to the new account (it helps to have a couple of different browsers for this, so that you can be logged into one account in one (say, Opera) and logged into the other in, say, Firefox!). Again, go to Settings. This time, go to the Accounts and Import tab. In the block for ‘check mail with POP3’, click the Add POP3 email account button. Type in the old email address you’re aiming to import and click the Next step button. Now you fill in the form with the appropriate details:

Since it’s an old Gmail account we’re transferring from, the defaults are fine: port 995 on pop.gmail.com (not smtp.gmail.com -that’s for sending), using a secure connection. Again, there’s an option to ‘leave a copy of a retrieved message on the server’ (i.e., on the old account), but it’s off by default and I think that makes sense. I suspect this option over-rides the ‘delete or archive Gmail’s copy’ option I set earlier, but it does no harm to make sure both options, on either end, are set to achieve the same thing!

Click the Add Account button, and you’ll be asked whether you ever want emails from your current account to be addressed in such a way that they look as if they came from the old one:

As you can see, I think the appropriate answer here is ‘No’, since I’m effectively trying to close down a gmail account, not make it appear to still have life in it!

Otherwise, that’s about it, really. Click Finish to complete the process and the new account will (eventually -it can take many, many minutes) poll the old account for its contents, copy them across and delete them from the source. Net result: old account transferred to the new (at least, as far as emails are concerned. I don’t have contact details dotted about the place, so I had/have no interest in getting those transferred across).

I suppose that really only leaves one other question: is it possible to close a Gmail account completely, for ever? After all, once the old emails are safely across to the new account, what’s the point of keeping the old one around? (Some of mine go back to 2005 or earlier and haven’t received anything other than spam for ages, so it’s not like anyone will mind that the old address ‘dies’.)

Well, the answer to that one is again quite simple: yes, you can delete an old gmail account, and here’s how.

First, log onto the old account as before. Then click Settings, and the Accounts and Import tab. Down the bottom, click the link to Google account settings (found in the ‘Change account settings’ section). A new page will appear:

Click that “edit” link, next to the “My products” heading. You’ll now see something like this:

So, you’re now close! Click that last link (“Close account and delete all services”), and you’ll then have to confirm that you know what you’re doing:

You’ll need to switch on a check mark next to each item in the list, provide your password, and finally click the Delete Google Account button. You’ll then see a fairly curt message to confirm that your account has really been deleted… and that’s the job done. Once you log out, you’ll not be able to log back in with that username, for it has ceased to be!

There’s just one other minor detail to worry about… back in your new or current Gmail account, you’ll still have a ‘please import via POP3’ entry that points to the just-deleted account. You can go and delete that now, since you won’t need to import anything further from something that no longer exists! (The good thing about doing this bit of tidying up, too, is that you’re only allowed to do a POP3 import from a limited number of accounts. Having one in the list that isn’t actually functional means you’re taking up one of your slots for no good reason. So, housekeeping!)

Repeat all the above as necessary until all those old, spare and redundant Gmail accounts no longer exist!

After that, you’ve only got your non-Gmail old accounts to worry about… but that’s a story for another post. :-)

Oracle 11g Network Access Control Lists

This has tripped me up a couple of times, so it’s time to document the workaround!

In 10g, you might have written a bit of PL/SQL that invokes the package utl_smtp so that the database sends you an email about something -and, although the code would definitely have ended up quite ugly (utl_smtp is like that!), it would have been entirely functional. Upgrade to 11g, however, and that same code will now error out with an ORA-24247 network access denied by access control list (ACL) message. Utl_smtp is not the only package affected, either: utl_http, utl_tcp, utl_mail and utl_inaddr are all similarly affected. If you wrote code that referenced any of these packages, it would break in 11g having worked fine in 10g.

The reason for the breakages is that 11g introduced tighter security on access to all networking services and this security is enforced by Access Control Lists (ACLs). The ACL-based security works independently of package grants… so although a user might have execute permissions on utl_smtp, if he doesn’t also have an ACL allowing him access to the smtp networking service, he’s not going to be sending emails anywhere.

In the right context, this extra security is a great thing… but in the context I work in, it’s a royal pain in the butt! I just want my old code to work -and I don’t want to have to do battle with another layer of user privilege management to get it to do so.

Here, then, is the “can we please pretend we’re running in 10g again” fix. It blows the whole ACL idea out of the water by simply creating an ACL that says everyone can access every networking service, no questions asked. Run it as SYS:

begin
  dbms_network_acl_admin.create_acl (
     acl          => 'networkacl.xml',
     description  => 'Allow Network Connectivity',
     principal    => 'PUBLIC',
     is_grant     => TRUE,
     privilege    => 'connect',
     start_date   => SYSTIMESTAMP,
     end_date     => NULL);

  dbms_network_acl_admin.assign_acl (
     acl         => 'networkacl.xml',
     host        => '*',
     lower_port  => NULL,
     upper_port  => NULL);

  commit;
end;
/

The first bit of code simply creates an ACL and grants rights to it to PUBLIC. The second bit says that the ACL just created doesn’t restrict hosts or port ranges (which effectively means it’s restricting stuff-all). Your old 10g code will now run just fine in 11g once more.

Obviously, this isn’t subtle and there are good reasons why Oracle tightened up the security surrounding access to these networking services… which my code simply ignores and pretends aren’t an issue. So this is not exactly “world’s best practice”! But in the right situation (basically, one where you are in a hurry to have things behave as they did in 10g and never mind the implications), then this code will do the trick.
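
Should you later want something half-way sensible rather than this blunderbuss, the same package lets you scope things right down. Here’s a sketch of a more targeted ACL, wrapped in a sqlplus call -the ACL file name, the APP_USER principal and the smtp.example.com host and port are all made-up examples, so substitute your own:

sqlplus / as sysdba <<'EOF'
begin
  -- grant network access to one named user only (example principal)
  dbms_network_acl_admin.create_acl (
     acl          => 'mail_acl.xml',
     description  => 'SMTP access for APP_USER only',
     principal    => 'APP_USER',
     is_grant     => TRUE,
     privilege    => 'connect');

  -- ...and only to one host and one port (example mail relay)
  dbms_network_acl_admin.assign_acl (
     acl         => 'mail_acl.xml',
     host        => 'smtp.example.com',
     lower_port  => 25,
     upper_port  => 25);

  commit;
end;
/
EOF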

Fonty Goodness

I wrote recently about getting good fonts, like Gill Sans and Frutiger, onto Linux -though, strictly speaking, you can’t do it without butting up against licensing restrictions. So it’s nice to be able to point out that Google can be your fonty friends!

In their code warehouse, Google have an area that is all about providing good fonts for free. Click on any of the fonts listed and you’ll see examples of it in use, information about the font in question and, most importantly, under the Get the code tab, down at the bottom of the page, a Download the font link. Save the fonts to your desktop, right-click and open them with Debian’s Font Viewer… and click Install from within there.

The fonts available aren’t spectacularly good, but Molengo is a distant relative of Gill Sans (eye-glass ‘g’, for example). Inconsolata is a handy replacement for Courier (or the Windows-only Lucida Console, for example). And Neuton is an interesting serif typeface with a little more rounding and a bit more fun than your usual Times Roman! Best of all, perhaps, they can be incorporated into any webpage without your viewers needing to have them installed. Good old CSS links ensure that the font is retrieved dynamically should a visitor not already have it installed.

But it’s nice to be able to download and install them legitimately on Linux, too. Kudos to Google for that!

Quickie NAS

I never actually used Windows Home Server (WHS), but I thought about doing so often enough. Its killer feature (for me)? The ability to plug in different disks, of different sizes, from different vendors (and even using different interfaces -I have a lot of old PATA drives kicking around!), and have them appear to the rest of the world as one large storage ‘pool’, with in-built redundancy. This was called ‘Drive Extender’… and has just been removed as a feature from the new Version 2 Home Server product. It seems a bit of a weird decision on Microsoft’s part, removing one of the two key product differentiators that made Home Server special. It wouldn’t surprise me to see the entire thing killed off, to be honest.

Anyway, the reason I no longer care too much about WHS is that I have my own way of doing networkable, extensible bulk storage: a Drobo with a WDTV Live media player.

My particular “first generation” Drobo only takes 4 hard disks -newer and more expensive ones can take up to 8. But using 2TB drives, that still means 6TB of usable storage (4 x 2TB, minus 2TB used for data protection), which is enough to be going on with. If 3TB and 4TB drives ever make an appearance, I’ll probably be a firmware update away from being able to increase my protected, usable storage space accordingly. Regardless, you can stick any combination of SATA drives into the Drobo you happen to have handy, and swap out smaller drives for bigger ones as your storage needs grow (and as your wallet finds it can cope). There’s no networking capability (you’ll need to pay stupid money for the Drobo Share, or the Drobo FS, to get that), but you do get extensible, protected, set-and-forget storage that more-or-less just works (see below for the ‘more-or-less’ bit!).

The WDTV Live is a good media player. I had a plain old WDTV before the ‘Live’ version came out, and the upgrade gets you a networked media player. Set it up with an IP address, plug in an ethernet cable and it immediately makes itself visible as a Windows (well, Samba, anyway!) share on the local network. Other than that capability and a slightly slicker front-end, there’s not a lot of difference: the thing is still capable of playing just about any media format you throw at it, has no problem with High Definition content, has a lovely ‘ten foot interface’ that anyone can drive within seconds… and just works, beautifully.

Stick these two products together, then, and what do you have? Basically, the Drobo just plugs into the WDTV Live via USB, is then seen as a single giant volume full of multimedia files… and the contents of that volume are then shared around the rest of the network, thanks to the Samba-sharing nature of the WDTV. When I rip a new CD on my PC in the study, therefore, it’s trivial to copy the output to the Drobo sitting under the TV on the other side of the house, despite the Drobo not having ‘intrinsic’ networking of its own. So what you actually end up with is a NAS that does excellent duty as a media server and player. Someone should design a product that includes both bits of functionality in the one box!

The networked Drobo FS costs about AU$850. The standalone Drobo Share costs AU$300. Neither would be able to play a bean on my TV! My non-networked Drobo cost AU$599, and the WDTV Live cost a further AU$189… so I end up with viewable, networked, extensible, protected storage for AU$788 instead of non-viewable, etc, etc for AU$850 – AU$900. (Hard disks cost extra, of course).

I’d thoroughly recommend the WDTV Live… it’s really plain sailing to use, and you couldn’t get a more capable, simpler media player. We ditched the original WDTV player a year or so ago for the joys of Windows Media Center running on a spare PC… but the usual Windows problems meant that experiment turned into something of a disaster (crashes, driver problems, forever updating etc etc). We were so pleased to be able to junk the complexities of Windows for the highly-functional simplicities of the WDTV once more!

I wish I could recommend Drobo quite so unreservedly. If you’d asked me three months ago, I would have done. But since then, I made the mistake of purchasing a new one for an elderly friend. I mention the ‘elderly’ bit because his requirements, above all, are for something that simply works, without fuss, bother or the need for constant fiddling and tweaking. He is a very non-technical person, and his movie collection needs to be safe without him having to think about it. A pity, then, that the Drobo unit I purchased for him turned out to be defective: it didn’t work at first, it then worked long enough to copy a couple of terabytes onto it, and then it decided not to work again once it had been plugged into a different PC. It would hang during its boot sequence; it would declare it couldn’t find some disks, then decide it could see those after all but now couldn’t see the ones it had no problems seeing before; it would not be detected by Windows 7 at all, and then it would be detected without a problem, until you rebooted the PC -after which it would revert to being undetectable. It was bonkers, frankly. Precisely what you don’t want when you buy ‘safe, protected, reliable storage’!

Naturally, you get the odd lemon turning up whenever you take the hardware-purchasing plunge, but I can tell you: getting one lemon makes you have second thoughts about the earlier purchase that has never put a foot wrong! It just undermines your confidence in the product as a whole, in short. And it doesn’t help that their “support desk” has the same senseless, robotic and dumbed-down attitude that all support desks seem to go for these days. All I wanted was a returns authorisation number. Instead, I get asked to produce a diagnostic log. Fair enough: I try to do that, and I can’t because the unit is completely unresponsive. I tell them this. They reply not with a ‘Jeez, it’s screwed then!’ but with a ‘well, can you try using the Firewire port instead of the USB one’! I don’t even have a Firewire port, I point out. Well, I’ll need to escalate this to the next level of support, I am told. No you won’t, I say… either authorise the return right now, or I start consulting lawyers. At which point, the return was authorised without further comment!

I don’t like the fact that I had to wrestle with them like that. The second I couldn’t produce a diagnostic log because the unit had hung, they should have authorised the return. The suggestion to plug it into a Firewire port reflects the fact, I think, that Drobos are very popular in Apple Mac circles… and I imagine the dumbed-down, treat-you-like-a-moron, have-you-tried-turning-it-off-and-on-again style of support is designed to cater to that particular type of audience. It didn’t do anything to endear me to them or their product, though! (Can you tell??!)

Anyway, I’d like to say that the guy who actually sold me the thing couldn’t have been nicer or more solicitous: he’s gone out of his way (literally: he turned up at the office today to pick the defective unit up personally) to see me right with a new Drobo that works properly. Time will tell on that score, I guess, but I can’t really fault his efforts thus far. Meanwhile, my own, original Drobo sits there quietly under the telly doing sterling service without the slightest issue. So yes, on balance, I would still recommend it. Just make sure you get an excellent vendor -and don’t waste too much time with their useless technical “support”. My vendor, by the way, who comes highly recommended, is Ross at Ineedstorage.

VMware Workstation Glitch

I opted to install various updates today to my Debian “Squeeze” PC. I’m afraid I don’t know precisely what updates they were, because (like a lot of people, I suspect!) I didn’t bother to pay a lot of attention to the list of what was about to be done to my system. Whatever it was, where I was happily VMware-ing before the update (and for hours afterwards), after I’d rebooted the PC, I couldn’t start the virtual machine any more: the program kept complaining about “couldn’t find /dev/vmmon”, and when I manually tried to modprobe it, it came up with all sorts of errors indicating kernel troubles.

There being no obvious way to fix things (Google doesn’t help much on this, with all suggestions I saw involving re-running vmware-config.pl, which simply doesn’t exist on my system for some reason!), I resorted to the crowbar method of software maintenance: I ran /usr/bin/vmware-installer -u vmware-workstation and got the software uninstalled. Then I re-ran sh VMware-Workstation-Full-7.1.0-261024.x86_64.bundle to reinstall it… and after the re-install, it all worked as fine as before.
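
For reference, the whole crowbar method boils down to just two commands (run with root privileges, and with whatever bundle file name your own download came with, of course):

# uninstall VMware Workstation...
sudo /usr/bin/vmware-installer -u vmware-workstation
# ...then reinstall it from the original bundle
sudo sh VMware-Workstation-Full-7.1.0-261024.x86_64.bundle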

God knows what the trouble was, but let that be a lesson to you: don’t update (especially on a testing branch of Debian!) unless you really need to!

Before I decided on the re-install route, I did dabble with the idea of giving up altogether and using VirtualBox once more. The open source version is available in the Squeeze repositories, and I figured that if it was there, then any updates affecting those repositories are likely to leave VirtualBox at least in working order. Of course, I wouldn’t want to have to rebuild my VMware Workstation virtual machine… a lot of software and OS updates have gone on in it.

So is it possible to open a VMware Workstation VM in VirtualBox?

The answer to that is, ‘yes, very easily’:

  • Make sure your VMware virtual machine has no snapshots. Delete them if it does.
  • Start VirtualBox OSE (Applications → System Tools → VirtualBox OSE)
  • In File → Virtual Media Manager, click Add. Point to the .vmdk file of your VMware virtual machine. Double-click to select and add it, then click OK to close the media manager window.
  • Click the New button to create a new Virtual Machine, fill out the memory details as you think appropriate. Select the Use existing hard disk option when shown, and select the .vmdk file from the combo box.
  • Boot the new VirtualBox virtual machine as normal. Windows will, of course, have an absolute fit at the amount of hardware you’ve changed, but if you give it long enough, it should come good as it detects it all and re-configures itself accordingly. You may need a couple of reboots before everything is detected and installed perfectly.
  • Don’t forget to install the VirtualBox Guest Additions so that you get proper video/mouse handling for the new environment.
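
If you’d rather script those steps than click through them, VBoxManage can do the same job. The following is only a sketch: the VM name, memory size and .vmdk path are placeholders, and it assumes a VirtualBox recent enough to have the storagectl/storageattach commands:

# register a new VM and attach the existing VMware disk to it
VM="Windows XP"                               # your VM's name (placeholder)
VMDK="/path/to/vmware/machine/disk.vmdk"      # your VMware disk (placeholder)

VBoxManage createvm --name "$VM" --ostype WindowsXP --register
VBoxManage modifyvm "$VM" --memory 1024       # pick a sensible RAM size
VBoxManage storagectl "$VM" --name "IDE" --add ide
VBoxManage storageattach "$VM" --storagectl "IDE" \
  --port 0 --device 0 --type hdd --medium "$VMDK"
VBoxManage startvm "$VM"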

When I first did this, the machine would simply boot to a blank screen, sit there doing nothing and otherwise just die. Rebooting a couple of times, the same thing happened each time. When I checked the logs for the VM (found in my /home/hjr/.VirtualBox/Machines/Windows XP/Logs directory), I saw I was getting the error message: BIOS: int13_harddisk: function 15, unmapped device for ELDL=81. I had a bit of a guess at this one: powering down the VM, I went into Settings → System and switched on the option to Enable IO APIC. Once I booted the VM once more, it worked perfectly.
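
(That Enable IO APIC switch, incidentally, can also be flipped from the command line, with the VM powered down -the VM name is a placeholder as before:)

VBoxManage modifyvm "Windows XP" --ioapic on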

One other gotcha: the networking in the VMware machine was fine, but the guest OS when running in VirtualBox declared that no network was available. The reason for that is that my VMware machine had been built with Bridged networking, but VirtualBox creates new VMs with Network Address Translation (NAT) networking instead. Power the guest down and edit the Settings → Network to switch to the Bridged Adapter option.
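
The same networking change can be made with VBoxManage, too -the eth0 here is whatever your host’s network interface is actually called:

# switch the VM's first NIC from NAT to bridged mode (power the VM down first)
VBoxManage modifyvm "Windows XP" --nic1 bridged --bridgeadapter1 eth0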

One quirk that is probably peculiar to my setup: my VM has Checkpoint Secure Remote installed on it (it’s the VPN I use to connect to work, since they don’t seem to be able to run to a VPN that has a Linux client!). That prevented my new network card from functioning properly -but once I’d uninstalled Checkpoint and rebooted, the network card was detected fine. Then I was able to re-install Checkpoint and connect to work as normal… yet another instance of crowbar software maintenance, I’m afraid, but at least it worked (again)!

Not too painful, all things considered -though I’d suggest trying it out on a copy of your VMware virtual machine before you let rip on the real thing! Getting networking, er, working was probably the hardest part of the entire affair.

The extent of the hardware changes the guest OS has to deal with is enormous, so it was frankly a bit of a surprise that Windows XP coped… whether later versions of Windows would is a matter for speculation, I guess. I would expect such a degree of hardware change to prompt re-activation of Windows, though (the reason I tend to stick with XP in my VMs: no activation required, at least for my ancient installation disk!).

There is an alternative to this approach, which involves using qemu and stuffing around actually transforming a .vmdk into a native .vdi file via a .bin intermediate file, but there’s really no need, so that’s not an option I’ve experimented with.
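
For the record, my understanding of that alternative route (untested here, so very much a sketch) is something like:

# convert the VMware disk to a raw image, then to a native VirtualBox .vdi
qemu-img convert -f vmdk -O raw mydisk.vmdk mydisk.bin
VBoxManage convertfromraw mydisk.bin mydisk.vdi --format VDI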

I’ll probably stick with using VMware Workstation (having paid for it, after all!), but there’s something to be said for having your virtualisation technology available from the repositories… so we’ll see!