Leave well alone


ZFS not being a viable option on Fedora, I wanted to create a RAID5 array using mdadm, formatted with XFS.

There are lots of suggestions around the web about how best to do the XFS formatting to achieve optimal performance.

Essentially, the optimization process revolves around setting the stripe unit and stripe width. What the “correct” values are for those settings varies according to who you read (and, more pertinently, according to what you expect to store on your array: lots of little files, or a preponderance of very big ones, such as virtual machine images, movies and so on).

On the other hand, there’s a school of thought that says XFS is reasonably smart and if you just let it default to automatically-determined values, things will probably be ok.

So I thought I’d check it out.
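
Creating the array itself is standard mdadm fare; something along these lines does the job (the member devices, disk count and chunk size below are only examples, so adjust them for your own kit):

mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=256 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat          # watch this until the initial recovery/resync finishes
mdadm --detail /dev/md0   # the array should report itself clean before you benchmark anything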

With the array created and mdadm’s initial ‘recovery’ complete (so that no background I/O was taking place), I formatted it as follows:

mkfs.xfs -f -b size=4096 -d sunit=512,swidth=1024 -L bulkdata /dev/md0
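
For what it’s worth, the usual arithmetic behind numbers like these is: sunit is the RAID chunk size expressed in 512-byte sectors, and swidth is sunit multiplied by the number of data disks. Take, purely as an example, a three-disk RAID5 with a 256 KiB chunk:

# Example geometry only: a 3-disk RAID5 with a 256 KiB chunk
# sunit  = chunk size in 512-byte sectors       = 262144 / 512 = 512
# swidth = sunit x number of data disks (3 - 1) = 512 x 2      = 1024
# The same thing in the (arguably more readable) byte-based form:
mkfs.xfs -f -b size=4096 -d su=256k,sw=2 -L bulkdata /dev/md0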

And then I benchmarked the array using Gnome’s built-in Disks utility.
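
If you prefer the command line, a very rough, read-only cross-check of the sequential read figure looks like this (it won’t tell you anything about writes or access times):

sudo hdparm -t /dev/md0   # times sequential reads from the device for a few seconds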

The numbers were not too shabby: 145 MB/s read, 69 MB/s write and an access time of 15.5 ms.

Then I reformatted the array like so:

mkfs.xfs -L bulkdata /dev/md0
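
Incidentally, if you’re curious what mkfs.xfs worked out for itself, xfs_info on the mounted filesystem will show the sunit and swidth it settled on (the mount point below is just an example):

xfs_info /mnt/bulkdata    # look for the sunit= and swidth= values in the output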

Re-measuring the performance in the same way as before, these were the results:

This time, reads of 165 MB/s, writes of 74 MB/s and access times of 15.5 ms. So: significantly faster reads, moderately faster writes and access times about the same. I’d say letting XFS work it out for itself is probably as good a strategy as any, for my hardware at any rate.

I repeated the tests multiple times, and I also tried different formatting options, with different values for the stripe unit, stripe width and block size: I could certainly get worse results, but I was never able to improve on the ‘just let it work it out’ numbers.
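
Just to give a flavour of the kind of variations I mean (the numbers below are illustrative examples, not a record of every combination I tried):

mkfs.xfs -f -b size=4096 -d su=128k,sw=2 -L bulkdata /dev/md0   # a smaller stripe unit than before
mkfs.xfs -f -b size=2048 -d su=512k,sw=2 -L bulkdata /dev/md0   # smaller blocks, a larger stripe unit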

Your mileage may vary, of course. But me: I’m leaving my XFS array well alone and letting it sort itself out!