Well, not solved exactly. But ‘explained’ at least! (See the previous post for a description of the network speed problem that was annoying me).
Once my initial data copy from Server 1 (using SATA legacy mode) to Server 2 (using AHCI mode) had completed, I finally got around to implementing the normal sorts of data safety checks you do with ZFS, including scrubbing my volumes (procedures which, incidentally, I have now written up as a new Solaris article).
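For anyone who hasn't driven a scrub before, starting one and checking on it are both one-liners (safedata is my pool's name, naturally; substitute your own):

# Kick off a scrub of the pool; it runs in the background:
zpool scrub safedata

# Check on its progress at any point:
zpool status safedata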
At this point I noticed something. Here’s server 2’s scrub progress report (AHCI):
root@server2:~/logs# zpool status safedata
  pool: safedata
 state: ONLINE
  scan: scrub in progress since Mon Apr 25 16:56:31 2016
    734G scanned out of 7.32T at 360M/s, 5h20m to go
    0 repaired, 9.79% done
config:

        NAME        STATE     READ WRITE CKSUM
        safedata    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
            c3t2d0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0

errors: No known data errors
The important bit as far as this issue is concerned is the scan line: my zpool is managing 360M/second, and that's 360 megabytes per second, which isn't bad for a 4-drive array. An individual SATA disk ought to be capable of around 100MB/sec, so 360MB/s works out at roughly 90MB/s per disk, and that's especially respectable when you take into account that a scrub is doing a lot of computational work (checksumming every block it reads) that has nothing to do with disk I/O.
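Incidentally, if you want to see how that load is spread across the individual drives while a scrub runs, Solaris's iostat will show per-device throughput; the 5-second interval below is just an example:

# Extended per-device statistics (-x) with descriptive device names (-n),
# refreshed every 5 seconds:
iostat -xn 5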
And here’s the equivalent report from server 1 (using legacy SATA mode):
root@server1:~# zpool status safedata
  pool: safedata
 state: ONLINE
  scan: scrub in progress since Mon Apr 25 17:35:31 2016
    48.4G scanned out of 7.32T at 169M/s, 12h34m to go
    0 repaired, 0.65% done
config:

        NAME        STATE     READ WRITE CKSUM
        safedata    ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    ONLINE       0     0     0
            c3d0    ONLINE       0     0     0
            c3d1    ONLINE       0     0     0

errors: No known data errors
I think the fact that this box is managing only about half the I/O rate of the other accounts for the lion's share of why the copy across the network was going so slowly: if server 1 can only fetch stuff from disk at the speed of a somnolent snail, it's not surprising it can't send it over the network at a decent rate either.
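One quick way to test that theory, with the network taken out of the picture entirely, is to time a raw read from the pool on server 1 and see whether it, too, crawls. A rough sketch (/safedata/bigfile is just a placeholder for any suitably large file on the pool):

# Read a large file from the pool and discard it, timing the result;
# /safedata/bigfile is a placeholder path - use any multi-GB file:
time dd if=/safedata/bigfile of=/dev/null bs=1024k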
Tomorrow, then, I shall be purchasing a second el-cheapo PCIe SATA expansion card and rebuilding server 1’s zpool. We shall see what speeds I get then… watch this space.
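In outline, and assuming the least destructive case works, the rebuild should just be an export and re-import; I won't know until I try whether the pool comes back cleanly once the disks' device names change from their legacy-mode c2d0-style names to AHCI c3t0d0-style ones, in which case a full destroy-and-restore from server 2 may be needed instead:

# On server 1, before swapping the card / switching controller modes:
zpool export safedata

# ...install the card, reboot, then bring the pool back in
# (zpool import scans attached devices, so renamed disks should be found):
zpool import safedata

# Then scrub again and compare the reported speed:
zpool scrub safedata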
Update: I assume that there must be a world-wide shortage of PCIe SATA adaptors, since I just spent my lunch hour wandering around the IT shops of Chinatown, including MSY, and being told at least 15 times that they were out of stock. Have ordered one from Amazon, which means I have to wait until mid-May before being able to sort this out. Frustrating!