12TB NAS - A not-so-little update

Written on 12/10/2012 by Patrick Bregman

Once again I changed a few things about the hardware for my NAS. Or rather, the intended hardware. For now I'm still running on the Intel DH77DF with an Intel Xeon 1265LV2 and 16GByte of RAM. I added a 2TB disk to the system so I can use it like a desktop system as well. The 64GByte SSD holds the OS and all the programs, and the 2TB disk holds all my personal data, source code and so on.

So, what did I change this time? Well... Quite a few things. Try to follow. I started out with an IBM M1015 (an LSI MegaRAID in disguise) for my disks, then I one-upped that to an LSI SAS 9201-16i. That should enable me to connect up to 16 disks directly to the controller. I went that way because I wanted to have some ports left over for eventual ZIL or L2ARC SSDs. But I recently dropped that idea again, so I'm back to a simple LSI SAS controller. There's a good chance that it's going to be the IBM M1015 again, but who knows.

Also, I've been thinking about going bigger. I mean, 12TByte of storage with five 3TByte disks is nice and all, but what about doing 5x 4TByte for big data (like movies, video projects, Time Machine backups, photos and so on) and 3x 2TByte for other stuff? This would give me a whopping 26TByte of raw storage, and around 20TByte of usable storage. Quite a step up from the original 12TByte plan. The 2TByte disks will be notebook disks from Western Digital (the Scorpio Green), but the downside is that they're 15mm thick, and that's probably too thick to fit into my case. If that's the case then I'll probably fall back to something like the 1TByte Scorpio Blue or the 750GByte Scorpio Black.
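To double-check those numbers, here's the back-of-the-envelope math, assuming each group of disks becomes its own RAID-Z1 vdev (one disk of parity per group; that's my working assumption for the layout, nothing is settled yet):

```shell
# Raw capacity, all values in TByte: 5x 4TByte plus 3x 2TByte.
raw=$(( 5*4 + 3*2 ))
echo "raw: ${raw} TByte"        # 26

# Usable capacity, assuming one RAID-Z1 vdev per group:
# each vdev loses one disk's worth of capacity to parity.
usable=$(( (5-1)*4 + (3-1)*2 ))
echo "usable: ${usable} TByte"  # 20
```

That matches the 26TByte raw / roughly 20TByte usable figures above, before filesystem overhead.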

And last but definitely not least, it appears that ZFS on Linux is rather stable nowadays. While it is probably still missing a few features, like write caching, it does work and it works rather fast. I tried it on a bunch of Cruzer Fit 8GB USB drives, which are horribly slow. I managed to get an average write speed of 5.0MByte/s when writing a 1GByte file to a RAID-Z1 vdev on those sticks. Normally one stick manages to push 3.5 to 4.5MByte/s, so I'm not entirely sure that result is right. One thing is certain though: it's too slow for the result to be explained by caching in memory.
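The test setup was along these lines (the pool name and device names here are placeholders, and I'm assuming four sticks for the sketch; check lsblk for the real device names before running anything like this):

```shell
# Build a RAID-Z1 vdev out of four USB sticks.
# Hypothetical device names -- double-check with lsblk first!
zpool create usbtank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Write a 1GByte test file and force it out to the sticks
# before dd reports a speed.
dd if=/dev/zero of=/usbtank/testfile bs=1M count=1024 conv=fdatasync
```

Note that /dev/zero is trivially compressible, so with compression enabled on the dataset you'd want a different data source for a fair write test.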

Reading is a whole different story though! I managed to get read speeds of up to 45MByte/s off those things. I connected them to some random USB ports on the back of my Intel DH77DF, and apparently those are driven by different USB host controllers, because I have never seen Linux go over 30MByte/s on a single USB bus. I always explained that with USB being a half-duplex protocol with a maximum bandwidth of 480Mbit/s, or 60MByte/s: half of that for sending, half for receiving, so 30MByte/s each way. This was a pleasant surprise, since I didn't expect it at all. While reading I noticed that everything was being cached. If I read the same 1GByte test file again, it came in at roughly 9GByte/s. I may be wrong, but I doubt USB is really capable of that :)
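The arithmetic behind that 30MByte/s expectation, written out (the even split between directions is my simplification, not a hard rule of the protocol):

```shell
# USB 2.0 high speed: 480 Mbit/s on the bus, shared between both
# directions because the protocol is half duplex.
echo "$(( 480 / 8 )) MByte/s total on the bus"            # 60
echo "$(( 480 / 8 / 2 )) MByte/s per direction (even split)"  # 30
```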

I also toyed around a bit with compression. As sample data I picked the Linux kernel source for version 3.6.9. I downloaded the .tar.xz version and extracted it onto datasets with different compression schemes. The results were... kinda weird. See for yourself:

  • With ZLE compression I managed to get a compression ratio of 1.04x
  • With LZJB compression I managed to get a compression ratio of 2.00x
  • With GZIP-9 compression I managed to get a compression ratio of 3.55x

GZIP-9 made the archive use only 154MByte of space, while ZLE used roughly 460MByte. LZJB was somewhere between that. The size of the GZIP-9 compressed volume kinda surprised me, since the GZIP compressed kernel archive is only 99MByte. What is happening here? Anybody got a clue?
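A setup like this reproduces the test (pool and dataset names are placeholders, and I'm assuming an existing pool called tank):

```shell
# One dataset per compression scheme; "tank" is a placeholder pool name.
zfs create -o compression=zle    tank/zle
zfs create -o compression=lzjb   tank/lzjb
zfs create -o compression=gzip-9 tank/gzip9

# After extracting the kernel tree into each dataset,
# compare the reported ratios and on-disk sizes:
zfs get compressratio,used tank/zle tank/lzjb tank/gzip9
```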

So far, I'm blaming it on the blocksize. I expect the kernel archive to be using 900k blocks, while ZFS is using 128k blocks. I also expected LZJB to do better, since the whole Linux kernel tree is simply text, but it did not. I am missing a few compression algorithms though. The biggest one I'm missing is LZMA, which is used by 7-Zip and others. Another compression method I'm missing is LZO, but it looks like there's been some work on that. For now, I'm not entirely sure which compression method to use, if any.
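The blocksize theory can be tested with plain gzip, by compressing a file once as a single stream and once in 128KByte chunks, which is roughly what ZFS does one record at a time. This sketch uses generated repetitive text instead of the real kernel tarball, so the numbers are only illustrative:

```shell
# Generate ~4MByte of repetitive text as a stand-in for real data.
yes "the quick brown fox jumps over the lazy dog" | head -c 4194304 > sample.txt

# Compress the whole file as one stream.
whole=$(gzip -9 -c sample.txt | wc -c)

# Compress it in 128KByte chunks, each one an independent gzip stream,
# the way ZFS compresses one record at a time.
split -b 128K sample.txt chunk_
chunked=0
for f in chunk_*; do
  sz=$(gzip -9 -c "$f" | wc -c)
  chunked=$(( chunked + sz ))
done

echo "whole stream: ${whole} bytes"
echo "128K chunks:  ${chunked} bytes"
rm -f sample.txt chunk_*
```

The chunked total comes out larger, though since gzip only looks back 32KByte anyway, the chunking penalty alone may not explain the whole 154MByte vs. 99MByte gap.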
