Building a FileVault’ed RAID 10 Array on OS X (10.8.4)

I’ve been using a raidz1 ZEVO ZFS volume on my workstation[1] for the better part of a year now, but with the future of ZEVO somewhat up in the air and the current alternative, MacZFS, not meeting my needs[2], I decided to buy a USB 3 JBOD off Amazon and build an AppleRAID set. It isn’t as good for data protection as ZFS, but it is a bit faster, and OS X software[3] and services[4] don’t get weird about using it either. I also have a FreeNAS server that I use for things I’m really particular about, so I don’t think I’m making that much of a compromise.

I did want it to be as quick as possible with disks I already had from the old NAS I replaced with the FreeNAS[5], but I also wanted to be able to keep working without downtime when one of the drives inevitably fails. For redundancy and performance, Apple steers you towards RAID 10, which combines stripes (multiple disks that look like one) and mirrors (multiple disks that hold the same data). So I built four 1TB drives into a 2TB RAID 10 array as a mirrored pair of striped sets, which I then converted to a Core Storage volume to make expansion easier later and so that I could use FileVault whole-disk encryption.

Setting up the disks was easy. I did all of it in Disk Utility, creating three RAID sets: one set to mirror, the other two set to stripe. I then dragged the disks I wanted to use into their spots in the stripe sets, and then moved the two stripe sets into the mirrored set.

My final volume name will be cornballer, so I named the striped devices cb1 and cb2.

Disk Utility RAID stuff
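For reference, diskutil can build the same layout from the command line with its appleRAID verb. I did all of this in Disk Utility, so take this as a sketch rather than what I actually ran; the disk identifiers are placeholders for whatever your four drives show up as, and I haven’t verified that the nesting behaves exactly like the GUI:

    # build the two striped pairs (disk4..disk7 stand in for the four real drives)
    diskutil appleRAID create stripe cb1 JHFS+ disk4 disk5
    diskutil appleRAID create stripe cb2 JHFS+ disk6 disk7

    # each stripe set then appears as its own virtual disk (say disk10 and disk11);
    # mirroring those two gives the RAID 10 volume
    diskutil appleRAID create mirror cornballer JHFS+ disk10 disk11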

When I look at my disks in the CLI tool diskutil, I see this:
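Trimmed down, and with placeholder identifiers and sizes, the interesting entries look roughly like this: each physical drive carries an Apple_RAID slice, the two stripe sets show up as virtual disks of their own, and the mirror of those is the Apple_HFS volume I’ll actually use:

    diskutil list

    # ...four physical drives, each with an Apple_RAID slice, then:
    /dev/disk10
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:                 Apple_RAID cb1                    *2.0 TB     disk10
    /dev/disk11
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:                 Apple_RAID cb2                    *2.0 TB     disk11
    /dev/disk9
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:                  Apple_HFS cornballer             *2.0 TB     disk9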

Core Storage Stuff:

Creating a Core Storage volume can only be done via the CLI at this time; it isn’t fully baked yet, but it works well. This is also how people build their own “Fusion Drive” stripes out of a spinning disk and an SSD. I’m not certain of this, but I suspect that if I created a mirrored RAID set of two SSDs and added it to the logical volume group Core Storage creates, it would work like any other Fusion Drive, except with redundancy and the ability to recover from a failure without losing data.

Converting my RAID 10 set into a Core Storage volume is simple, yet nonetheless a bit terrifying. Pick the wrong device and you can destroy data, so I had to be clear on what I was doing! My formatted and ready-to-use RAID 10 volume was disk9 when I listed all devices via diskutil list, so I unmounted it just to be safe. I ran diskutil list again to be sure that my unmounted volume was still recognized at the same device number, and then blew it all to hell.
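In rough strokes it was something like the following, with disk9 being the RAID set from above and the group name being an arbitrary label of your choosing:

    # unmount the RAID volume, then confirm it still shows up as disk9
    diskutil unmountDisk disk9
    diskutil list

    # destroy the existing HFS+ volume and turn disk9 into a Core Storage
    # logical volume group (the group name here is just a label)
    diskutil cs create cornballerGroup disk9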

Using diskutil list will only show you traditional volumes and devices. It will not show you the particulars of logical volume groups created with Core Storage; for that you need the ‘cs’ verb. Running diskutil cs list, I saw a new set of entries for this logical volume group, along with the associated unique identifiers (UUIDs). Device names like ‘disk9’ can change across reboots or hardware changes, so using UUIDs is a great way to make sure you’re working with the right disks. You also must use UUIDs when creating volumes with Core Storage, so don’t dismiss them when they scroll by.
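Trimmed down and with placeholder UUIDs, the listing at this point has a tree shaped roughly like this; the Logical Volume Group’s UUID at the top is the one you’ll need in the next step:

    diskutil cs list

    CoreStorage logical volume groups (1 found)
    |
    +-- Logical Volume Group XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
        =========================================================
        Name:         cornballerGroup
        Status:       Online
        |
        +-< Physical Volume YYYYYYYY-YYYY-YYYY-YYYY-YYYYYYYYYYYY
            ----------------------------------------------------
            Disk:     disk9
            Status:   Online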

It’s time to create a usable filesystem on my new Core Storage volume!
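With the group’s UUID in hand, the creation command looks like this. The UUID below is a placeholder for the real one from diskutil cs list, and I believe the encryption flag is -stdinpassphrase (there’s also a -passphrase variant), so check diskutil’s help for the exact spelling on your build:

    # the UUID is the Logical Volume Group's UUID from `diskutil cs list`
    diskutil cs createVolume XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX jhfs+ cornballer 100% -stdinpassphrase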

This means “hello there diskutil, please use Core Storage, create a new volume on UUID blah, make it a Journaled HFS+ filesystem using 100% of the storage available, and ask me for a passphrase to unlock the disk”. Give it the passphrase you want to use (you can add it to your keychain later so don’t be shy; mine is also a note in 1Password) and you’re done! The 100% designation means I want to use all the space available, but you can resize volumes later with Core Storage’s commands (sweet) or create several volumes and mix filesystems if you want to[6].
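Resizing later is, as far as I can tell, a one-liner against the Logical Volume’s UUID rather than the group’s; the UUID and size here are placeholders:

    # shrink (or grow) the logical volume; takes the Logical Volume's UUID
    diskutil cs resizeVolume XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX 1.5t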

When I look at my disks in the CLI tool diskutil, I now see this:
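Again trimmed and with placeholder identifiers, the notable change is that the RAID device now carries Apple_CoreStorage instead of a plain Apple_HFS volume, and the mounted cornballer volume is a new logical disk sitting on top of it (the exact slicing may differ on your machine):

    diskutil list

    /dev/disk9
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:          Apple_CoreStorage                        *2.0 TB     disk9
    /dev/disk12
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:                  Apple_HFS cornballer             *2.0 TB     disk12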

This volume has performed very well so far, and it certainly feels faster in use than my ZEVO raidz1 volume. After a series of tests copying data back and forth, I promptly rsync‘ed my data from the raidz1 to the new volume without incident, except for one instance where I was stupidly following symlinks recursively and didn’t notice until 4 hours later, while it was still writing in a loop.[7] I’d still rather be using ZFS for many reasons, but this path at least removes some complexity down the road and doesn’t leave me at the mercy of a vendor and product that has been up in the air twice now without much communication from the developer and management of the company.
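For what it’s worth, a plain archive-mode rsync copies symlinks as symlinks instead of following them; it’s the -L/--copy-links style of invocation that can send you chasing a link loop. Something like this, with placeholder paths, is the sane default:

    # -a preserves symlinks, permissions, and timestamps and does NOT follow links;
    # avoid -L / --copy-links unless you really mean it
    rsync -avh --progress /Volumes/oldpool/ /Volumes/cornballer/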


  1. named ‘lindsay’ 

  2. I think in the near future it will get a lot better as there is active development on a next generation of the software 

  3. like iTunes 

  4. like iCloud 

  5. It was an Infrant/Netgear ReadyNAS NV+ with a slow-as-shit SPARC processor that couldn’t push files at gigabit ethernet 

  6. e.g. you could start with a 70% jhfs+ volume, then add a new 10% fat32 volume and an extra 20% jhfs+ volume 

  7. Way to go, doofus!