I recently set out to build a NAS (Network Attached Storage) box.
What’s a NAS?
It’s basically a computer that’s always on, network accessible, and has lots of storage. The idea is that the computer consumes low amounts of power and is quiet, so it makes sense to run all the time, which lets people on the network access a much larger storage pool than normal.
Why Build a NAS?
Or, why did I build a whole different computer to provide storage?
Take laptops as an example: laptops don’t have much space, especially with the advent of SSDs. An external drive can expand the amount of space available, but then you’re always one elbow away from knocking the drive off the table to possible doom. Even if you don’t damage the drive yourself, drives are complicated mechanical contraptions with a shelf life, and they simply go bad. And, you have to plug the thing in: what is this, the 1990s?! And we haven’t even added in multiple computers that all want different data, but on the same drive.
So instead, you take the disks you were going to use for the external drives and put them together in a RAID (Redundant Array of Inexpensive Disks), which keeps multiple copies of your data. RAIDs are nice because a disk failure means you just have to buy another drive, instead of it being a catastrophe when you find out you haven’t been backing up your data for the last year and a half. Then you stick the whole thing on your LAN (Local Area Network) so it’s freely accessible to everyone you trust.
Yes, there are solutions out there to host your data in the cloud, like Dropbox, Box, Google Drive, OneDrive, etc. It’s probably the right answer for you, average reader; back of the envelope, it’s going to take me 4-6 years for my NAS to recoup costs, and by that time it’s likely that my costs are going to go up even more when I need to swap out some drives. It only makes sense if you’re going for lots of storage, or you need access to big files over gigabit networking and your internet service provider has a relatively small pipe, or you’re super paranoid and having control is more important than outsourcing your storage to actual security professionals. And we’re still ignoring the fact it’s just a measly 3-5 disk RAID, against services focused on data storage with absolutely bonkers data integrity guarantees.
I looked at all of those concerns pointing towards sane, simpler options and then said “nah, I need a NAS”.
Let’s build a NAS
There are options to buy a pre-made NAS, like a Synology, but they’re expensive: a 4 drive slot option is $350 (in 2017) for what is essentially a really low power computer, and it lacks the nice properties of ZFS. Springing for a more capable option, like FreeNAS’s pre-built boxes, would cost a lot more, at $1000 for a 4 drive slot option. Can we do better?
The obvious place to start is with Brian’s DIY NAS blog posts. Brian makes a NAS from off the shelf parts every year, just for funsies. I highly recommend reading his blog if you’re interested in building your own NAS. Contra the DIY NAS builds, I’m aiming for the middle of the cost range: I’m not trying to get the cheapest options, but neither am I going to spend $900 on a motherboard/CPU combo.
But first, some background on physical space requirements. I have a few electronics that I’ve placed in an IKEA shelving unit, the KALLAX. Given city floor space, I would really like to have a case that fits in one of those boxy shelves, which are roughly 33x33x39 cm (13x13x15 inches). I wasn’t budgeting a whole compartment for it, though: ideally, I could fit it into half a compartment, so I could do whatever with the other half, even (gasp) putting another computer into it.
Unfortunately, when looking at Newegg it looks like this market segment is woefully underserved: the cases are all too large (mostly too tall), have a disappointing number of drive slots, are way too small (the U-NAS NSC-800 just seems way too small; who wants to try and shove a 6 inch motherboard into that small a space?), or have a disappointing lack of cooling (a case with a single 60mm fan? Might as well just put the drives on a BBQ grill directly). If it wasn’t just one problem, then it was a horrifying combination of the above.
The sane answer is to just buy the expensive pre-made NAS. The cool answer is to design and build a case yourself.
Take 1: Lots o’ wood
Ponoko has an option to laser cut plywood, so I planned on doing that; I had worked with Ponoko’s laser cutting service before, and I wouldn’t need to pay through the nose for metal working I couldn’t weld/rivet/bend myself. Besides, Ponoko had a 1/4 inch thick wood option; that should be plenty thick, right?
So I whipped up a design over multiple weekends: I would use traditional finger joints to join the planes of wood together, with a fancier slot and screw option for the case “lid”.
The whole system would take up the entire KALLAX compartment height, using two large and quiet 140mm fans on the front to push air from front to back: the power supply unit (PSU) and another 140mm fan on the back would push air out the back. The motherboard would not stick out the back: there’s just too much fan real estate it would take up. Instead, the I/O ports would point upwards because most of the time nothing needs to be plugged in to a NAS. Well, except the network cable; to accommodate that, I inset the motherboard about an inch inside the case, and planned for a hole in the back to string the ethernet cable through. If I needed to do some work on the machine, I could take it out of the KALLAX compartment and plug a keyboard into the motherboard through a hole on the top of the case.
The drives themselves would be held on a lasercut wooden rack, screwed into two parallel supports. And since it was going to be lasercut, I could do all sorts of fancy cuts to let air vent over and under each drive, with enough space for 6 drives and plenty of ventilation.
After sketching out the design on paper, I moved the parts around in Blender, getting a sense of whether each part was large enough, and whether the system as a whole had any glaring flaws. Yellow is for the motherboard, green is for the drives, red for the power supply, blue for the cooling fans, and everything else is wood.
I started drawing up laser cut designs in a combination of Inkscape and FreeCAD. FreeCAD has some really proper and powerful design tools (Inkscape is very much “lol precision, what is that?”), but it is obviously, painfully focused on 3D designs instead of 2D. It’s possible to work around this (see Appendix A), but I seriously thought about making my own 2D CAD system.
In the middle of my design work, I realized I was ignoring a problem: how in the world do I let one side of the case detach easily, so I could actually access everything? MDF is dense, and a single measly slot might not hold up a big board. And then I realized I never actually went out and looked at the prior art, at what other people were bragging about, so I went on a belated research spree.
It wasn’t encouraging; no one used wood thinner than 1/2 inch, and actively advocated against using 1/4 inch wood. And looking back at the test render, the walls suddenly started looking really thin: this wasn’t over-engineering to overcome my lack of mechanical knowledge, this was the sort of thing that would break every time I wanted to move apartments. The obvious lasercutting options for wood thicker than 1/2 inch were scarce and inconvenient, so that was right out. The more complicated approach of reinforcing the thin wood I was using with ribs and props and flying buttresses wasn’t going to cut it, since I wasn’t going to take a break and spend 2 years learning proper mechanical engineering.
Take 2: Wood you be my 80/20?
So, how could I still build a custom case, but reinforced in such a way that it wouldn’t break with a stern look, not break the bank, and still let me meet my space requirements?
With a half-remembered word, I consulted my roommate; what was that strut-like thing that those FIRST robotics teams would use? The answer came back fast: 80/20, of course. 80/20 is love. 80/20 is life.
It’s square aluminum struts with built in grooves, with the tooling to put the struts together in most any configuration. It sounds simple, but there’s a myriad of options, and it allows fast prototyping while not sacrificing (too much) strength.
And with that, I had a secure skeleton I could build around. I would still use wood paneling for the actual walls, but they would only have to take the weight of a few fans, or a motherboard, or a PSU (the PSU was a bit worrying, being a fair bit heavier than the other components).
There were some sizing issues: using the struts would add dead space around the corners and edges, where I was planning on putting things like fan mount points, which meant that I either couldn’t use the big beautiful 140mm fans I was aiming for, or I would just have to blow my half compartment size budget. I tentatively opted to blow my size budget by 50mm to keep the 140mm fans, but it was painful to leave behind one of my design goals.
However, the real question was what I would do with the hard drives: they weighed in at only around a pound apiece, but if I had 6 drives, then whatever was holding the drives in would need to be strong enough to provide that overengineering margin that an amateur like me really needs. Additionally, I couldn’t just build the drives into the case itself: drives are complicated mechanical monstrosities with a non-trivial failure rate, and it wasn’t worth it to re-build the case each time I added or replaced a drive.
I had a series of wild ideas around using 80/20 as a tool-less drive rail: maybe I could slide the drives directly into the rails! Hmm, no, then I couldn’t support one end of the rail. Maybe I could put a supporting rail to the side? No, now there’s not enough room in the depth dimension. Maybe I could mount the rails… vertically?
After a series of increasingly desperate attempts to not fall back to using the now spindly-looking wood bracket from my 1st pass, I realized that people had to be able to just buy pre-made drive bays. Unfortunately, it turns out most of them are made for specific cases, and so there is basically no information about the dimensions of the available drive bays. After a period of increasingly furious research, I gave up and bought a $10 drive bay (from the Corsair 500R case), ate the non-slowpoke shipping costs, and measured it to make sure it could fit in my design (see Appendix B for a rough dimensional drawing; at least you, dear reader, don’t have to go through this nonsense).
The plan became two drive bays with 3 drive spaces each, one mounted on the top and one mounted on the bottom of the case. Again, I made up a render in Blender to sanity check my design.
This time, I finished making lasercut designs for the wood paneling, using FreeCAD for the complicated fan cuts and Inkscape for the simpler panels. The wood paneling would slot into the 80/20 struts, and I designed the vertical panels to let the wood rest on the face of the strut, not on the corner of the strut, which is why the vertical wood panels have non-symmetrical slot cuts. The resulting design files are linked in Appendix B.
I held off on ordering the struts machined and the panels lasercut, wanting to decide on the electronics before I ordered everything at once.
And once again, scholarship won out over tons of design work: while looking at Brian’s builds again, I discovered I had overlooked a case which I hadn’t turned up in Newegg’s search or my previous look over Brian’s posts, and which fit all my requirements except that it was too wide. However, I was blowing my width budget already with the design I was going to spend lots of money to fabricate myself, so the sane thing was to say goodbye to the design I spent hours and hours on, and buy the pre-fabricated case that fit my needs. And with my insanity budget already blown out of the water by my previous choices, I went and did that.
Take 3: What a Steel!
Now it’s a pretty straightforward computer build. My build list:
Obviously, the Lian Li PC-Q25 case I mentioned earlier.
This was a pretty close choice. The ASRock Rack C2550D4I was a really attractive option; on sale, it was a low budget board with lots of space for RAM, and with the confidence that a passive heatsink was right for the CPU. Plus, the FreeNAS organization uses this board for their machines. However, there were problems with the board: Intel announced in February that its onboard Atom C2000 family chips were dying early, after 1.5-2 years of use. I couldn’t get a straight answer from ASRock Rack, the maker of the board, on whether they were proactively replacing their stock: yes, it’s nice that you have an extended RMA program specifically for this problem, but I don’t want to go through the hassle of returning my board 1.5-2 years down the road. Pass.
Instead, I found a plain mini-ITX motherboard, which supported ECC RAM and LGA 1151 (the latest Intel CPU socket: it turns out there just aren’t any mini-ITX AMD boards) and supported a reasonable amount of RAM. This board turned out to be the C236 WSI, which I paired with…
The cheapest low power, latest Intel family Celeron. It’s not like I’m going to be doing render jobs and machine learning with this box: that’s the job of my erstwhile gaming rig. I kept the stock cooler: no sense going high end with such a low power chip.
Yeah, I decided to go with error correcting RAM (ECC) (Kingston KVR21E15D8/16). I agree that non-ECC is probably not a problem, and that the scrub of death isn’t really a thing. However, it’s not that expensive (keep in mind that ECC-compatible motherboards also cost more), and it’ll probably be years before I can make a properly RAM paranoid filesystem, if I ever get around to it. For the years I expect the system to stay up, I might as well guard against corruption.
Keep in mind that buying ECC RAM is super weird: non-ECC RAM is a pretty whatever proposition, where most everything works with everything else, but not all ECC RAM works with all boards (in the sense that the board won’t boot), and you have to be pretty paranoid about buying the right brand/specific make of RAM. This is even beyond making sure you match the type of RAM, like unbuffered or registered or whatever; for example, I took a long look at the RAM Qualified Vendor List (QVL), and at reviews saying that the RAM worked with my specific board.
I had already picked up the hard drives on sale like a year ago, and they’ve been collecting dust since then. Time to put them to work!
Reviews for the case recommend replacing the default case fans, so I did so. No problems so far.
Of course, the build itself wasn’t easy, even with all off-the-shelf parts. And in keeping with our theme, the main problem revolved around not doing my research properly.
The mechanical steps of installing all the electronics were pretty straightforward, especially since I sprung for the somewhat larger small case.
Problems cropped up, however, when I tried to boot after putting everything together. Nothing, not even a beep to indicate that the motherboard had finished POSTing. A quick search revealed that I had inadvertently gotten the latest Kaby Lake line of Intel CPUs, which had been released mere months before in January. My motherboard could support Kaby Lake, but it needed a BIOS update in order to work, or it wouldn’t even POST. And of course, you couldn’t install the BIOS update unless you had a last generation chip, a Skylake.
After wondering for a bit where I could get a Skylake CPU, I decided to just take the hit and buy the cheapest Skylake processor I could. After getting the older CPU, though, I discovered I had also forgotten to plug in another power cable. After I updated the BIOS, I finally wondered whether the power cable had been the problem all along, and whether Newegg had really gone through and installed the latest BIOS updates on all these boards and I just didn’t notice; it’s unlikely, but it does make me wonder whether my mediocre review was warranted.
Now we’re back in software land. The first step was burning in the RAM stick with a pass of MemTest86: even though it’s ECC, you still want to stress test it to make sure there aren’t any problems before using it.
Then I installed FreeNAS, a NAS oriented system built on FreeBSD. Again, I didn’t do my research: it turns out that the stable FreeNAS 9.10 builds were not compatible with Kaby Lake, and I had to resort to using an unstable release candidate of FreeNAS 11 in order to finish booting.
So, the big takeaway lesson is to either buy an older and solid CPU family line, or do a paranoid amount of research to make sure using the latest generation won’t cause problems.
Let’s Use the NAS
So we just put FreeNAS on the box, because it’s an easy default. And now we’re done?
Nah, not yet. It turns out that the world is terrible.
If there’s a big clean “best practices cross-platform networked filesystems guide” out there, I didn’t find it. Instead, I looked at the options that FreeNAS gives you. Hmm, Samba is single threaded? Better not go with that, then. Hmm, the Network File System (NFS) was built for UNIX, and it’s not single threaded? I’ll go with that; surely the world didn’t just decide that networked filesystems aren’t worth any effort and basically haven’t upgraded them since the 1980s.
But surprise! NFS is a networked filesystem that got stuck in the 1980s!
- The authentication mechanism just doesn’t exist. User IDs map directly over to user IDs (user ID 100 on your computer is user ID 100 in NFS), and heaven forbid you assigned different user IDs to your user on different machines. You can map IDs to other IDs, but the scheme just screams “all my mainframes are administered by the same sysadmin”. There’s no way to simply say “require a password”, it’s just have the right user ID or bust.
- There’s no data protection. The requirement that clients connect from a privileged port (below 1024) again screams “only serious business people have mainframe computers, not every teenager with a live Ubuntu USB drive”. Then, the data isn’t even protected: there’s no encryption in flight, and everything is in plaintext (unless you set up Kerberos, but who has time for that?).
- Not specifically with NFS, but the Linux implementation of NFS will lock up and freeze everything if the connection drops, and continue to freeze even if it’s possible to re-establish the connection. Passing tons of options to NFS somewhat alleviates this, but the entire things screams “what do you mean, the network isn’t built on rock-solid ethernet?”. And if you try and shut down the client Linux box without unmounting the NFS share first, the shutdown process locks up.
We’re not going to do anything about the last problem, but we can definitely work around the first two (authentication/confidentiality) by tunneling the NFS connection over a secure shell (SSH) connection. This is what it takes:
- On the client computer connecting to the NAS, set up an ssh tunnel between your client and the server, both for the NFS server and the mount daemon.
- Hack your FreeNAS configuration, because essential daemons plug their ears and scream “LALALALA” at the top of their lungs if you, again, aren’t connecting from a low numbered port (nfsd, mountd), and the FreeNAS folks haven’t gotten around to adding this functionality as a configuration option in their handy web interface. Unfortunately, this also means that any upgrade would wipe out these changes.
- On the client, pass the usual options to NFS to make sure it’s usable.
- On the client, use autossh to make sure the ssh tunnel can come back up if you have to hop wifi connections.
- (Nice to have) On the NAS, change the NFS server to listen for local connections only (127.0.0.1), since the connections are only coming from the local ssh daemon. This should also cut down on the security surface that can be attacked, since external attackers can’t hit the NFS server.
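Concretely, the client side of this setup looks something like the following sketch. The hostname, share path, and mount point are made-up examples, and it assumes mountd has been pinned to a fixed port (618 here) on the server, since mountd normally picks a random port:

```shell
# Forward local ports to nfsd (2049) and mountd (pinned to 618 here) on the NAS.
# autossh re-establishes the tunnel if the connection drops (e.g. hopping wifi).
autossh -M 0 -f -N \
    -L 2049:127.0.0.1:2049 \
    -L 618:127.0.0.1:618 \
    user@nas.local

# Mount through the tunnel. tcp is required for the tunnel to work at all, and
# soft/timeo/retrans keep a dropped connection from hanging the client forever.
sudo mount -t nfs \
    -o port=2049,mountport=618,tcp,soft,timeo=50,retrans=2 \
    127.0.0.1:/mnt/tank /mnt/nas
```

The `port` and `mountport` options are what point the mount at the tunnel’s local ends instead of letting the client go ask the server’s portmapper directly.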
This neatly solves both problems: ssh has good, strong authentication, so any script kitty walking on a keyboard connected to my network can’t just access the NFS share. And, if my network links are compromised, the attackers can’t just sniff all my data passing over the network.
However, there’s still jank with using NFS; particularly, trying to move more than one file at a time over the link just hangs up and laughs, like trying to do more than one file operation is a foreign concept to a filesystem. Or maybe my ssh tunnel hack forced everything into one pipe, and NFS doesn’t know how to handle that. At any rate, NFS had one job, and it’s doing a not very good job of it.
Okay, it’s time to declare the situation FUBAR; at this point, I’m willing to consider SSHFS.
SSHFS is exactly what it sounds like: it’s a networked filesystem built on top of ssh, using the same file transfer protocol as SFTP (secure file transfer protocol) to move files back and forth. It’s a posterchild of FUSE (Filesystem in Userspace), so surely it must be slow: we know that FUSE needs to make twice as many calls into the OS kernel to actually write a file, which limits performance, which is why I had ignored it. It’s just a toy, right?
However, it’s goddamn SpaceX next to the Apollo program. It’s built on authentication not stuck in the 1980s, the connection is encrypted, it can actually multiplex multiple file operations, you can pass in one obviously named option to handle network partitions in a reasonable amount of time, and there’s no visible performance hit.
Usage is as easy as installing and running something like:
sshfs -o reconnect,ServerAliveInterval=5,ServerAliveCountMax=2 HOST:PATH MOUNT_POINT
With HOST, PATH, and MOUNT_POINT all set to reasonable values.
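If you want the mount to survive reboots without retyping that, the same options can live in /etc/fstab as a user-mountable entry (a sketch; HOST, PATH, and /mnt/nas are placeholders, as above):

```
# /etc/fstab — sshfs share, mountable by a normal user with `mount /mnt/nas`
HOST:PATH  /mnt/nas  fuse.sshfs  noauto,user,reconnect,ServerAliveInterval=5,ServerAliveCountMax=2  0  0
```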
So if you’re using something UNIX-y, skip pretty much every option on display in FreeNAS, and just use SSHFS.
Takeaways
- Spending a few more hours on research can save days of work.
- Buying the latest CPU line a couple months after release is a gamble.
- Use SSHFS for your single user NAS, it beats the pants off the competition.
Appendix A: Exporting 2D Designs from FreeCAD
FreeCAD has a powerful declarative 2D design tool called the Sketcher, but it doesn’t export directly to something like svg or dwg, which most laser cutters take as input. When I designed something in Sketcher and wanted to export it, these are the steps I used (FreeCAD v0.16):
- Create your design in Sketcher, and fully constrain it.
- Pop back out of Sketcher, and go to the Part Design tool.
- Select the sketch object, and choose the Pad task. Pad it to whatever, since we’re going to throw away the depth dimension anyways.
- Go to the Drawing tool, insert a new drawing.
- There are a few options here that I don’t fully understand. You can either insert a new view of the part, or an orthographic projection. You may need to fiddle around with either to get the right side onto the page.
- From here, you can export your page to an svg, and if necessary onwards to dwg.
Appendix B: Design Files
- Take 1, Render: Blender file. github download (1mb).
- Take 2, Render: Blender file. github download (1mb).
- Take 2, Design files: github.
- Corsair 500R Drive Bay Dimensions: dwg files. Github, site zip (23kb).
Appendix C: Paranoid HTTPS
If we’re going to be paranoid about a MITM (Man In The Middle) attack on our filesystem connections, we might as well also secure our access to FreeNAS’s web interface.
- Presumably your NAS is not part of the public DNS system; if it were, you could probably set up Let’s Encrypt instead, and have a proper HTTPS certificate. But if you access your NAS with an ip address, a hostname you made up yourself, or something.local, then you can’t do that.
- Generate a new internal certificate authority (CA) (FreeNAS docs on CAs). You’ll need to fill out some details, like your Country and Organization, but it’s not like anyone is going to see this other than you.
- Generate a new internal certificate. (FreeNAS docs on certs). Again, you’ll need to fill out more details that don’t matter (unless you’re creating a cert signing request, in which case why are you reading this?).
- “Export certificate” for both the CA and the certificate.
- Import the CA certificate into your browser, and then the certificate.
- Switch FreeNAS to accepting both HTTP and HTTPS connections.
- Confirm that you can reach the FreeNAS web interface with HTTPS without certificate errors.
- Switch FreeNAS to HTTPS only; now all HTTP requests will get redirected to HTTPS, so you can’t accidentally log in over HTTP.
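For the curious, what FreeNAS is doing under the hood for the CA and certificate steps is roughly the following openssl dance (a sketch with throwaway names and subjects; FreeNAS drives all of this from its web UI):

```shell
# Create a self-signed CA (this is the cert you import into your browser).
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/C=US/O=Home/CN=my-home-ca"

# Create a key and a certificate signing request for the NAS itself.
openssl req -newkey rsa:2048 -nodes \
    -keyout nas.key -out nas.csr -subj "/C=US/O=Home/CN=nas.local"

# Sign the NAS certificate with the CA; nas.crt plus nas.key is what the
# web interface serves.
openssl x509 -req -in nas.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 3650 -out nas.crt

# Sanity check: the NAS cert should chain back to the CA.
openssl verify -CAfile ca.crt nas.crt
```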
 ↑ Okay, you got me: there are several different types of RAID, only some of which offer redundancy. For example, RAID 0 just optimizes for speed, and if any one disk fails, the entire array fails.
 ↑ If you spit out your coffee, good job, you have the paranoia chops necessary to live in the 21st century.
 ↑ Full disclosure, I currently work for Google. However, just in case you missed the other messages, I do not speak for my current or past employers.
 ↑ Most of the cost is just fixed costs, for the computing bits, so getting lots of big disks means distributing those fixed costs.
 ↑ Ignoring bandwidth caps COUGHCOMCASTCOUGH, DSL and cable usually have asymmetric download/upload speeds which favor downloads.
 ↑ The killer features of ZFS (in my opinion): a scrub process allows checking data integrity on a regular basis, instead of finding out things have gone wrong years down the road. RAID-Z allows swapping out drives/growing your array easily. No RAID5/6 write hole resulting in corruption if there’s power loss at the wrong time. Filesystem snapshots allow checkpointing at a regular basis, so accidentally running rm -rf isn’t enough to destroy your data.
The one complaint I have is that the system only uses the lowest common disk size, so mixing 500GB and 5TB drives will waste most of the bigger drive. I’ve heard btrfs has support for heterogeneous disk sizes, but I’m wary about its reputation for simply losing data, and its unstable support for RAID5 functionality.
 ↑ And hence why I named the project KALNAS.
 ↑ Larger fans can spin slower to move the same amount of air, and spinning slower is better for noise. If you must, pattern match small fans to small yappy dogs.
 ↑ I know, Blender isn’t a proper CAD system, and it’s super obvious when trying to do this sort of precision work.
 ↑ Like, you had to ship the wood to the laser cutter, without an option to have it supplied by the cutter. Turnkey, these services weren’t.
 ↑ I didn’t just want to cut a circle in the wood and be done with it; I wanted to add at least a little guard in place. I was also thinking about just buying a metal grill when I gave up on this approach.
 ↑ Via errata note AVR.54.
 ↑ I recognize this is still an ASRock Rack product, but at least it doesn’t have an obvious problem, and I don’t really want to spring a lot more money for a SuperMicro motherboard, which is held to be the other obvious small-form board choice.
 ↑ Interestingly, even the lowest end chips support ECC these days.
 ↑ Unfortunately, the lists tend to be short and/or outdated.
 ↑ By the way, if you need a Skylake Celeron, I might know a guy.
 ↑ Yes, there’s a stronger authentication mechanism, but using Kerberos also screams “all my mainframes are administered by the same sysadmin”.
 ↑ Once upon a time, you had to ask someone for permission to bind to a low port. Well, now that everyone has their own computer, that someone is you, and it’s very easy to give permission to yourself.
 ↑ Again, I’m certain I could work around this with the right options, but why in the world do I have to do that?
 ↑ This was particularly problematic for me, because my router is currently having problems.
 ↑ The cons likely revolve around handling multiple users/mapping user IDs, but I don’t care about that for now.
 ↑ In contrast, nearly every other free 2D CAD tool was very imperative: for example, LibreCAD is very much “draw a 2cm line here in that direction”, instead of a more declarative sort of “I want this line and that line together to have a length of X, and this line should match the length of that other line, so if I tweak that line, this line will also automatically update”.
 ↑ If this is going to touch the internet proper, though, other people will see it, so you might want to fill it out with basically the right information.