Expanding the NAS System

hard drives photo

Well, it looks like I’ve got some new drives coming in for my home-built NAS. They will significantly increase the amount of space available and also give me the chance to geek out a little and play with a newer setup.

Why?

Over the years, I’ve accumulated a lot of digital content. Some people scatter theirs across a pile of external drives, but I’ve always preferred to centralize, and for many years I’ve used a NAS.

Since law school (2006), I’ve used a Linux-based NAS. This was after a catastrophic data loss back when I was living in Austin, when my drives were configured as plain JBOD. The total available space has of course increased over time, but I’ve consistently used mdraid and XFS as my foundation, with a few years where I incorporated LVM.

In the last few years, I’ve also gotten more into photography and now regularly have several gigabytes of photos to pull off of memory cards every week or three. I’ve been getting into film as well. Much of the output from the digital cameras (Nikon D850 and Leica Q2) and from my Epson V600 isn’t worth keeping, but there are middling keepers that I’ve found can be good material for use later.

I still use custom desktop computers I put together. The current system is seven years old but still performs admirably. It has always used an SSD, and that has allowed for significantly greater longevity. Not long ago, I was getting frustrated with its speed. This pain was felt most acutely when importing pictures into Lightroom, which is a taxing process no matter the computer. But then Adobe followed through with long-promised performance updates, and the performance is now more acceptable. I also recently got an Nvidia GTX 1660 that was a significant upgrade over my older Radeon 7850. So, processing-power-wise, the current setup was sufficient. Don’t get me wrong, Ryzen processors perform substantially better than the Intel i5-4570 Haswell I’m running, and I admit we are living in the system’s twilight years. But that’s not the pressure point.

Rather, it is all these pictures. I’ve already upgraded the system’s SSD to provide more space for high-speed system storage. No matter what I do, Lightroom demands that the catalog and the previews live on the local system, so that piece will always remain. But since the NAS and my desktop are both wired to the router, the original RAW files need not stay on the desktop.

The goal of this upgrade was therefore to increase the available drive space and allow me to rely on the NAS as a central repository.

How?

Over the years, I’ve become very familiar with the capabilities of XFS, along with its limitations. It handles small files better now and even deletes files quickly (yes, that was once a problem). But one fundamental issue with the design of XFS is that it provides no protection against bitrot. Since moving to RAID, I’ve practiced good data hygiene and scrubbed regularly. We are, however, now in the 21st century, and we have better tools.
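For reference, scrubbing an mdraid array is just a sysfs write; many distros ship a cron job that does roughly this (the array name /dev/md0 is an assumption for illustration):

```shell
# Kick off a consistency check ("scrub") on the md array /dev/md0.
# mdraid reads every stripe and verifies parity against the data.
echo check > /sys/block/md0/md/sync_action

# Watch progress; a check can take hours on a large array.
cat /proc/mdstat

# Once finished, see how many mismatched sectors were found.
cat /sys/block/md0/md/mismatch_cnt
```

Note that this detects inconsistencies between data and parity, but unlike a checksumming filesystem, mdraid cannot tell which copy is the corrupted one.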

For example, we now have filesystems like ZFS and BTRFS that checksum the data while simplifying the software stack so that LVM is no longer needed. I’ve always been curious about ZFS and its vaunted ability to detect and repair bitrot.

But of course, ZFS’s license is incompatible with the Linux kernel’s GPL, and Linus seems to dislike it as well. To be fair, ZFS on Linux is quite successful, and Ubuntu (my favored distro) officially supports ZFS. There’s no practical reason why implementing ZFS on my NAS should be difficult.

In comparison, BTRFS is built into the Linux kernel. BTRFS is certainly not as mature as ZFS, which was developed in the final years of Sun Microsystems. Stories of data corruption and data loss haunt the Internet, particularly for RAID5/6. But BTRFS has a killer capability: performing RAID at the chunk level instead of the device level. This allows devices of different sizes to be used together in a filesystem. For a home NAS, this is an appealing possibility.
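To illustrate the flexibility, growing a BTRFS volume with a drive of a different size is just a device add followed by a rebalance. A rough sketch, assuming a filesystem mounted at /mnt/nas and a hypothetical new drive /dev/sde:

```shell
# Add a differently-sized drive to an existing BTRFS filesystem.
btrfs device add /dev/sde /mnt/nas

# Rebalance so chunks are redistributed across all devices,
# including the new one; until then the new drive sits mostly empty.
btrfs balance start /mnt/nas

# Confirm how much raw capacity each device now contributes.
btrfs filesystem show /mnt/nas
```

Because allocation happens per chunk, BTRFS can keep striping across whatever mix of devices has free space left, rather than being limited by the smallest member as traditional device-level RAID is.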

Did it Work?

Over the course of the last week, I’ve put the drives in and gotten my system back online. Last weekend I ran some diagnostics to confirm that the drives were the right size, and then shucked them. Although some people run a full test where they read/write every block of the drive, I’m too impatient for that. And my planned migration strategy would do a significant amount of that anyway. Don’t forget, you have to disable the 3.3V pin on these shucked drives with some Kapton tape or another technique.

I pulled one array of drives out of my Fractal Design Node 804 and replaced the 2TB Western Digital Red drives with the new 10TB Western Digital White drives. While I was in there, I rewired and rearranged some things. The SSD now sits up at the front of the case in a built-in holder. I put two of the old 2TB drives at the bottom of the case in the motherboard bay. These two drives will likely become built-in backups for critical data. Given the positioning of the drives, I had to go buy some left-angle connectors so that I could wire them up. But that’s a small price to pay to have these two drives as available storage.

I’ve set up the new array (4x 10TB) as a BTRFS volume using RAID5 for data and RAID1C3 for metadata. One reason I took the plunge at this time was that Linux 5.5 merged BTRFS support for RAID1C3. Given this configuration, my system can survive the loss of one drive and, should there be an issue during the rebuild, another error in metadata. This should be sufficient to rescue data if failure comes for me. For the older array that’s still online (4x 4TB), I’ve migrated its data over to the new array. How did I get the data off my other array (4x 2TB)? Before shucking, I used one of the new 10TB drives to back up the information from that array. After that, I shucked the drives and installed them in the system. I built the BTRFS array using the other three drives and transferred data from the fourth drive over the last few days. This was a good way to test the system and make sure things worked properly.
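Creating a volume with that split profile is a one-liner in btrfs-progs. A sketch, with placeholder device names standing in for the four 10TB drives and an assumed mount point of /mnt/nas:

```shell
# RAID5 for data chunks, RAID1C3 (three copies) for metadata chunks.
# RAID1C3 needs Linux 5.5+ and btrfs-progs 5.5+.
mkfs.btrfs -L nas -d raid5 -m raid1c3 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Mount and verify which chunk profiles were actually allocated.
mount /dev/sda /mnt/nas
btrfs filesystem usage /mnt/nas
```

Keeping metadata at three copies is the hedge against the RAID5/6 horror stories: even if a data stripe is damaged during a rebuild, the filesystem tree itself should survive.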

Software-wise, there have been glitches. BTRFS notoriously complicates the calculation of free space. Several days ago, my array was happily humming along, then ran into free-space issues when I began copying my photo archives onto the system. Apparently this was once upon a time a common BTRFS problem that had been largely corrected, except that there were some issues in kernel 5.5. Fortunately, Ubuntu allows you to install mainline kernels if you’re crazy, so I’ve gone down this route until the fix lands in an officially supported kernel.

Overall the array performs at least as fast as the old XFS setup, if not faster. Lightroom has taken to the NAS quite nicely and so far I’m not noticing any performance loss from storing originals on the NAS. Here’s hoping this setup doesn’t end up being too fragile.

What will I do with the older drives and array? The 4x 4TB array will probably be reformatted in the near future to also use BTRFS. I’m hesitant to fold these into the new array since 1) those drives are older and 2) additional drives would just increase the likelihood that the overall array (i.e., the 4x 4TB drives and the 4x 10TB drives combined) would fail. So I’ll probably segregate that array for certain types of content.

The 2x 2TB drives? Those will be live backup drives, I think, maybe in RAID1. Or maybe not. If they’re straight-up backups, then who cares. That leaves 2x 2TB drives outside of the case; I’m not sure what’ll happen to those.

The dorkness will continue…

Rebooting in the New Decade

New Beginnings
Photo by Thomas Hawk

I used to have a blog, and even my own domain name. I’m not even sure what the domain name was at this point.

At first, I hacked together my own proprietary blog system in PHP. I remember rushing it out near the end of my senior year because it seemed like it was important. I recall it being serviceable but I don’t recall what features it had. And even though it had 100% customer satisfaction, it also had extremely low penetration in the blogging market. To be honest, I bet my little system wasn’t that impressive even though it definitely implemented a handcrafted RSS system. Artisanal even. Unfortunately for the world economy, no VC company brought my nascent company to the masses.

While I was nurturing my little creation, I remember seeing early versions of WordPress and being mildly interested. I was still a programmer back then and valued the ability to say I hand-crafted my blog for all ten visitors a year. I don’t recall if I ever redeployed my blog in WordPress, but I’m fairly sure I did because I remember cobbling together something to dump my posts database. After a while, I stopped posting to the site, especially once social networks came calling for us to join up.

Friendster came out shortly after college. Compared to hosting your own blog, this was far easier to use, and all our friends were already there. It seemed like the perfect idea. But Friendster itself constantly had problems. So on we moved to the next fad, Myspace. That eyesore of a social network was a bustling online community, with bands and celebrities bringing in the traffic. Everyone was friends with Tom. But things would change, and Myspace’s downfall was coming soon. The masses decamped for the cleaner vistas of Facebook. But then, on September 6, 2006, Facebook created the News Feed. I still remember the first day the News Feed popped up and how intrusive it seemed. But at the same time, the voyeur in me couldn’t resist knowing the instant the cute girl sitting in front of me in class liked someone else’s picture. All of this was pre-Farmville and Cambridge Analytica. And over time, Facebook too lost its innocence, to the point where many of us have essentially abandoned our profiles.

With the coming of social media, all across the Internet, communities withered and died. Those that have survived are very different now. LiveJournal is a shadow of its former self. Blogger still exists, but the community there has mostly left. Same for many of the forums from earlier in the century (e.g., General Mayhem, Something Awful). Instead, we spend our time on Twitter, Facebook, and Reddit. The very glue that held together the blogosphere, RSS, is less and less common these days. Twitter, which used to offer an RSS feed, no longer does. Reddit and Hacker News still offer RSS as of this writing, so perhaps there’s hope in this second decade of the 21st century, even though much of the world has abandoned the idea of decentralization.

Google, owned by parent company Alphabet, is by far the biggest media owner in the world and attracted $79.4bn (£61.5bn) in ad revenues in 2016, three times more than the second-largest, Facebook, which pulled in $26.9bn, according to Zenith. The previous year, Alphabet took $67.4bn of ad revenues and Facebook $17.1bn.

https://www.theguardian.com/media/2017/may/02/google-and-facebook-bring-in-one-fifth-of-global-ad-revenue

But Google’s goal of becoming the world’s biggest advertising platform wasn’t what really destroyed the early and innocent web. We did that because we wanted to do what was easy.

This is my attempt to go in the opposite direction and have a place where I can write things down in longer form. And perhaps more. I don’t know how consistent I’ll be with it. But I won’t know if I don’t try, right?