Keyboard Bliss?

As part of the standard deployment of equipment at work, they gave me a traditional, purely flat keyboard. It probably came with the docking station for the admittedly quite nice Lenovo X1 I have to work with. But the standard keyboard is nothing special.

The firm-issued input device set

So I went out and spent some of my own money to at least improve the keyboard. The mouse might need swapping soon too because the mouse wheel is acting a little wonky, but that’s a later problem. But where to turn for the new keyboard?

Given the amount of typing I do at work, I decided that I wanted an ergonomic keyboard of some type instead of the straight keyboards I use at home and at work. I tried out Logitech’s new Ergo K860 keyboard at Best Buy, but I didn’t like the amount of travel the keys had. Not that I’m some mechanical keyboard connoisseur, but I do want a little more travel than that if I’m typing for extended periods. Although I’m lucky enough to live near several Microcenter stores, I did not go try out any other ergonomic keyboards, because right next to the Logitech was a cheap and pretty acceptable solution: Microsoft’s Surface Ergonomic Keyboard.

It allegedly has a slightly different curve and annoyingly moves the Menu button over to the left to make room for an Office button and an emoji button, which are useless in my work configuration because I cannot install the special software needed to enable them. I knew that going in, and it has been an annoying change. But after three weeks with the keyboard, I think this periodic annoyance is worth the improved typing comfort. And the angle is appropriate for both sitting and standing typing positions; I’ve found myself standing to work more often as a result.

Not bad for about $45.

Aliens and Getting Old

Earlier this week, Seti@Home announced that they would no longer distribute further work (forum link). It has been a long time since I ran any distributed computing client, from the early ones attempting to crack a 56-bit key (completed in 1997) to Seti@Home for various groups. I think I stopped running these programs around the time I had to pay my own power bills. Coincidence, I’m sure.

I certainly wasn’t as passionate as some who would overclock their main systems, run their systems 24/7 to grind out new blocks, or cobble together multiple junked systems to create a working one, but they were still fun projects to run for a period, particularly when my Pentium Pro was the fastest system on campus. Thanks, Dad, for encouraging the nerdiness and/or for fixing my nerd cred after I came to school with just a 486DX2 (circa 1996). It all harks back to a simpler time when you had to run a wire down the hall to establish a network, and had to remember the SLI’s patrol schedule so that you could pull the cable back into the room before they noticed!

Expanding the NAS System

hard drives photo

Well, it looks like I’ve got some new drives coming in for my home-built NAS. This will significantly increase the amount of space available and also give me the chance to geek out a little and play with a newer setup.

Why?

Over the years, I’ve accumulated a bunch of digital content. People sometimes use a bunch of external drives, but I’ve always preferred to centralize, and for many years I’ve used a NAS for that.

Since law school (2006), I’ve used a Linux-based NAS. This was after a catastrophic data loss back when I was living in Austin, when my drives were configured as simple JBOD. The total available space has of course increased over time, but I’ve consistently used mdraid and XFS as my foundation, with a few years where I incorporated LVM.
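
For context, that older foundation looked roughly like the sketch below. This is not a record of my actual commands, just an illustration of the mdraid + LVM + XFS stack; the device names, volume group name, and mount point are placeholders.

    # A rough illustration of the mdraid + LVM + XFS stack I relied on for years.
    # Device names, volume names, and the mount point are placeholders.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

    # Software RAID5 across four drives...
    run(["mdadm", "--create", "/dev/md0", "--level=5",
         "--raid-devices=4", *DISKS])
    # ...optionally wrapped in LVM so volumes can be resized later...
    run(["pvcreate", "/dev/md0"])
    run(["vgcreate", "nas", "/dev/md0"])
    run(["lvcreate", "-l", "100%FREE", "-n", "storage", "nas"])
    # ...and XFS on top.
    run(["mkfs.xfs", "/dev/nas/storage"])
    run(["mount", "/dev/nas/storage", "/mnt/nas"])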

In the last few years, I’ve also gotten more into photography and now regularly have several gigabytes of photos to pull off of memory cards every week or three. On top of that, I’ve been getting into film as well. Oftentimes, much of the output from the digital cameras (Nikon D850 and Leica Q2) and from my Epson V600 scanner isn’t worth keeping, but there are middling keepers that I’ve found can be good material for use later.

I still use custom desktop computers I put together myself. The current system is seven years old but still performs admirably. It has always used an SSD, and that has allowed for significantly greater longevity. Not long ago, I was getting frustrated with its speed. This pain was felt most acutely when importing pictures into Lightroom, which is a taxing process no matter the computer. But then Adobe followed through with long-promised performance updates, and performance has become more acceptable. I also recently got an Nvidia GTX 1660 that was a significant upgrade over my older Radeon HD 7850. So processing-power-wise, the current setup is sufficient. Don’t get me wrong, Ryzen processors perform substantially better than the Intel i5-4570 Haswell I’m running, and I admit we are living in the system’s twilight years. But that’s not the pressure point.

Rather, it is all these pictures. I’ve already upgraded the system’s SSD to provide more space for high-speed local storage. No matter what I do, Lightroom demands that the catalog and the previews live on the local system, so that piece will always remain. But with a NAS available, and with both the NAS and my desktop wired to the router, the original RAW files need not stay on the desktop.

The goal of this upgrade was therefore to increase the available drive space and allow me to rely on the NAS as a central repository.

How?

Over the years, I’ve become very familiar with the capabilities of XFS along with its limitations. It handles small files better now and even deletes files quickly (yes, that was once a problem). But one fundamental issue with the design of XFS is that it provides no protection against bitrot. Since I run a RAID, I practice good data hygiene and scrub regularly. We are, however, now in the 21st century and we have better tools.

For example, now we have filesystems like ZFS and BTRFS that checksum the data while simplifying the software stack so that LVM is no longer needed. I’ve always been curious about ZFS and that filesystem’s alleged ability to avoid bitrot.

But of course, ZFS’s license is incompatible with the Linux kernel’s GPL, and Linus seems to dislike it as well. To be fair, ZFS on Linux is quite successful, and Ubuntu (my favored distro) officially supports ZFS, so there’s no practical reason why running ZFS on my NAS should be difficult.

In comparison, BTRFS is built into the Linux kernel. BTRFS is certainly not as mature as ZFS, which was developed during the final years of Sun Microsystems. Stories of data corruption and data loss haunt the Internet, particularly for RAID5/6. But BTRFS has a killer capability: it performs RAID at the chunk level instead of the device level. This allows devices of different sizes to be used together in a single filesystem. For a home NAS, this is an appealing possibility.
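
To illustrate why that matters, here’s a back-of-the-envelope sketch of how much usable space a mirrored BTRFS profile can squeeze out of mismatched drives. The sizes are made-up numbers, and the formula is a rough approximation of the chunk allocator’s behavior, not an exact calculation.

    # A rough estimate of usable space under a BTRFS RAID1-style profile, where
    # every chunk is mirrored onto two different devices. The sizes are just
    # illustrative numbers, not my actual drives.
    def raid1_usable_tb(sizes_tb):
        total = sum(sizes_tb)
        largest = max(sizes_tb)
        rest = total - largest
        # If one drive dwarfs the rest, only what the others can mirror is
        # usable; otherwise roughly half of the total capacity is.
        return rest if largest > rest else total / 2

    print(raid1_usable_tb([10, 10, 10, 10]))  # -> 20.0
    print(raid1_usable_tb([10, 4, 4, 2]))     # -> 10.0

A traditional device-level RAID1, by contrast, is limited by its smallest member, so mismatched drives waste far more space.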

Did it Work?

Over the course of the last week, I’ve put the drives in and gotten my system back online. Last weekend I ran some diagnostics to confirm that the drives were the right size, and then shucked them. Although some people run a full test where they read and write every block of the drive, I’m too impatient for that, and my planned migration strategy would do a significant amount of that anyway. Don’t forget, you have to disable the 3.3V pin on these shucked drives with some Kapton tape or another technique.
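
For what it’s worth, my “diagnostics” were nothing fancy; something along the lines of the sketch below, which just reads the reported capacity and the SMART health summary. The device names are placeholders, and a USB enclosure may need an extra smartctl flag (such as -d sat) before it will pass SMART commands through.

    # A minimal sanity check before shucking: confirm the reported capacity and
    # glance at SMART health. Device names are placeholders for whatever the
    # USB bridges show up as.
    import subprocess

    def reported_size_tb(dev):
        # lsblk -b -d -n -o SIZE prints the whole-device size in bytes.
        out = subprocess.run(["lsblk", "-b", "-d", "-n", "-o", "SIZE", dev],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip()) / 1e12

    def smart_health(dev):
        # smartctl -H prints the overall health assessment; some USB bridges
        # need "-d sat" added for this to work.
        out = subprocess.run(["smartctl", "-H", dev],
                             capture_output=True, text=True)
        return out.stdout

    for dev in ["/dev/sdf", "/dev/sdg"]:
        print(dev, f"{reported_size_tb(dev):.1f} TB")
        print(smart_health(dev))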

I pulled one array of drives out of my Fractal Design Node 804 and replaced the 2TB Western Digital Red drives with the new 10TB Western Digital White drives. While I was in there, I rewired and rearranged some things. The SSD now sits up at the front of the case in a built-in holder. I put two of the old 2TB drives at the bottom of the case in the motherboard bay. These two drives will likely become built-in backups for critical data. Given the positioning of the drives, I had to go buy some left-angle connectors so that I could wire them up. But that’s a small price to pay to have these two drives as available storage.

I’ve set up the new array (4x 10TB) as a BTRFS volume using RAID5 for data and RAID1C3 for metadata. One reason I took the plunge at this time was that Linux 5.5 merged BTRFS support for RAID1C3. Given this configuration, my system can survive the loss of one drive and, should there be an issue during the rebuild, a further error in the metadata. This should be sufficient to rescue data if failure comes for me. For the older array that’s still online (4x 4TB), I’ve migrated the data over to the new array. How did I get the data off my other array (4x 2TB)? I used one of the new 10TB drives to back up the information from that array. After that, I shucked the drives and installed them in the system. I built the BTRFS array using the other three drives and transferred data from the fourth drive over the last few days. This was a good way to test the system and make sure things worked properly.
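
For anyone curious, the build boils down to a handful of btrfs commands. The sketch below captures the shape of it, with placeholder device names and mount point rather than my exact invocations:

    # A rough sketch of the array build: three drives first, then the fourth is
    # added and rebalanced once its interim backup has been copied off.
    # Device names and the mount point are placeholders.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    MOUNT = "/mnt/nas"
    FIRST_THREE = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
    FOURTH = "/dev/sde"  # the drive that held the interim backup

    # RAID5 for data, RAID1C3 for metadata (which needs at least three devices).
    run(["mkfs.btrfs", "-d", "raid5", "-m", "raid1c3", *FIRST_THREE])
    # Mounting any member device brings up the whole multi-device filesystem.
    run(["mount", FIRST_THREE[0], MOUNT])

    # ... copy the backed-up data into MOUNT here ...

    # Once the fourth drive is empty, fold it in and rebalance so existing
    # chunks get spread across all four devices.
    run(["btrfs", "device", "add", FOURTH, MOUNT])
    run(["btrfs", "balance", "start", "-dconvert=raid5", "-mconvert=raid1c3", MOUNT])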

Software-wise, there have been glitches. BTRFS notoriously complicates the calculation of free space. Several days ago, my array was happily humming along, then ran into free-space issues when I began copying my photo archives onto the system. Apparently this was once upon a time a common BTRFS problem that had been largely corrected, except that some issues crept back in with kernel 5.5. Fortunately, Ubuntu allows you to install mainline kernels if you’re crazy, so I’ve gone down that route until a fixed kernel is officially supported.
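
The numbers that matter in these situations come from btrfs itself rather than a plain df. Here’s roughly how I keep an eye on it; the mount point is again a placeholder:

    # Compare what a plain statvfs()/df view reports against what btrfs itself
    # says, since the latter breaks space into allocated vs. unallocated chunks.
    import os
    import subprocess

    MOUNT = "/mnt/nas"

    st = os.statvfs(MOUNT)
    print(f"statvfs free: {st.f_bavail * st.f_frsize / 2**30:.1f} GiB")

    # "Device unallocated" and "Free (estimated)" are the figures that actually
    # matter when chunk allocation gets tight.
    subprocess.run(["btrfs", "filesystem", "usage", "-g", MOUNT], check=True)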

Overall the array performs at least as fast as the old XFS setup, if not faster. Lightroom has taken to the NAS quite nicely and so far I’m not noticing any performance loss from storing originals on the NAS. Here’s hoping this setup doesn’t end up being too fragile.

What will I do with the older drives and array? The 4x 4TB array will probably be reformatted in the near future to also use BTRFS. I’m hesitant to fold those drives into the new array since 1) they are older and 2) additional drives would just increase the likelihood that the overall array (i.e., the 4x 4TB drives plus the 4x 10TB drives) would suffer a failure. So I’ll probably keep that array separate and dedicate it to certain types of content.

The two 2TB drives in the case? Those will become live backup drives, I think, maybe in RAID1. Or maybe not; if they’re straight-up backups, then who cares. That leaves the two 2TB drives that are outside of the case. I’m not sure what’ll happen to those.

The dorkness will continue…