A Eulogy to the Electric Objects E01

Electric Objects is discontinued.

The day finally arrived. Years after the demise of Electric Objects as a company, my E01 died. This is my remembrance of a long-gone hardware startup’s quirky device, one that grew into a treasured window on life.

I first heard about the E01 on Kickstarter, back when crowdfunding was relatively novel and there were more interesting hardware projects. The idea was good: use a large screen to display digital art. As the project’s marketing said, the E01 was “a computer designed to bring the beauty of the Internet into your home.” Indeed, Electric Objects wanted to create a new way to experience art in one’s home.

The sleek and clean look of the hardware embodied what I’d wanted for a while, and it was far simpler than anything I had dreamed up. I’d often imagined making something similar with a Raspberry Pi, but the software didn’t exist yet, so that would need building, and there wasn’t a way I could make the hardware look this good. Here, someone had done all the work needed and created a device whose unusual aspect ratio made for an interesting presentation of images. So, I happily pledged for my device at the Kickstarter price.

The discount deal from Kickstarter.

Electric Objects dutifully delivered on the E01. Looking back, this was a small miracle in and of itself. And now I had a nice app I could use to easily display a library of digital art on a sleek screen in my study. The system could also display my own photography. I was a very happy camper, and things seemed positive overall. The company was successful enough to create a successor, the E02. I didn’t upgrade because the E01 served my purposes perfectly, but the creation of a successor device suggested a viable business existed, at least to this outside observer.

But suddenly, in 2017, Electric Objects shut its hardware business down and sold the app to Giphy. The company would become a cautionary tale about the risks associated with hardware startups. And eventually, Jack Levine, the founder, reminisced about why Electric Objects failed. But the death of the company didn’t kill the device. The E01 continued to serve its purpose. But how long could this device continue to work?

The last day of service, according to Reddit, was June 28, 2023.

It was remarkable that the service lasted this long, and that the hurdles along the way didn’t kill it sooner.

The first hurdle was the lack of app updates after the company’s death. iOS developers need to keep up with Apple’s API changes or users lose access to the app in later iOS versions. Fortunately, throughout the E01’s life, Apple allowed the old iPad to keep receiving photo updates. But the app itself ran afoul of Apple’s guidelines, and whatever changed in iOS 13 prevented the app from running there. In short order, I needed to keep an old iPad around running iOS 12 to have a working copy of the Electric Objects app. This was fine, until the iPad decided to free up space and remove the app. Since the company no longer existed, the app was gone from the App Store and couldn’t be re-downloaded. I eventually found a way to reload it onto the old iPad, and usage was restored. Later, I discovered the website itself included nearly all the app’s functionality.

Other hurdles were outside of my realm of control. There’d be intermittent issues with the server infrastructure, which would cause the screen to freeze or fail to load new content. Indeed, before its final days, my E01 had experienced glitches where the images didn’t rotate. But inevitably, some helpful soul would fix things and restore service. I like to imagine that someone had a computer hiding under their desk hosting all of the content. But we may never know.

Aside from software issues, the hardware itself struggled against time. Last year, the screen started showing some strange flickering in one corner, and then the device refused to boot. Several hours of research later, I placed an Amazon order for a replacement AC adapter; someone had explained that the adapter originally included with the device could fail and cause a boot loop. The swap gave the E01 another span of faithful service.

I wasn’t the only one still loving the device. Through the years, a small subreddit of other E01 and E02 users coalesced and provided solutions to the problems I encountered. Although the community was small, we persisted.

With its recent final death, I’ve come to realize how important the E01 had become. I initially displayed images curated by Electric Objects: interesting digital art, some with subtle movement, that used the digital frame in a distinctive way. This made for a nice conversation piece when people came over. But life changed. My new wife lamented the use of space for a display of “weird pictures,” so pictures of us started appearing. Thank you, Apple, for continuing to allow old iOS versions to receive picture updates. This calmed her purging urges.

Then, with the arrival of my daughter, the pictures became pictures of our family. Instead of being a lamented use of space, the E01 became a favorite piece displaying pictures of her growth, and its unique aspect ratio highlighted parts of that library in a different and interesting way. As she grew up, my daughter came to recognize pictures of herself at a much younger age.

Now, with the death of Electric Objects, our family’s digital picture frame is gone.

Goodbye, old friend, my E01. It was great while it lasted. Thank you, Jack Levine, and the whole Electric Objects team, for bringing the E01 to the world. Thank you to the anonymous people who kept the infrastructure running. And thank you, Apple, for continuing to give iOS 12 photo updates.

Extreme Self-Hosting Because What is “Always Free” Anyway?

This is a strange timeline. Just a little over a year ago, I wrote about how I had shifted some personal services onto Oracle’s Cloud, which had the most generous free tier available. I did this because self-hosting my own infrastructure seemed silly.

A few days ago, while filtering out spam, I noticed an email from Oracle deeming my two VM instances idle because, well, I’ll just show you.

Oracle's new idling limitations, the impetus for my latest self-hosting adventure.
Oracle’s new terms. (February 4, 2023)

Eesh. Well, Oracle is often referred to as the IT world’s Darth Vader after suing Google over Android, so this isn’t surprising. Fortunately, this portrayal allows me to use this meme.

I am altering the deal. Pray I do not alter it any further.
Oracle’s standard negotiation strategy.

My recourse? Adding a credit card and switching my account from a truly free tier to one where Oracle can bill me if/when I exceed the free usage limitations. OK, that’s fine, but I don’t particularly like entities holding my credit card information when I’m just trying to use their free service. And as you, the reader, might appreciate, avoiding Oracle’s definition of idling requires something that chews up more CPU or network utilization than this little WordPress blog and the publicly facing personal services I run. Certainly, WordPress is a large and complicated CMS in this day and age, but the amount of web traffic required to meet either the CPU or network utilization threshold vastly exceeds anything I’ve ever seen. Shocking, right?

On one hand, this policy is understandable. Reclaiming instances in this manner lets Oracle reallocate my rounding-error of a workload throughout whatever data center it lives in, and there are good reasons for that. On occasion, Oracle has moved my instances onto different hardware; downtime is acceptable at this price, so this never caused any concern. Indeed, that’s just how hosting works sometimes, and it’s one reason I didn’t start with self-hosting. Hardware continues to break despite our best efforts, and since my self-hosting needs are so light, it didn’t make sense to use my own hardware.

The wrinkle was that not all instances were equally easy to obtain. Oracle’s Ampere A1 instances (VM.Standard.A1.Flex) were substantially faster than the AMD instances (VM.Standard.E2.1.Micro). Substantially faster in that I perceived the speed difference just serving this website! Being superior, these A1 instances are difficult to obtain. Indeed, I couldn’t get one when I first signed up in November 2021 and didn’t have one when I blogged about it. Only in March 2022 did I luck into one while playing around with Oracle’s console. After noticing the speed difference just moving around the system, I migrated most of my compatible services there. My speed indicators jumped immediately, boosting my PageRank and bringing the riches of readership that I now enjoy.

Even using the much slower AMD instances, I can’t hit Oracle’s thresholds. I’m using a well-liked and likely very efficient software stack to serve traffic, and I filter that traffic through Cloudflare. Even if I removed Cloudflare’s services, the traffic wouldn’t push even a poorly thought-out and inefficient software stack past Oracle’s thresholds. And to be frank, tinkering on the AMD instances wasn’t fun. They were substantially slower than anything I’ve used in recent memory, including an HP ProLiant MicroServer N40L.

The answer, after some noodling, was simply to move those services back onto my own hardware. Yes, extreme self-hosting. I already host some personal stuff at home using a Celeron G3900 server living inside a Fractal Design Node 804 case. Here too, the CPU load isn’t too high since it mostly maintains the spinning rust the case houses. But the trick is that my ISP most likely filters or blocks the traffic on ports 80 and 443 to discourage hosting. And these publicly facing services are web ones, so those are key.

Enter Cloudflare Tunnels, which have become all the rage in some subreddits. No publicly routable IP address is necessary; instead, I run a daemon on the system that creates an outbound connection to Cloudflare’s infrastructure. Here’s a picture from Cloudflare themselves illustrating this configuration.

Cloudflare's handy diagram showing how Cloudflare Tunnel facilitates self-hosting.
Cloudflare’s helpful diagram.

Yes, that means I’m i) executing someone else’s code on my system at the system level to ii) explicitly introduce a middleman between myself and my traffic. But this isn’t so different from trusting any other software on my server (e.g., Docker in general, or my ad-filtering DNS server), and for my public-facing infrastructure I’ve already employed Cloudflare to filter out malicious traffic. So employing Cloudflare’s tunnel only slightly changes the security profile while giving the benefit of self-hosting these small and, apparently, unwanted workloads.

So over the course of a few days, I looked into what’s necessary to run Cloudflare’s daemon on my hardware. I added their Docker image to my compose file, made some tweaks, and fired things up. After fiddling with a few settings, everything started right up like it did before. Caddy’s configuration didn’t change at all because of the migration. Indeed, none of the containers needed tweaking aside from shifting everything from the default network to a specific network associated with the Cloudflare daemon. And by employing Docker, I have at least some certainty that should someone find a way into the system (e.g., through a supply chain attack on Cloudflare), access is restricted by the Docker engine. And I’m still running other Docker containers separately to host services that aren’t public facing.
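For the curious, the setup boils down to something like the sketch below, shown with the plain Docker CLI rather than my actual compose file. The network name, container names, and the TUNNEL_TOKEN variable are all illustrative; the token itself comes from Cloudflare’s dashboard when you create a tunnel.

```bash
# Dedicated Docker network so the tunnel only reaches the public-facing containers (name is illustrative)
docker network create public-web

# Cloudflare's tunnel daemon; it dials out to Cloudflare, so no inbound ports need to be opened or forwarded
docker run -d --name cloudflared --restart unless-stopped \
  --network public-web \
  cloudflare/cloudflared:latest tunnel --no-autoupdate run --token "$TUNNEL_TOKEN"

# Attach the existing web server container (Caddy, in my case) to the same network
docker network connect public-web caddy
```

On the Cloudflare side, the tunnel’s public hostname can then point at the web server by its container name (something like http://caddy:80), so the web server itself never needs to know the tunnel exists.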

So instead of relying on a cloud provider to host the infrastructure, I’ve brought it in house. Decentralization is the theme for 2023 it seems, and self-hosting is the way forward.

RAID Isn’t a Backup and Shucking Has Risks

A saying so old that it has its own website and is heavily repeated in forums filled with similarly interested individuals. And it has even been meme’d.

RAID isn't backup meme.
(not my work, just one of many immediate results)

So last month, as I was processing more pictures from the latest play date, I noticed several pictures in the main archive weren’t loading. Strange, but I didn’t pay it any mind. See, my current Lightroom setup is a repurposed ThinkPad T470 from several years ago that just happens to have a ton of RAM. But the little i5 inside? It is a poor little 6200U: Skylake, but low voltage and from 2015. And the thermals on the Lenovo aren’t made for Lightroom; the poor little thing overheats if I run it with the lid closed. This all happened because my desktop started glitching badly and refusing to boot. It is either the memory or the video card, and I don’t have the spare hardware to isolate which. And my 2014-vintage Mac Mini is similarly limited processing-wise, and even more so RAM-wise.

The crazy part is that, of all these old computers, it was my desktop with a discrete GPU that couldn’t drive the Dell U3818DW at its native resolution, necessitating a GPU upgrade a few years ago. But to be honest, you can tell the laptop is hurting (mostly through graphics artifacts and glitching). The Mac Mini, however, somehow pumps out the video for both the 38″ Dell and my 24″ Asus, enabling a gigantic dual-monitor setup that I hope to continue once I upgrade.

But back to the RAID issue.

As covered a few years ago, I migrated to a btrfs setup. One big motivation was to guard against bitrot. XFS certainly didn’t have that capability, and over the years I had noticed a few glitches in my data. So even though I didn’t invest in ECC memory, my thinking was that at least the filesystem should know about such corruption and, if possible, fix it.

So as the poor little Lenovo tries to render 1:1 previews for the latest batch of pictures, I notice that a few of the older pictures aren’t loading. I move most of the pictures over to the NAS to minimize what’s stored on the laptop alone, so sometimes the culprit is just the WiFi. But not this time.

Indeed, later on, I look in the NAS logs and see CRC errors. Lots of them. Some digging around in the btrfs device stats shows one of the drives in my RAID5 array racking up a ton of corruption errors. So I immediately run a scrub on the array to try to fix the data. Since this btrfs volume is set up with RAID5 data and RAID1C3 metadata, I wasn’t too concerned about the array crashing. But I wanted to get to the bottom of this.
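For anyone following along at home, the diagnosis and the first repair pass amount to a couple of commands. This is just a sketch; the mount point below is a placeholder for wherever your array lives.

```bash
# Per-device error counters: write, read, flush, corruption, and generation errors
sudo btrfs device stats /mnt/nas

# Scrub the mounted filesystem in the foreground (-B) and print per-device stats (-d) when finished
sudo btrfs scrub start -Bd /mnt/nas
```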

Eventually, the scrub hits the bad file and fixes it up. But as the scrub keeps running, it finds more and more errors, which is a bit unsettling and makes me wonder how stable the array really is. A few more scrubs later, I notice the errors are transient as well: sometimes the drive spits out tons of errors, and sometimes it hums along just fine. So, because the volume is configured with RAID5 data, I ran a scrub on each of the individual devices. In general, RAID5 on btrfs comes with significant warnings because of a variety of issues documented on the development page, and scrubbing the individual devices in a RAID5 array is one of the guidelines from the developers. And who am I to question them? But if you too are venturing down the btrfs RAID5 path, read through that email from the developers because it sets the ground rules and foundational expectations.
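Following that guidance, a per-device scrub looks roughly like the sketch below. The device names are placeholders for the members of my array, and each device gets scrubbed one at a time.

```bash
# Scrub each member of the RAID5 array individually; -B keeps each scrub in the foreground
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
  sudo btrfs scrub start -B "$dev"
done

# Review the per-device results afterwards
sudo btrfs scrub status -d /mnt/nas
```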

Hrmm, more errors in the array, but all concentrated on one drive. Well, that’s good, I guess. But then, weirdly, another one of the arrays in the NAS starts glitching. That’s odd too. By now, I’m having flashbacks to my desktop glitching out with weird reboot loops. So I dig the server out from its home at the bottom of the shelf and give it a good vacuum before I go in and take a look at what’s going on. Since the errors are concentrated on one drive, I pull the problematic drive out and immediately see the problem: a loose piece of tape on the SATA connector.

Huh? Tape? What?

You see, the 10TB drives I rebuilt the array with were shucked. What’s that? Well, when you’re a hard drive manufacturer, you want to charge extra for those dorks trying to build a home NAS. You slap some extra warranty on the drive, and then you juice the price. But you still have to sell to the unwashed masses, so you take some drives, throw them in a plastic case, and sell those. And because the unwashed masses have highly elastic demand, you gotta cut that price to meet this month’s sales targets. So the frugal NAS builder buys those cheap external drives, shucks the cases, and uses the internals in the NAS. But WD, particularly for their white-label drives, decided to implement something slightly different.

The newer SATA specification includes the ability to disable power to the hard drive. More particularly, this changes the behavior of the third pin on the power connector, which was previously tied to pins P1 and P2, according to Tom’s Hardware. And WD’s own documentation confirms this new aspect of the SATA standard. But fortunately, the Internet has found the fix: a piece of tape.

When I pulled off the SATA power connector, the tape was wrinkled and not really covering the third pin properly. After a few minutes, I reapplied the tape and slapped the drive back into the system. And while I was in there, I checked all the cables on all the drives. The SFF-8087 to SATA cables I got when I built the server were barely long enough, so they’re under a bit of tension.

After closing up and booting back up, all the drives came online, which was a good sign. Afterwards, I ran a series of scrubs on all the arrays. No more errors! And Lightroom is happily pulling up pictures from the NAS. Just slowly, very slowly.

It’s been about a month now since that crop of errors and things have worked smoothly since. Knock on wood. Glad these old drives are still working well and have plenty of space for more play date pictures!