A Eulogy to the Electric Objects E01

Electric Objects is discontinued.

The day finally arrived. Years after the demise of Electric Objects as a company, my E01 died. This is my remembrance of a quirky device from a long-gone hardware startup that grew into a treasured window on life.

I first heard about the E01 on Kickstarter, back when crowdfunding was relatively novel and there were more interesting hardware projects around. The idea was good: using a large screen to display digital art. As their marketing for the project said, the E01 was “a computer designed to bring the beauty of the Internet into your home.” Indeed, Electric Objects wanted to create a new way to experience art in one’s home.

The sleek and clean look of the hardware embodied what I’d wanted for a while. And this was far simpler than what I had dreamed of. I’d often imagined making something similar with a Raspberry Pi. But the software didn’t exist yet, so that would need building. And there wasn’t a way I could make the hardware look this good. Here, someone had done all the work and created a device whose different aspect ratio made for interesting presentation of images. So, I happily pledged for my device at the Kickstarter price.

The discount deal from Kickstarter.

Electric Objects dutifully delivered on the E01. Looking back, this was a small miracle in and of itself. And now I had a nice app I could use to easily display a library of digital art on a sleek screen in my study. And the system included the ability to display my own photography. I was a very happy camper. And things seemed positive overall. The company was successful enough to create a successor, the E02. I didn’t upgrade because the E01 served my purposes perfectly. But the creation of a successor device suggested a viable business existed, at least to this outside observer.

But suddenly, in 2017, Electric Objects shut its hardware business down and sold the app to Giphy. The company would become a cautionary tale about the risks associated with hardware startups. And eventually, Jack Levine, the founder, reminisced about why Electric Objects failed. But the death of the company didn’t kill the device. The E01 continued to serve its purpose. But how long could this device continue to work?

The last day of service, according to Reddit, was June 28, 2023.

It was remarkable the service lasted this long. And it was remarkable that the hurdles along the way didn’t kill the service sooner.

The first hurdle was the lack of app updates after the company’s death. iOS application developers need to keep up with Apple’s API changes, or users won’t have access to the app in later iOS versions. Fortunately, throughout the E01’s life, Apple allowed the old iPad to continue receiving photo updates. But the app itself ran afoul of Apple’s guidelines: whatever changed in iOS 13 prevented the app from running there, so in short order I needed to keep an old iPad around running iOS 12 to have a working copy of the Electric Objects app. This was fine, until my iPad decided to free up space and remove the E01 app. Since the company didn’t exist anymore, the App Store no longer offered the app for download. I eventually found a way to reload the app onto the old iPad, and usage was restored. Later, I discovered the website itself included nearly all the app’s functionality.

Other hurdles were outside of my realm of control. There’d be intermittent issues with the server infrastructure, which would cause the screen to freeze or fail to load new content. Indeed, before its final days, my E01 had experienced glitches where the images didn’t rotate. But inevitably, some helpful soul would fix things and restore service. I like to imagine that someone had a computer hiding under their desk hosting all of the content. But we may never know.

Aside from software issues, the hardware itself struggled against time. Last year, my screen started experiencing some strange flickering in the corner, and then refused to boot. Several hours of research later, I placed an Amazon order for a replacement AC adapter. Someone had explained the AC adapter originally included with the device could fail and result in a boot loop. And this gave the E01 another span of faithful service.

I wasn’t the only one still loving the device. Through the years, a small subreddit of other E01 and E02 users coalesced and provided solutions to the problems I encountered. Although the community was small, we persisted.

With its recent final death, I’ve come to realize how important the E01 had become. I initially displayed images curated by Electric Objects. They included interesting digital art, some with subtle movement details, that utilized the digital frame in an interesting way. This made for a nice conversation piece when people came over. But life changed. My new wife lamented the use of space for a display of “weird pictures,” so pictures of us started appearing. Thank you, Apple, for continuing to allow old iOS versions to receive picture updates. This calmed her purging urges.

Then, with the arrival of my daughter, the pictures became pictures of our family. Instead of being a lamented use of space, the E01 became a favorite piece, displaying pictures of her growth. And here, the unique aspect ratio of the E01 highlighted that library in a different and interesting way. And as she grew up, my daughter came to recognize pictures of herself at a much younger age.

Now, with the death of Electric Objects, our family’s digital picture frame is gone.

Goodbye old friend, my E01. It was great while it lasted. Thank you Jack Levine along with the whole Electric Objects team for bringing the E01 to the world. Thank you to the anonymous people who kept the infrastructure running. And thank you Apple for continuing to give iOS 12 photo updates.

Extreme Self-Hosting Because What is “Always Free” Anyway?

This is a strange timeline. Just a little bit over a year ago, I wrote about how I had recently shifted some personal services onto Oracle’s Cloud, which had the most generous free tier available. I did this because self-hosting my own infrastructure seemed silly.

A few days ago, while filtering out spam, I noticed an email from Oracle where they deemed my two VM instances as idle because, well, I’ll just show you.

Oracle's new idling limitations, the impetus for my latest self-hosting adventure.
Oracle’s new terms. (February 4, 2023)

Eesh. Well, Oracle is often referred to as the IT world’s Darth Vader after suing Google over Android, so this isn’t surprising. Fortunately, this portrayal allows me to use this meme.

I am altering the deal. Pray I do not alter it any further.
Oracle’s standard negotiation strategy.

My recourse? Adding a credit card and switching my account from a truly free tier to one where Oracle can bill me if/when I exceed the free usage limitations. OK, that’s fine, but I don’t particularly like entities holding my credit card information when I’m just trying to use their free service. And as you, the reader, might appreciate, avoiding Oracle’s definition of idling requires something to chew up more CPU or network utilization than this little WordPress blog and the publicly facing personal services I run. Certainly WordPress is a large and complicated CMS in this day and age, but the amount of web traffic required to meet either the CPU or the network utilization threshold vastly exceeds anything I’ve ever seen. Shocking, right?

On one hand, this policy is understandable. Reclaiming the instances in this manner allows for reallocation of my rounding error of a workload throughout whatever data center they live in, and there are good reasons for this. And on occasion, Oracle has moved my instances onto different hardware. Downtime is acceptable at this price, so this never caused any concern. Indeed, that’s just how hosting works sometimes. This is one reason I didn’t start with self-hosting: hardware continues to break despite our best efforts, and since my self-hosting needs are so light, it didn’t make sense to use my own hardware.

The wrinkle was that not all instances were equally easy to obtain. Oracle’s Ampere A1 instances (VM.Standard.A1.Flex) were substantially faster than the AMD instances (VM.Standard.E2.1.Micro). Substantially faster in that I could perceive the speed difference just serving this website! Being superior, the A1 instances are difficult to obtain. Indeed, I couldn’t get one when I first signed up in November 2021 and still didn’t have one when I blogged about it. Only in March 2022 did I luck into one while playing around with Oracle’s console. After noticing the speed difference just moving around the system, I migrated most of my compatible services there. My speed indicators jumped immediately, boosting my PageRank and bringing the riches of readership that I now enjoy.

Even using the much slower AMD instances, I can’t hit Oracle’s thresholds. I’m using a well-liked and likely very efficient software stack to serve traffic, and have taken the precaution of filtering traffic through Cloudflare. Even if I removed their services, the traffic wouldn’t cause even a poorly thought-out and inefficient software stack to meet Oracle’s thresholds. And to be frank, tinkering on the AMD instances wasn’t fun. They were substantially slower than anything I’ve used in recent memory, including an HP ProLiant MicroServer N40L.

The answer, after some noodling, was simply to move those services back onto my own hardware. Yes, extreme self-hosting. I already host some personal stuff at home using a Celeron G3900 server living inside a Fractal Design Node 804 case. Here too, the CPU load isn’t too high, since it mostly maintains the spinning rust the case houses. But the trick is that my ISP most likely filters or blocks traffic on ports 80 and 443 to discourage hosting. And these publicly facing services are web services, so those ports are key.

Enter Cloudflare’s Tunnels, which have become all the rage in some subreddits. No publicly routable IP address is necessary; instead, I run a daemon on the system that creates an outbound connection to Cloudflare’s infrastructure. Here’s a picture from Cloudflare themselves illustrating this configuration.

Cloudflare's handy diagram showing how Cloudflare Tunnel facilitates self-hosting.
Cloudflare’s helpful diagram.

Yes, that means I’m i) executing someone else’s code on my system at the system level to ii) explicitly introduce a middleman between myself and my traffic. But this isn’t different from trusting any other software on my server (e.g., Docker in general, my ad-filtering DNS server), and for my public-facing infrastructure, I’ve already employed Cloudflare to filter out malicious traffic. So employing Cloudflare’s tunnel only slightly changes the security profile while giving the benefit of self-hosting these small, and apparently, unwanted workloads.

So over the course of a few days, I looked into what was necessary to run Cloudflare’s daemon on my hardware. I added their Docker image to my compose file, made some tweaks, and fired things up. After fiddling with a few settings, everything started right up like it did before. Caddy’s configuration didn’t change at all because of the migration. Indeed, none of the containers needed tweaking aside from shifting everything from the default network to a specific network associated with the Cloudflare daemon. And by employing Docker, at least I have some certainty that should someone find a way into the system (e.g., through a supply chain attack on Cloudflare), access is restricted by the Docker engine. And I’m still running other Docker containers separately to host services that aren’t public facing.
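For the curious, the compose change really was small. Here’s a minimal sketch of what it can look like; the service and network names are my own illustrative choices, and the tunnel token comes from creating a tunnel in Cloudflare’s Zero Trust dashboard:

```yaml
# Sketch only: names are illustrative; create the tunnel (and its token)
# in Cloudflare's Zero Trust dashboard first.
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}  # outbound-only; no inbound ports opened
    restart: unless-stopped
    networks:
      - tunnel
  caddy:
    image: caddy:latest
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    networks:
      - tunnel  # cloudflared reaches Caddy by service name, e.g. http://caddy:80
networks:
  tunnel:
```

The tunnel’s public hostname then points at the Caddy service, and everything else stays tucked behind Docker’s networking.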

So instead of relying on a cloud provider to host the infrastructure, I’ve brought it in house. Decentralization is the theme for 2023 it seems, and self-hosting is the way forward.

RAID Isn’t a Backup and Shucking Has Risks

A saying so old that it has its own website and is heavily mentioned in forums filled with similarly interested individuals. And it has even been meme’d.

RAID isn't backup meme.
(not my work, just one of many immediate results)

So last month, as I was processing more pictures from the latest play date, I noticed several pictures in the main archive weren’t loading. Strange, but I didn’t pay it any mind. See, my current Lightroom setup is a repurposed ThinkPad T470 from several years ago that just happens to have a ton of RAM. But the little i5 inside? It is a poor little 6200U: Skylake, but low voltage and from 2015. And the thermals on the Lenovo aren’t made for Lightroom; the poor little thing overheats if I run it with the lid closed. This all happened because my desktop started glitching badly and refusing to boot. It is either the memory or the video card, and I don’t have the spare hardware to isolate which. And my 2014-vintage Mac Mini is similarly limited processing-power-wise, and even more so RAM-wise.

The crazy part is of all these old computers, it was my desktop with a separate GPU that couldn’t power the Dell U3818DW at its native resolution, necessitating a GPU upgrade a few years ago. But to be honest, you can tell the laptop is hurting (mostly through graphics artifacts/glitching). The Mac Mini, however, somehow pumps out the video for both the 38″ Dell and my 24″ Asus, enabling a gigantic dual monitor setup that I hope to continue once I upgrade.

But back to the RAID issue.

As covered a few years ago, I migrated to a BTRFS setup. One big motivation was to avoid bitrot. XFS certainly didn’t have that capability, and over the years I have noticed a few glitches in my data. So even though I didn’t invest in ECC memory, my thinking was that at least the filesystem should know about such things and, if possible, fix them.

So as the poor little Lenovo tried to render 1:1 previews for the latest picture load, I noticed a few of the older pictures weren’t loading. I keep most of the pictures on the NAS to minimize what’s stored on the laptop alone, so sometimes it is just the WiFi. But not this time.

Indeed, later on, I looked in the NAS logs and saw CRC errors. Lots of them. Some digging around in the btrfs device stats showed one of the drives in my RAID5 array spitting out a ton of corruption errors. So I immediately ran a scrub on the array to try to fix the data. Since this btrfs volume is set up with RAID5 data and RAID1C3 metadata, I wasn’t too concerned about the array crashing. But I wanted to get to the bottom of this.

Eventually, the scrub hit the bad file and fixed it up. But as the scrub kept running, it found more and more errors. Which was a bit unsettling, and it made me wonder how stable the array really was. A few more scrubs later, I noticed the errors were transient as well: sometimes the drive spat out tons of errors, and sometimes it hummed along just fine. So because the volume is configured with RAID5 data, I ran a scrub on each of the individual devices. In general, RAID5 on btrfs comes with significant warnings because of a variety of issues documented on the development page. Scrubbing the individual devices in the RAID5 is one of the guidelines from the developers, and who am I to question them? But if you too are venturing down the btrfs RAID5 path, read through that email from the developers because it sets the ground rules and foundational expectations.
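For anyone following along at home, the commands involved are straightforward. A sketch, with the mount point and device name as placeholders for your own:

```
# Per-device error counters; corruption_errs is what gave my drive away
btrfs device stats /mnt/pool

# Scrub the whole filesystem in the foreground, printing per-device stats
btrfs scrub start -Bd /mnt/pool

# Or, per the developers' RAID5 guidance, scrub one device at a time
btrfs scrub start -B /dev/sdX
btrfs scrub status /mnt/pool
```

Scrubbing one device at a time is slower in wall-clock terms, but on btrfs RAID5 it avoids the devices competing with each other for the same parity reads.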

Hrmm, more errors in the array, but all concentrated on one drive. Well, that’s good I guess. But then, weirdly, another one of the arrays in the NAS starts glitching. That’s odd too. By now, I’m having flashbacks to my desktop system glitching out with weird rebooting loops. So I dig the server out from its home at the bottom of the shelf and give it a good vacuum before I go in and take a look to see what’s going on. Since the errors are concentrated on one drive, I pull the problematic drive out and immediately see the problem—a loose piece of tape on the SATA connector.

Huh? Tape? What?

You see, the 10TB drives I rebuilt the array with were shucked. What’s that? Well, when you’re a hard drive manufacturer, you want to charge extra for those dorks who are trying to build a home NAS. You slap some extra warranty on the drive, and then you juice the price. But you still have to sell to the unwashed masses, so you take some drives, throw them in a plastic case, and sell those as external drives. And because the unwashed masses have highly elastic demand, you have to cut the price to meet this month’s sales targets. So dorks like me buy those cheap external drives, shuck off the plastic cases, and use the internals in a NAS. But WD, particularly for their white label drives, decided to implement something slightly different.

In the newer SATA specification, the ability to disable power to the hard drive was added. More particularly, this changed the behavior of the third pin on the power connector, which was previously a 3.3V supply tied to P1 and P2, according to Tom’s Hardware; a power supply that keeps that pin energized now prevents the drive from spinning up. And WD’s own documentation confirms this new aspect of the SATA standard. But fortunately, the Internet has found the fix: a piece of tape over the pin.

When I pulled off the SATA connector, the tape was wrinkled and not really blocking the third pin properly. After a few minutes, I reapplied the tape and slapped the drive back into the system. And while I was in there, I checked all the cables on all the drives. The SFF-8087 to SATA cables I got when I built the server were barely long enough, so they sit under a little tension.

After closing up and booting back up, all the drives came online which was a good sign. And afterwards, I ran a series of scrubs on all arrays. No more errors! And Lightroom is happily pulling up pictures from the NAS. Just slowly, very slowly.

It’s been about a month now since that crop of errors and things have worked smoothly since. Knock on wood. Glad these old drives are still working well and have plenty of space for more play date pictures!

Self-hosting and Embracing the Cloud

The computing field is always in need of new cliches.

Alan Perlis

My self-hosting journey is an odd one. Once upon a time in college, my computer was simultaneously my media center, my workstation, and a server. Self-hosting was how you did it. Back then, I also hosted websites on Pair.com. After my stint as an IT guy, I lost interest in that tinkering, so my skills withered for the better part of a decade.

So when I first got this old-fashioned blog back online, I went with an old reliable host: Dreamhost. They’d served me well before and made things easy. Simple shared hosting. Dreamhost gives more access than many other shared hosts (e.g., SSH access), but you don’t have full control of the system. They were far better than some of the hosts I used earlier in this century. Remember iPaska? Yeah, those guys were terrible.

But that was just how things were done in the early 2000s. They were simpler times where you didn’t have full access. Instead, everyone had their own control panel of some sort, and they made it easy to install common applications like WordPress. Dreamhost was a competent shop and provided a reliable service (unlike iPaska). And although people complained about the speed of the service, I never had a problem. I also didn’t have that much traffic but that’s another issue for another day.

As with all things, times changed. Dreamhost is still here, providing the shared hosting experience. They sell a good service and continue to run it competently. But the big boys (Google, Microsoft, Amazon) now sell cloud services and also offer free tiers for people to use. Sure, it isn’t a 12-core processor with gobs of memory, but it is more than enough to host a few web apps. And it isn’t like I get that kind of traffic anyway. All I need is a decently fast system that I can SSH into and have root on. What I would’ve given for this level of access back when I was younger.

The big boys are appealing, but there is a dark horse in the cloud race: Oracle Cloud Free Tier. They give you two AMD compute VMs, plus Arm Ampere instances on top of that, all free forever. The AMD VMs are easier to get, depending on which region you want your instance to live in. And they let you use standard Linux distributions, including Ubuntu. Oracle isn’t a big name in the space, but boy, it is hard to argue with two free AMD VMs. They aren’t the fanciest (1 GB of memory each), but they’re more than enough to handle a few web apps.

With a free system like that online, I’ve spent a little time here and there over the past month to get everything set up. Lately I’ve started using Docker more at home to manage some of the applications hosted on the server. That has helped simplify deployment, even though it isn’t as efficient as installing everything on bare metal. But hey, the AMD VMs have the resources. WordPress has its official Docker image, so I used one of those variants as my base. The good people at linuxserver.io provided the database, and I tried out a reverse proxy of a more recent vintage with Caddy. The end result is a self-hosted WordPress instance with a valid SSL certificate that autorenews. Not bad for the price of free, with a little bit of tinkering time over the past few weekends.
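The stack itself is nothing exotic. Here’s a sketch of the compose file, with illustrative names, secrets left as environment variables, and persistence volumes trimmed for brevity (the Caddyfile, not shown, just reverse-proxies the site’s hostname to the wordpress service):

```yaml
# Sketch: names and tags are illustrative; pin versions in real use.
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"  # Caddy obtains and auto-renews the TLS certificate
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
  wordpress:
    image: wordpress:latest  # the official WordPress image
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: ${DB_PASSWORD}
  db:
    image: lscr.io/linuxserver/mariadb:latest  # the linuxserver.io database
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
```

Three containers, one network, and Caddy handles the certificates without any cron jobs or certbot ceremony.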

Now I’ve expanded on my self-hosted journey. I’ve created a Wallabag instance for my read-it-later service. I don’t commute anymore thanks to remote life, so I don’t have the same amount of idle time every day to read through the day’s articles on Pocket. But I want to guarantee that my articles are always there for me. Even though Pocket is owned by Mozilla now, I wanted to self-host if possible. And Oracle’s AMD VMs are more than enough to meet the task.

Now, not only am I back to self-hosting this blog and some useful tools, I’m back to tinkering. It feels nice after so many years away.

Drinking the M1 Kool-Aid

So I started a new job at the turn of the new year. For the first time, I get the chance to use a Mac as my main daily driver. This is in stark contrast to the entirety of my professional life, which has always revolved around some sort of Windows-based system. Even when I was the weirdo running Linux on my laptop, my main system was a Windows system. From Windows 2000, XP, 7, and ultimately 10, it all revolved around Windows.

Suffice it to say, converting to a Mac dominated workflow was pretty different. Granted, for the last 7-8 years, I have had a Mac for personal use, and for occasional light work. So I am certainly familiar with the system. But using it on a daily basis is a very different prospect.

I was fortunate enough to receive one of the new MacBook Air systems with the Apple M1 processor. This is the direct successor to my Intel-based MacBook Air from the earlier half of the 2010s. The long and the short of it is that the new M1 MacBook Air is absolutely amazing. From the effortless performance to the snappiness of all the applications to the seamless translation of Intel-compiled applications using Rosetta 2, it is an amazing system. Although I’m not naive enough to believe that Macs don’t get viruses (e.g., new M1-compatible Mac malware), it certainly is a nice and seamless system. This, combined with a workflow centered around Dropbox Business, is bringing a fresh look to my daily work.

And fortunately, the M1 does not seem to have any of the issues I mentioned in an earlier post about my Dell U3818DW monitor. Except for some minor glitches when the Air wakes from sleep, the monitor and the laptop play together perfectly using a single USB-C cable to charge the laptop, carry the video signal, and carry the keyboard and mouse signals. Just one cable to make it all work. Admittedly, it is a slick solution. I sometimes hook in an additional cable (line-out to the integrated amplifier) for audio, but more often than not I simply AirPlay music from the phone or the Music app, or stream music from one of several Internet radio stations. Seamless and reliable, the perfect combination.

Unfortunately, we still work primarily in Microsoft Word. A serviceable software package, but it would be nice if someone iterated on the modern word processor somewhat. Maybe there will soon be innovation in this end of the daily work software/hardware stack.

Rediscovering Internet Radio

Being old enough to have run my computer all night to rip a CD into MP3 format, I remember the genesis of Internet radio. When it was first released onto the world, suddenly anyone could be a radio host.

And holy shit do a lot of people have terrible music taste.

But this was also the genesis of the live stream. Of the democratization of content. And helped emphasize how sometimes, a curated feed is exactly what we want. Sorta like how Netflix is in fact a paradox of choice.

Similarly, with music, especially if you have a streaming package like Apple Music, you have at your fingertips a library of music that you will likely never be able to finish in its entirety.

But what do you listen to? At least Apple provides some guidance and recommendations. Along with radio shows where someone curates the content for you.

But sometimes, an aimless radio station is exactly what’s needed.

I’m glad to see that SomaFM continues to be a thing. The local NPR station, WAMU, is often streaming as well in the household. But I’m slowly poking around, seeing what is out there still. With a fairly decent sound system sitting with me at my desk now, what better time to rediscover serendipity in music?

For my setup, I’m using my trusty AirPort Express with my NAD C350 integrated amplifier, which drives two Klipsch R-15Ms that I picked up on a whim at Best Buy. To get audio to this setup, I have forked-daapd running in a Docker container (courtesy of the fine people at linuxserver.io). I’ve created a custom playlist that links directly to my favorite streams, including SomaFM and the local NPR station.

Because it is forked-daapd, it works well with the Apple Remote app on the phone and is therefore wife-approved. It also plays nicely with Apple Music on our individual devices. The Marantz receiver in the living room purportedly supports AirPlay as well, and usually performs well. But for reliability, you cannot beat the AirPort Express, even though it was released in 2012. With a firmware update from 2018, it supports AirPlay 2 and is more than sufficient for my needs for the time being.
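If you want to recreate the playlist trick: forked-daapd treats a plain .m3u file of stream URLs dropped into its library as a set of radio stations. A minimal sketch (the URL follows SomaFM’s published MP3 stream naming at the time of writing; double-check current URLs on their site):

```
#EXTM3U
#EXTINF:-1,SomaFM: Groove Salad
http://ice1.somafm.com/groovesalad-128-mp3
```

Rescan the library, and the stations show up as playlist entries, ready to AirPlay from the Remote app.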