Why I Stopped Trusting the Cloud (and Built My Own)
As social media platforms crumble under corporate greed and political pressure, I've decided to take matters into my own hands. This is the story of how I built my own resilient, private infrastructure — reclaiming control over my data, my voice, and my digital future.
We are witnessing the decay of social media platforms in real-time. I've been around the net quite a while – I remember a time before HTTP, in the dark ages of FTP, gopher and Usenet. I think I was probably the first person at my university to download the source code for the very first ever web browser from CERN, installing it on a Sun workstation and looking at the handful of web sites that existed at the time.
The net exploded from there. Even before HTTP, it was always social. I was first hooked on Usenet (I particularly miss alt.alien.vampire – OK, I suppose you had to be there), then moved on to Cix and then LiveJournal (aka LJ) around the turn of the millennium. I loved LJ – it gave me community and a place where I could express myself in a literary way.
Then came Facebook, and frankly the start of the rot. You had to be on there because everyone was on there. I still used LJ, and preferred it, but held my nose and used Facebook because it was either that or abandon my ability to stay in touch with my real-life social network. Twitter started out great, back when it was a handy way to send group texts over SMS.
Then, years later, LJ was bought out by a Russian company with links to the Russian state. Since this was incompatible with my job (and could have been iffy for my immigration status in the US at the time), I had to drop it. I was gutted. Facebook was OK, but never really got me writing the longer form pieces I used to love posting to LJ. Then it started to get weird about actually showing posts to people who followed you, unless you paid substantial money. That really killed it for me. I felt no real draw toward being bothered to write for a platform that didn't care and just saw me as a mark. As Twitter grew, it got less and less friendly, to the point of becoming quite dangerous. Then my main account got stolen, and even though I was able to recover it, my friends list had been deleted and replaced with random bots. So I kind of gave up with Twitter too.
This was the situation for a few years. In the last few months, this slow decline became a cliff edge, with the platform owners visibly and publicly capitulating to a fascist regime. Even my email and other online services started to get questionable, because of the danger that my user data would be shared without my consent. Frankly, even the risk that these communication channels could simply be shut down would be potentially devastating.
I already had some sensible mitigations in place, like having automatic continuous backups of my online file storage and documents, etc. But this only addresses the risk of losing access. The terrifying thing is having my user data handed over to someone who might seek to perform trial by database query.
A few months ago I decided to start taking back control of this situation. I have a bit more hardware around the place than most people. My day job involves serious infrastructure, running thousands of machines across multiple datacenters, so putting a solution together probably looked a bit different to me than it might to most people.
Building Xen clusters
I started by repurposing a number of machines I already had running in a rack here in Ireland – a small cluster of Intel NUCs, two Synology NASes, a big (though remarkably cheap, thanks to eBay bottom-feeding!) dual Xeon server with multiple GPUs and 1.5TB of RAM, and a repurposed gaming box. A bit of a motley crew, but after some experimentation I landed on a solution based on the Xen hypervisor, specifically the open-source xcp-ng variant. I set everything up as a pair of Xen clusters, one with the NUCs and the GPU server in it, and a second with the single gaming PC. It would have made perfect sense (and would have been far easier) to build a single cluster, but I specifically wanted to be able to test making multiple clusters talk to each other seamlessly.
Networking was interesting. The NUCs are all a couple of years old now, 6 core/12 thread 10th gen Intel CPUs with 32GB of RAM and two 2TB SSDs each. They only have 1Gb Ethernet as standard, so I am using external Thunderbolt to 10GbE adapters. The for-real server has 10GbE on the motherboard, so I added a 10GbE card to the gaming PC (a 13th gen Intel with 128GB of RAM). A bit of faffing around later, I had three VLANs set up, such that all the machines in both clusters were visible on a management network, and each cluster had its own separate local network. All of these are on different subnets, so comms can be routed with no need to do bridging (this is critically important for scalability, though admittedly bridging would have worked fine at this scale).
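To make the routed-not-bridged point concrete, here is a small sketch of the addressing scheme. The actual ranges I use aren't in this post, so these subnets are illustrative placeholders; the property that matters is that the three networks are disjoint, so traffic between them can be routed at layer 3 with no bridging.

```python
import ipaddress

# Hypothetical addressing plan -- illustrative, not my actual ranges.
subnets = {
    "management": ipaddress.ip_network("10.0.0.0/24"),  # all hosts, both clusters
    "cluster-a":  ipaddress.ip_network("10.0.1.0/24"),  # NUCs + GPU server
    "cluster-b":  ipaddress.ip_network("10.0.2.0/24"),  # gaming PC
}

# Because every network is a distinct subnet, inter-network traffic can be
# routed (layer 3) rather than bridged (layer 2): no range overlaps another.
names = list(subnets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not subnets[a].overlaps(subnets[b]), f"{a} and {b} overlap"

print("all subnets disjoint; routing needs no bridging")
```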
I created OPNsense firewall VMs in each cluster, each set up with dual virtual network ports so they could see both the management network (and therefore also the internet) and their internal networks. I set them up to do DHCP and NAT for their internal networks, and set up a Wireguard tunnel between the clusters, which made it possible for VMs running on machines on both clusters to both see the internet and to be able to directly talk to each other.
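For flavour, the cluster-to-cluster tunnel boils down to a WireGuard peering like the sketch below. OPNsense actually configures WireGuard through its web GUI rather than a hand-edited file, and the keys, endpoint, and addresses here are all placeholders, but the shape is the same: each firewall advertises the other cluster's internal subnet through the tunnel.

```ini
# Sketch of the cluster-A side of the tunnel (placeholders throughout)
[Interface]
PrivateKey = <cluster-a-private-key>
Address = 10.99.0.1/24
ListenPort = 51820

[Peer]
# Cluster B's OPNsense firewall
PublicKey = <cluster-b-public-key>
Endpoint = cluster-b.example.net:51820
# Route cluster B's tunnel address and internal subnet via the tunnel
AllowedIPs = 10.99.0.2/32, 10.0.2.0/24
PersistentKeepalive = 25
```

With matching firewall rules on both sides, a VM on either internal network can reach its counterparts directly, while NAT on each firewall still handles the path to the internet.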
This was my first proof-of-concept. It took a while to get right. I think I reinstalled xcp-ng on the NUCs four or five times, and took a few attempts to get the firewall configs right, but in the end it worked and was rock solid.
Taking the next step
At this point, I only had a relatively slow internet connection, and by policy wasn't interested in using a commercial reverse proxy service like Cloudflare (too much US exposure, and some shocking business practices). On a tip from a friend, I decided to rent a bare metal machine from Hetzner in Germany. Basically, this is a whole physical server sitting in a rack in a datacenter that I have complete control over, and that has an unmetered internet connection. As is my preference, I installed xcp-ng on it, and a series of VMs including an OPNsense firewall, an nginx reverse proxy, a primary DNS server and an email server. It took a while, a few weeks of tweaking in evenings and at weekends, to get everything going, but I ended up with the machine set up as a single-machine cluster with a software-only internal network and my usual NAT and Wireguard tricks so everything can see everything.
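The reverse proxy is the piece that makes a VM's location irrelevant. A minimal sketch of one server block, with placeholder names and addresses (the upstream might be a VM on the Hetzner box itself, or one back in Ireland reached over the WireGuard tunnel):

```nginx
# Illustrative server block -- hostname, paths and upstream address are
# placeholders, not my real configuration.
server {
    listen 443 ssl;
    server_name blog.example.net;

    ssl_certificate     /etc/letsencrypt/live/blog.example.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/blog.example.net/privkey.pem;

    location / {
        # Ghost listens on 2368 by default; this upstream could be a VM
        # anywhere on the tunnelled internal networks.
        proxy_pass http://10.0.1.20:2368;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

TLS terminates in Germany; from the outside, every service appears to live in that one datacenter regardless of where the VM actually runs.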
Deep breath...
It's a whole lot of overkill, but what it effectively means is that I can now spin up VMs anywhere on any of the clusters that can serve the web just as if they were sitting in the DC in Germany. Running machines in my rack here is much cheaper than in a DC, particularly for the more powerful machines with lots of disk and RAM, and maintaining them involves a 15 second walk from my office to my machine room. It also makes it quite a bit harder for someone to get physical access to machines containing user data, because it simply isn't where it appears to be from the outside.
Running this Ghost-based blog server uses an absolutely tiny amount of the capabilities of the infrastructure. I have, let's say, plans.
Backups might be boring, but...
One thing that I absolutely love about VM-based virtualized infrastructure is just how much easier it makes backing up and restoring systems. With xcp-ng, there isn't really much to back up on the base system (though I do it anyway). The VMs are where it's at, and the Xen Orchestra management front end makes it really straightforward to do disk-level backups automatically across the entire network. These get written to a primary NAS, which backs up these backups to a second volume locally and to a second physically separate NAS which keeps deeper history. Restoring a VM is ridiculously easy: you just restore from a single large backup file, boot the VM, and you're done. I keep both complete image and incremental backups, so I can also pull back individual files if I need to.
Conclusions, and that Sovereignty Thing
This was a LOT of work, and I totally get that this kind of solution is probably out of reach for most people reading this. There are simpler and cheaper solutions, but like I said, I have plans that don't end with publishing this blog.
That said, doing what you need to do to break your reliance upon US-based/controlled platforms makes a huge amount of sense at this point in time. Regardless of your political views, Trump's unpredictability is a significant liability, both at a personal and business level, for everyone inside and outside the US. Taking back control of your user data and housing it somewhere with data protection laws that have real teeth (the GDPR) is a sound move. If you happen to be a member of a minority that the Trump regime is specifically targeting for harm (like me), this is doubly important.
The EU is taking digital sovereignty seriously. I feel much safer living in Ireland, which is an EU member state, and housing my infrastructure here and in Germany.
Over the next weeks and months I hope to post more about this, from both the political and the technical points of view. My intention is not (just) to rant about this on the internet, but to actually do things that help in a practical way.
If you have questions or comments, feel free to contribute, even if you just want to remind me that I'm crazy to overkill this to the extent that I am. But I knew that. 😸