Well folks, I’m back! Unfortunately it’s because I got laid off and thus have a lot more free time. I’ve decided to use the time off to finish up some personal projects and brush up on a few skills to add to my resume.
Today I’m going to give a tour of my self-hosting project. This has been an ongoing project in my house for the past year or so. I originally started it as a way to make sure my home automation systems could keep functioning even if the Internet connection went down. It’s grown since then, and I now have nearly every service I use on a daily basis hosted locally.
Without further ado, let’s dive in!
My Data Center
Originally, all of my servers lived in my home office, connected to an ethernet switch in the basement. This was convenient for working on them, but it made my tiny office very crowded, and the heat was hard to tolerate in the summer. I realized the gear simply had to go somewhere else; the question was where. There was no good spot on the main floor, and the upstairs has no network drops. In the end, the only place for it all was the basement.
My house is old (at least 120 years), and the basement is only partially finished. To protect the equipment I built a tiny (16 square foot) room just big enough for a rack. The floor and back wall have been epoxy-sealed and painted to resist moisture. The room isn’t fully enclosed yet; the plan is to mostly enclose it and add a blower that pushes filtered air in, using the resulting positive pressure to keep basement dust out.
For power I ran a dedicated 20-amp circuit to a single outlet on the back wall. At 120 V that works out to 2,400 W, or 1,920 W of continuous load under the usual 80% rule, which gives me plenty of headroom and means I can potentially upgrade to a beefier UPS down the road.
Here’s how it looks as of 11/16/2024:
Please pardon the mess; I still have some rewiring to do!
The rack is a four-post 27U unit I picked up on Amazon (I would have preferred a full-size rack, but the basement doesn’t have enough clearance). In this rack I have the following:
- A 48-port Cat 7 patch panel. All of the existing ethernet drops in the house will terminate here eventually.
- A Juniper EX3400 48-port PoE switch, with four 10GBase-T modules installed, and dual power supplies.
- An Intel N5105 rack mount server with six 2.5GBase-T ports, running OPNsense.
- My NAS: a custom Intel i3-9100F server with 64 GB of RAM and five 9.1 TB spinning-rust drives.
- Another custom build: an Intel i7-13700K with 64 GB of RAM and an RTX 4090 GPU.
- Two refurbished Dell PowerEdge R630s with dual 28-core Xeon processors, 128 GB of RAM, dual power supplies, and iDRAC 8s.
- A small NUC I decommissioned a while back and now use as a CheckMK server.
- An Eaton 1500VA UPS. It can’t power the entire rack for long, so everything shuts down gracefully when power fails (more on that in the sketch after this list).
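If you want to replicate the graceful-shutdown part, the standard tool is Network UPS Tools (NUT). Here’s a minimal sketch of the idea; the UPS name, user, and password are placeholders:

```
# /etc/nut/ups.conf -- the usbhid-ups driver talks to most Eaton units over USB.
[eaton1500]
    driver = usbhid-ups
    port = auto
    desc = "Eaton 1500VA"

# /etc/nut/upsmon.conf -- shut this host down once the UPS reports low battery.
# Other machines can monitor the same UPS over the network as "secondary".
MONITOR eaton1500@localhost 1 upsmonuser changeme primary
SHUTDOWNCMD "/sbin/shutdown -h +0"
```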
The i7-13700K and the two R630s comprise a three-node Proxmox cluster. All user-facing services run as VMs on this cluster. Storage for the VMs is provided by the NAS, which runs TrueNAS Scale. This setup lets me move services between nodes easily.
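For the curious, wiring the NAS into the cluster is a one-liner on any node. A minimal sketch, assuming a hypothetical NFS export at 10.0.0.10:/mnt/tank/vm-storage (names, addresses, and IDs here are placeholders):

```
# Register the TrueNAS NFS export as shared storage; every cluster node
# mounts it, so VM disks are visible from all three nodes.
pvesm add nfs nas-vmstore \
    --server 10.0.0.10 \
    --export /mnt/tank/vm-storage \
    --content images,rootdir

# With disks on shared storage, a running VM can hop nodes without
# copying its disk (here, VM 101 moving to node pve2):
qm migrate 101 pve2 --online
```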
To monitor everything I run CheckMK on the dedicated NUC mentioned above, so monitoring doesn’t depend on any other service being up. Alerts go to my email account and a Slack channel.
Virtual Machines
User-facing services are provided by virtual machines running in the Proxmox cluster. The services I currently host are:
- Caddy
- Webmail (Postfix, Dovecot, and SnappyMail)
- FreeIPA
- DNS (Pi-hole + BIND)
- Home Assistant
- Music Assistant
- Ollama (GPU)
- CodeProject.ai (GPU)
- Whisper (GPU)
- Piper (GPU)
- Emby
- FreePBX
- AgentDVR
- Vaultwarden
- Joplin
- Homebox
- Paperless-ngx
- Mosquitto
- A private Minecraft Java server
Some services, such as AgentDVR, Emby, and FreePBX, get dedicated VMs. The less resource-intensive services run as Docker containers on a single VM. Most of these services sit behind the Caddy reverse proxy, which provides SSL via a Let’s Encrypt wildcard certificate.
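To give a feel for the pattern, here’s a heavily trimmed Caddyfile sketch. The domain, upstream addresses, and DNS provider are placeholders; the one real requirement is that a wildcard certificate needs a DNS-01 challenge, which in Caddy means compiling in a DNS module such as the Cloudflare one:

```
*.home.example.com {
	# DNS-01 challenge for the wildcard cert; assumes the Cloudflare
	# DNS module is built into the Caddy binary.
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}

	# One matcher + handler per service, each pointing at its VM or container.
	@vaultwarden host vaultwarden.home.example.com
	handle @vaultwarden {
		reverse_proxy 10.0.0.20:8080
	}

	@paperless host paperless.home.example.com
	handle @paperless {
		reverse_proxy 10.0.0.21:8000
	}

	# Unknown subdomains get a 404 rather than falling through.
	handle {
		respond 404
	}
}
```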
Fault tolerance is provided in two ways. Critical services (FreeIPA and DNS) are set up as pairs of servers, with each member bound to a separate Proxmox node. These pairs use local storage on the nodes, so they keep functioning in some capacity even if the NAS is down. Everything else uses the shared storage and can migrate to another Proxmox node if necessary. The exception is the services marked “(GPU)” above; they are not redundant, as only one node has a GPU.
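Standing up the second half of the FreeIPA pair is less work than it sounds. A rough sketch of enrolling a replica, with placeholder hostnames and forwarder address:

```
# On the new VM: join it to the domain as a client first...
ipa-client-install \
    --domain=home.example.com \
    --server=ipa1.home.example.com \
    --mkhomedir

# ...then promote it to a full replica, including a CA and DNS,
# so either node can serve the whole domain on its own.
ipa-replica-install --setup-ca --setup-dns --forwarder=10.0.0.1
```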
All VMs are backed up daily by Proxmox to a dedicated TrueNAS share. From there, TrueNAS backs itself up to a set of Backblaze buckets.
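The nightly Proxmox job boils down to something like this vzdump invocation (the storage name and retention shown here are illustrative):

```
# Snapshot-mode backup of every guest to the NAS-backed storage,
# keeping a rolling week of daily backups.
vzdump --all 1 --storage nas-backups --mode snapshot \
       --compress zstd --prune-backups keep-daily=7
```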
Final Thoughts
My setup is good, but it still has a few weak points: the switch, the firewall, and the NAS. Of these three, the only one I have any immediate plans to fix is the NAS. Once I’m gainfully employed again I’d like to pick up a couple more PowerEdge servers and load them up with NAS-grade SSDs to use as a pair of TrueNAS servers.
I’d also like to upgrade the UPS down the road, preferably to a rack-mount 20-amp model, so the rack can stay on battery long enough to switch the circuit over to a generator. This will involve adding a second service panel and moving select circuits over to it; the new panel would have a generator transfer switch and an outdoor inlet for connecting my generator. I had planned to do that this past summer, but I ran out of time, so it will probably happen next summer.
As for software, there are several things I’d like to add. In the very near term I’ll be setting up a Romm instance for playing retro games in the browser. In the medium term I’d like to set up an Authentik instance to provide single sign-on for as many services as possible.
Finally, and most importantly, I am working on thoroughly documenting everything, in case something unfortunate happens to me. I’ll go into more detail on that in a future post.
In my next post I’ll dig into my network configuration, including how I provide secure outside access.
If you have any questions, comments, or suggestions, you can comment below or contact me on Bluesky at @area73.bsky.social.