Hello /r/homelab and whoever in KIWI is reading this. A little intro: my name is DropSlays (or Andrew, as some call me). I run a big gaming community called KIWI that operates quite a few of the most popular Counter-Strike servers on the east coast of the United States. I’m also an avid home-labber and a network engineer by trade, currently pursuing a bachelor’s degree in Computer Science (two years left). This blog post will be a highly technical write-up that goes past the normal bounds of what’s considered “KIWI,” with lots of information about my personal projects and home lab. If that’s not interesting to you, then escape now before the pictures rope you in!
To begin, a short history of my endeavors in the whole lab scene:
I’ve always been the kind of person to self-host as much as I can. This began with hosting my own web-accessible file storage and Minecraft servers in early high school. Soon it got out of hand with plenty of Craigslist purchases. My lab had lots of ancient HP and Dell servers that really hated working properly.
Over the years, after dives into DigitalOcean, Vultr, AWS, and the like, I got sick of paying monthly for shite virtual hardware with metered bandwidth. The time was now: I was ready to start a real home lab. The pictures above show my earliest lab in a very volatile state, with much failure and many learning experiences. Taking all of this hard-won wisdom into account, and after much research, I began investing in better hardware. I started by purchasing a Dell R410 with 64GB of RAM and dual Xeon E2650s for just under $100 on eBay (steeeeeal). This server runs the majority of the virtualization workloads across my home lab. More on the individual services later in this post.
With the compute resources flourishing, I realized I needed better network infrastructure. In the pictures above there’s a VERY old Dell PowerConnect 6024 gigabit switch. It handled my needs just fine, but the management port was broken beyond repair, so I couldn’t configure it at all. Flat networks are boring, so I invested in a relatively new Cisco Catalyst 3560G 24-port with PoE. Neat!
But that beige rack looks so tacky…
Yes. Yes, it did. This is why I hunted for weeks to get some of that fully enclosed short-rack goodness for my lab. It’s an off-brand StarTech 24U rack (I think…) and it’s perfect for what I need. Locking doors, removable side panels, and, best of all, square holes! Tossing all of my equipment in this thing and getting it perfectly cable-managed was a freaking blast. Side note: I’m freakishly in love with perfect cable management. I actually enjoy it. Hate me. Anyways, behold the beauty!
Okay cool but what about the colo?
Oh yeah, the colo. That is what this post is about, huh? Here goes.
So a bit of an explanation: I don’t own the colo, my company owns and pays for it. We each have our own personal equipment in the rack and our employer has the top half of the rack for the inevitable cloud services division to get going in the very near future. The colo features:
- 42U full-depth APC rack
- 200Mbps 95th-percentile commit on a 1Gbps port
- 30A power drop at 208V
- 24/7 key card access
- Round-the-clock audio and video security monitoring
- Lots of tools and peripherals in the DC itself for our free use
- Located in an inconspicuous refurbished industrial building
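For anyone unfamiliar with burstable billing, that 95th-percentile commit is worth a quick illustration. The usual scheme: the provider samples port throughput every 5 minutes, sorts a month’s worth of samples, throws away the top 5%, and bills on the highest remaining sample — so short bursts to the full 1Gbps port don’t raise the bill as long as they stay under 5% of the month. A minimal sketch (the sample values and the ~8640-samples-per-month figure are illustrative, not from our actual bill):

```python
def billable_rate_mbps(samples_mbps):
    """Return the 95th-percentile rate from 5-minute throughput samples."""
    ordered = sorted(samples_mbps)
    # Discard the top 5% of samples; bill on the highest one that remains.
    idx = int(len(ordered) * 0.95) - 1
    return ordered[idx]

# A 30-day month of 5-minute samples is ~8640 points. Simulate a link
# that idles at 50Mbps but bursts near line rate 5% of the time.
samples = [50] * 8208 + [950] * 432

print(billable_rate_mbps(samples))  # 50 -- the bursts land in the discarded 5%
```

The upshot: with a 200Mbps commit, you only pay overage if that 95th-percentile figure exceeds 200, no matter how hard you burst the other 5% of the time.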
Some of the hardware currently in the rack was migrated here from our old colo facility in New Jersey. We recently took a road trip down there to grab all of it and bid them farewell. The distance was too great, and with larger customers with higher space and bandwidth requirements moving in, prices would only have risen when we inevitably renewed our contract.
I personally own the two Cisco UCS C240 M3s. Most of the other equipment belongs to my friend; he does quite a bit that I won’t get into here, other than the fact that he runs Project1999, the largest (and only officially sanctioned) unofficial EverQuest classic server.
Here are some pictures of the final product:
Now let’s deep-dive into what I do with my lab and how far I take the other projects that run on it.
I won’t do a total hardware breakdown unless that’s widely requested. Let’s start by listing some things I run in my lab:
- Plex (should I even mention this? it’s a staple of most labs at this point)
- This blog (and a few other supporting sites for other projects)
- Lots of file storage (and I mean lots as in over 20TB)
- Client VPN for my devices when I’m out in the wild
- Site-to-site VPNs with a few of my friends using pfSense for distributed labbing (thanks Muffin! I owe you a VM or two, PM me)
- A Pi-hole box with 16GB of RAM that I’ve configured all my family, friends, and customers to use (over 200 devices and ~1000 lookups per minute peak)
- Quite a few CS:GO servers for my leisure and development
- Quite a few more CS:GO servers for my gaming community
- A private GitLab server for a few friends and me to keep our projects under wraps
- A windows domain (everybody’s gotta learn)
- Numerous tiny VMs for development
- Super swanky Grafana dashboards for everythinggggg
And a few things still on the to-do list:
- Play more with Docker (I’m an avid software engineer and I’ve used it heavily in the past, but I’d like to get deeper into container orchestration and microservices than anything)
- I’m 10Gig across servers right now, but I’d love to play around with higher bandwidth like 40Gig or, God forbid, 100Gig *wallet screams in terror*
So as this blog post wraps up, I urge you to ask me questions about anything. This project has been full of mistakes and lessons learned, so let me share my experiences with you. Comment below or PM me on Reddit at /u/dropslays for any of that jazz.
Thanks for stopping by!
-Andrew “drop” DeChristopher