102 comments
  • I should have learned Ansible earlier.

    Docker compose helped me get started with containers but I kept having to push out new config files and manually cycle services. Now I have Ansible roles that can configure and deploy apps from scratch without me even needing to back up config files at all.

    Most of my documentation has gone away entirely; I don't need to remember things when they're defined in code.
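
    For a sense of what that looks like, here's a minimal sketch of an Ansible play plus role task; every name below (host group, role, service, paths) is made up for illustration, not taken from the comment:

    ```yaml
    # site.yml -- hypothetical top-level play
    - hosts: homelab
      become: true
      roles:
        - role: myapp
          vars:
            myapp_port: 8080

    # roles/myapp/tasks/main.yml -- config is templated, so there's
    # nothing hand-edited on the server left to back up
    - name: Render the app config from a template
      ansible.builtin.template:
        src: myapp.conf.j2
        dest: /etc/myapp/myapp.conf
      notify: restart myapp

    # roles/myapp/handlers/main.yml -- services cycle automatically
    # whenever the rendered config changes
    - name: restart myapp
      ansible.builtin.systemd:
        name: myapp
        state: restarted
    ```

    The handler is what replaces the "manually cycle services" step: the template task only notifies it when the file actually changed.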

  • For me:

    • Document things (configs, ports, etc) as I go
    • Uniform folder layout for everything (my first couple of servers were a bit wild-westy)
    • Choosing and utilizing some reasonable method of assigning ports to things. I do not even want to explain what I need to do when I forget what port something in this setup is using.
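
    One lightweight way to handle that last point is a single version-controlled port registry that compose files and firewall rules all reference; the scheme and service names here are purely illustrative:

    ```yaml
    # ports.yml -- hypothetical convention: one hundred-block per category
    # 81xx: media, 82xx: monitoring, 83xx: home automation
    jellyfin: 8100
    sonarr: 8110
    grafana: 8200
    prometheus: 8210
    homeassistant: 8300
    ```

    Grepping one file beats port-scanning your own LAN to figure out what's listening where.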
  • I would have gone with an Intel CPU to make use of iGPU for transcoding and probably larger hard drives.

    I also would have written down my MariaDB admin password... Whoops

  • My current homelab is running on a single Dell R720xd with 12x 6TB SAS HDDs. I have ESXi as the hypervisor with a pfSense gateway and a TrueNAS Core VM. It's compact, has lots of redundancy, can run everything I want and more, and has IPMI and ECC RAM. Great, right?

    Well, it sucks back about 300 W at idle, sounds like a jet engine all the time, and having everything on one machine is fragile as hell.

    Not to mention the Aruba Networks switch and Eaton UPS that are also loud.

    I had to beg my dad to let it live at his house, because no matter what I tried (custom fan curves, better C-state management, a custom enclosure with sound isolation and ducting), I could not dump heat fast enough to make it quiet, and it was driving me mad.

    I'm in the process of doing it better: I'm going to build a small NAS using consumer hardware and big, quiet fans; I have a fanless N6005 box as a gateway; and I'm going to convert my old gaming machine to a Proxmox hypervisor, with each VM managed with either docker-compose, Ansible, or NixOS.

    ...and I'm now documenting everything.

  • I'd use Terraform and Ansible from the start. I'm slowly migrating my current setup to these tools, but that's obviously harder than starting from scratch. At least I did document everything in some way. That documentation plus state on the server is definitely enough to do this transition.

  • Not accidentally buy a server that takes 2.5 inch hard drives. Currently I'm using some of the ones it came with and 2 WD Red drives that I just have sitting on top of the server with SATA extension cables going down to the server.

  • I'd put my storage in a proper NAS machine rather than having 25 TB strewn across 4 boxes.

  • I already have to do it every now and then, because I insisted on buying bare metal servers (at Scaleway) rather than VMs. These things die very abruptly, and I learnt the hard way how important backups and config management systems are.

    If I had to redo EVERYTHING, I would use Terraform to provision servers and go with a "backup, automate and deploy" approach. Documentation would be a plus, but with the config management I feel like I don't need it anymore.
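
    As a rough sketch of the "Terraform to provision servers" part, assuming the official Scaleway provider (the instance name, type, and image below are illustrative, not from this comment):

    ```hcl
    terraform {
      required_providers {
        scaleway = {
          source = "scaleway/scaleway"
        }
      }
    }

    # Hypothetical instance; swap in a bare metal resource if you still
    # want real hardware.
    resource "scaleway_instance_server" "app" {
      name  = "app-01"          # made-up name
      type  = "DEV1-S"          # a small instance type
      image = "debian_bookworm"
    }
    ```

    When a box dies abruptly, `terraform apply` plus your config management recreates it, and the same files double as the documentation.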

    Also I'd encrypt all disks.

    • Also I’d encrypt all disks.

      What's the point on a rented VPS? The provider can just dump the decryption key from RAM.

      bare metal servers (at scale way) rather than VMs. These things die very abruptly

      Had this happen to me with two Dedibox (Scaleway) servers over a few months (I had backups, no big deal but annoying). wtf do they do with their machines to burn through them at this rate??

      • I don't know if they can "just" dump the key from RAM on a bare metal server. Nevertheless, it covers my ass when they retire the server after I used it.

        And yeah, I've had quite a few servers die on me (usually the hard drive). At this point I'm wondering if it isn't planned obsolescence to force you into buying their new hardware every now and then. Regardless, I'm slowly moving off Scaleway, as their support is now mediocre in these cases and their cheapest servers don't support console access anymore, which means you're bound to using their distro.

    • I would use terraform to provision servers, and go with a “backup, automate and deploy” approach. Documentation would be a plus

      Yea. This is what I do. Other than my Synology, I use Terraform to provision everything locally. And all my Pi-holes are controlled by Ansible.

      Also, everything is documented in Trilium.

      The whole server regularly gets backed up to multiple destinations: one backup is encrypted, and the other goes via Syncthing to my local desktop.

      • Terraform is the only missing brick in my case, but that's also because I still rent real hardware :) I'm not fond of my backup system though; it works, but it's not included in the automated configuration of each service, which is not ideal IMO.

  • I'd plan out what machines do what according to their drive sizes, rather than finding out the hard way that the one I used as a mail server only has a few GB spare. I'd certainly document what I have going; if my machine Francesco explodes one day, it'll take months to remember what was actually running on it.

    I'd also not risk years of data on the single SSD that just stopped functioning for my "NAS" (it's not really a true NAS, just a shitty drive with a terabyte), and have a better backup plan.

  • I have things scattered around different machines (a hangover from my previous network configuration that was running off two separate routers) so I’d probably look to have everything on one machine.

    Also I kind of rushed setting up my Dell server and I never really paid any attention to how it was set up for RAID. I also currently have everything running on separate VMs rather than in containers.

    I may at some point copy the important stuff off my server and set it up from scratch.

    I may also move from using a load balancer to manage incoming connections to doing it via Cloudflare Tunnels.

    The thing is there’s always something to tinker with and I’ve learnt a lot building my little home lab. There’s always something new to play around with and learn.

    Is my setup optimal? Hell no. Does it work? Yep. 🙂

  • I have ended up with 6x 2TB disks, so if I was starting again I'd go 2x 10TB and use an IT-mode HBA and software RAID 1. I'd also replace my two Netgear switches and one basic smart TP-Link switch and go full TP-Link Omada for switching, with PoE ports on two of them; I have an Omada WAP and it's very good. Otherwise I'm pretty happy.

  • That's a pretty good question. Since I'm new-ish to the self-hosting realm, I don't think I would have replaced my consumer router with the Dell OptiPlex 7050 that I decided on, although it does make things very secure considering the router is powered by OpenBSD. Originally, I was just participating in DN42, which is one giant VPN semi-mesh network; out of that hatched the idea to yank stuff out of the cloud. Instead, I would have put the money towards building a dedicated server rather than using my desktop as a server. At the time I didn't realize how cheap older Xeon processors are; I could have cobbled together a powerhouse multi-core, multi-threaded Proxmox or xcp-ng server for maybe 500-600 bucks. Oh well, lesson learned.

  • I would go smaller with lower-power hardware. I currently have Proxmox running on an R530 for my VMs, plus an external NAS for all my storage. I feel like I could run a few 7050 Micros together with Proxmox and downsize my NAS to fewer but higher-density disks.

    Also, having a 42U rack makes me want to fill it up with UPSes and lots of backup options that could be simplified if I took the time to not Frankenstein my solutions in there. But here we are...
