
Your favourite piece of selfhosting - Part 1 - Operating System

Hello everyone,

I am about to renovate my selfhosting setup (software-wise), and I thought about how I could help my favourite Lemmy community become more active. Since I am still learning many things and am far away from being a sysadmin, I don't (just) want to tell my point of view, but thought about a series of posts:

Your favourite piece of selfhosting

I thought about asking every one of you for your favourite piece of software for a specific use case. But we have to start at the bottom:

Operating systems and/or type 1 hypervisors

You don't have to be an expert or a professional. You don't even have to be using it. Tell us your thoughts about one piece of software. Why would you want to try it out? Did you try it out already? What worked great? What didn't? Where are you stuck right now? What are your next steps? Why do you think it is the best tool for this job? Is it aimed at beginners or veterans?

I am eager to hear about your thoughts and stories in the comments!

And please also give me feedback on this idea in general.

80 comments
  • I've been using NixOS on my server. Having all the server's config in one place gives me peace of mind that the server is running exactly what I tell it to and I can rebuild it from scratch in an afternoon.

    I don't use it on my personal machine because the lack of FHS compliance feels like it'd be a problem, but when selfhosting, most things are popular enough to have a NixOS module already.
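
    For anyone curious what that "rebuild from scratch" workflow looks like, here is a rough sketch (the repo URL and the hostname `myserver` are made-up examples, not the commenter's actual setup):

    ```shell
    # Clone the repo that holds the entire server configuration
    # (URL and hostname are hypothetical)
    git clone https://example.com/me/nixos-config.git
    cd nixos-config

    # Preview what would change without activating anything
    sudo nixos-rebuild dry-activate --flake .#myserver

    # Build and switch to the new configuration in one step;
    # the previous generation stays available as a rollback target
    sudo nixos-rebuild switch --flake .#myserver

    # If something breaks, switch back to the previous generation
    sudo nixos-rebuild switch --rollback
    ```

    Because the whole system is described declaratively, the same commands on fresh hardware reproduce the server.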

  • Proxmox Virtual Environment (PVE, Hypervisor), my beloved. Especially in combination with Proxmox Backup Server (PBS).

    My homelab would not exist without Proxmox VE, as I'm definitely not going to use Nutanix or VMware. I love working with Linux, and Proxmox VE is essentially Debian with a modified kernel and a management web interface on top.

    I first learned about Proxmox VE at my company, back when we still had VMware for ourselves and all of our customers. We gradually switched everyone over to Proxmox VE, and now I'm using it at home too. Proxmox is an Austrian company (my country), so I was doubly hyped about this software.

    A few things I like most about Proxmox VE:

    • Ease of access to the correct part of the documentation you currently need (*)
    • Open Source
    • Company resides in my country (no US big tech walled garden)
    • Linux / Debian based, so no learning new OS's and toolchains
    • Free version available
    • Forum available and actually used

    (*) What I mean by ease of access to the correct part of the documentation is: whenever you're in the WebUI and need to decide on some settings, there's a button somewhere on the same page that leads you directly to the portion of the documentation you need right now. I don't know why this seems like such a luxury; every piece of software should have something like this.

    Next steps

    My "server" (some mini PC with spare parts I already had) is getting too weak for the workload I put it through, so I'm going to migrate to a better "server". I already have a PC and most of the necessary parts, I just need some SSDs and an AMD CPU.

    Even migrating from PVE (old) -> PVE (new) couldn't be easier:

    • PVE (old): create last backup to PBS, shut down PVE (old)
    • PVE (new): add PBS, restore Backups
    • ???
    • profit
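
    On the CLI, those steps look roughly like this (the storage name `pbs-store`, VM ID 100, server address, and credentials are all placeholder examples; the same can be done through the web UI):

    ```shell
    # On the old node: take a final backup of VM 100 to the PBS storage
    vzdump 100 --storage pbs-store --mode stop

    # On the new node: add the same PBS instance as a storage backend
    # (fingerprint and password are placeholders)
    pvesm add pbs pbs-store --server pbs.example.lan --datastore backups \
        --username backup@pbs --password <secret> --fingerprint <sha256>

    # List the available backups, then restore the chosen one as VM 100
    pvesm list pbs-store
    qmrestore pbs-store:backup/vm/100/2024-01-01T00:00:00Z 100 --storage local-lvm
    ```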

    I think it's great to have a series of posts about personal achievements and troubles with selfhosting. There's so much software out there; you always get to see someone doing something you didn't even know could be done, or using software you didn't realize existed. Sharing is caring.

  • I'm new to all this.

    Synology: I was using Synology before and getting started with trying some Docker containers. The Synology was very underpowered, and containers kept crashing or being shut down (from running out of resources, I guess), so I wanted to upgrade.

    Comments seemed to suggest it is best to keep the Synology as purely a NAS and use a mini PC for compute, so that's what I went for. Got a 12th Gen Intel mini PC pretty cheap on eBay to play around with.

    Debian - I've put Debian with KDE on the mini PC server. I was looking into TrueNAS and Unraid to decide what I should try learning. My brother (rightly) said there's no reason to overcomplicate things when I don't need the functions of those OSes and don't understand them. The one place the Linux community seems to be united is in recommending Debian for a server for being rock solid and stable. I've been very happy with it.

    Spent my week off figuring out Docker, mounting NAS drives on the server PC, and troubleshooting the problems. Got a setup I'm really happy with, and I'm glad I went with Debian.
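
    For reference, mounting a NAS share on the server and handing it to a container looks roughly like this (the NAS IP, export path, mount point, and the Jellyfin container are illustrative examples, not the commenter's exact setup):

    ```shell
    # Install the NFS client tools (Debian)
    sudo apt install nfs-common

    # Create a mount point and mount a share exported by the NAS
    sudo mkdir -p /mnt/nas/media
    sudo mount -t nfs 192.168.1.10:/volume1/media /mnt/nas/media

    # Make it permanent with a line in /etc/fstab:
    #   192.168.1.10:/volume1/media  /mnt/nas/media  nfs  defaults,_netdev  0  0

    # Bind-mount the share into a container (read-only here)
    docker run -d --name jellyfin \
        -v /mnt/nas/media:/media:ro \
        -p 8096:8096 jellyfin/jellyfin
    ```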

    • Debian - I've put Debian with KDE on the mini PC server.

      Save resources on the mini PC by getting rid of KDE; desktop environments can take quite a lot of resources to run!

      If you aren’t familiar with the Bash shell, it’s essentially the heart of every GNU/Linux operating system; no need for a clunky GUI on a server.

      Key commands:

      • cd == Change Directory
      • sudo == Root privileges
      • mkdir == Make directory
      • rm == Remove a file (add -r for directories, -f to force without prompting)
      • touch == Make a new file
      • nano == Text/File editor
      • cat == Read file contents and print to shell

      Commands don’t need to be complicated! For example, nano /home/SomeUser/Downloads/SomeRandom.txt will open the text editor on SomeRandom.txt in SomeUser's Downloads directory.
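
      Put together, a tiny throwaway session using the commands above (the paths are just examples):

      ```shell
      # Create a working directory and move into it
      mkdir -p /tmp/bash-demo
      cd /tmp/bash-demo

      # Create an empty file, then write to it and read it back
      touch notes.txt
      echo "hello from the shell" > notes.txt
      cat notes.txt    # prints: hello from the shell

      # Clean up: remove the directory and its contents
      cd /tmp
      rm -r /tmp/bash-demo
      ```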

      • Thanks. I do know almost all those commands, but I'm not quite comfortable with using konsole/SSH exclusively yet. KDE is what I'm most familiar with from my desktop PC and I thought it would be easier to set up knowing where settings etc are. Also I use a Guacamole Docker app to access the server's desktop (my personal machine) when I need to do some personal task while at work. That may change as I get better at this and learn more.

        Edit: I don't want to mess with the server now, but I'll try to put LXQt on it at some point to save some resources. I don't trust myself to remove KDE cleanly and install a different DE without destroying the setup.

    • I have pretty much the same setup. Works like a charm.

      • What are you running on your server? I'm looking for more ideas.

        I've got loads of stuff up and running, but now it is all quietly functional and I'm in withdrawal from the enjoyment of setting up something new. I've recently had to delete a couple of Docker apps which weren't really very useful for me, but I enjoyed setting them up and liked seeing a long list of healthy containers in Dockge.

  • I think this is a great idea. With a concept as foundational as the OS, there are so many options, and each one can change the very core of one's selfhosting journey. Expanding to different services and the different ways to manage everything could then be a great discussion for every experience level.

    I myself have been considering Proxmox with LXCs deployed via the Community Scripts repo versus bare metal running a declarative OS with Docker compose or direct packages versus a regular Ubuntu/Debian OS with Docker compose. I am hoping to create a self-documenting setup with versioning via the various config and compose files, but I don't know what would end up being the most effective for me.

    I think my overarching deployment strategy is portability. If it's easy to take a replacement PC, get a base install loaded, then have a setup script configure the base software/user(s) and pull config/compose files and start services, and then be able to swap out the older box with minimal switchover or downtime, I think that's my goal. That may require several OS tools (Ansible, NixOS config, Docker compose, etc.) but I think once the tooling is set up it will make further service startups and full box swaps easier.
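
    A bootstrap script in that spirit could be sketched like this (the repo URL, paths, and directory layout are all hypothetical; install Docker/Compose per your distro's documentation):

    ```shell
    #!/bin/sh
    # Hypothetical bootstrap for a replacement box.
    # Assumes a fresh Debian install; the repo and layout are made up.
    set -e

    # Base tooling
    sudo apt update && sudo apt install -y git docker.io

    # Pull the versioned config/compose files
    git clone https://example.com/me/homelab.git /opt/homelab

    # Bring up every service defined in the repo
    for dir in /opt/homelab/services/*/; do
        docker compose -f "${dir}compose.yaml" up -d
    done
    ```

    With the state in git and the data on a NAS or synced volume, swapping boxes reduces to running this script on the new machine.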

    Currently I have a single machine on which I started spinning up services with Docker compose, but without thought to those larger goals. So now, if I need to fiddle with that box and reboot it or take it offline, all my services go down. I think my next step is to come up with a deployment strategy that remains consistent, and then use that strategy to segment services across several physical machines, so that critical services (router, DNS, etc.) wouldn't be affected if I was testing out a new service and accidentally crashed a machine.

    I love seeing all the different ways folks deploy their setups because I can see what might work well for me. I'm hoping this series of discussions will help me flesh out my deployment strategy and get me started on that migration.

  • openSUSE MicroOS

    I've only tried it out on a VPS, so I'm not completely sold on it yet, but I do think I'll be switching to it eventually. I'm currently on Leap, but since almost everything is containerized, I'm not getting much benefit from the slow release cycle.

    For your questions:

    Why would you want to try it out? Did you try it out already? What worked great? What didn't?

    The main appeal is unattended, atomic updates using bleeding edge packages. You keep your apps as separate from the base system as possible (containerized), and the base handles itself.

    My main issue is with the toolbox utility, which runs a container holding userland utilities for debugging. So far it has been buggy with the unprivileged user I configured, and I'd really rather not log in as root. I've worked around it for now, but it leaves a lot to be desired.

    Where are you stuck right now? What are your next steps?

    Mostly figuring out how I want to handle my VPN (for exposing LAN services to the outside world) config. My options are:

    • containerize, and configure iptables rules to route traffic properly
    • install the needed tools to the base system and configure it on the host

    The main sticking point is that I need HAProxy in front to route traffic to the right device, so the VPN and HAProxy need to talk to each other. The easiest solution is to put both on the host, but that defeats the whole point of MicroOS. The ideal is to have both the VPN and HAProxy containerized, but I ran into some issues with podman.
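
    One containerized layout that might work is a shared podman network, so HAProxy can reach the VPN container by name (the images, ports, and config path here are illustrative, not the commenter's actual configuration):

    ```shell
    # Shared network so the two containers can resolve each other
    podman network create edge

    # WireGuard endpoint; NET_ADMIN is needed to manage its interface
    podman run -d --name wg --network edge \
        --cap-add NET_ADMIN \
        -p 51820:51820/udp \
        docker.io/linuxserver/wireguard

    # HAProxy in front, with a config whose backends point at "wg"
    # and other containers by name on the shared network
    podman run -d --name haproxy --network edge \
        -p 80:80 -p 443:443 \
        -v ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
        docker.io/library/haproxy
    ```

    That keeps both services off the host, which fits the MicroOS model of leaving the base system untouched.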

    Why do you think it is the best tool for this job? Is it aimed at beginners or veterans?

    This is definitely a veteran system right now, but I think it's ideal because it means I can completely automate system updates and not worry about my apps breaking. It also means I can automate setting up a new server (say, if I move to a different VPS) or even new OS since I only need to deploy my containers and don't need anything special from the OS setup.

    I'm also playing with Aeon on my laptop, but that's going a lot less smoothly than MicroOS on the server.

  • Hypervisor: Gotta say, I personally like a rather niche product. I love Apache Cloudstack.

    Apache Cloudstack is actually meant for companies providing VMs and K8S clusters to other companies. However, I've set it up for myself in my lab accessible only over VPN.

    What I like best about it is that it is meant to be deployed via Terraform and cloud init. Since I'm actively pushing myself into that area and seeking a role in DevOps, it fits me quite well.

    Standing up a K8S cluster on it is incredibly easy. Basically it is all done with cloud-init, and the process is quite automated. In fact, it took me 15 minutes to stand up a 25-node cluster with 5 control nodes and 20 worker nodes.

    Let's compare it to other hypervisors though. Well, Cloudstack is meant to handle global operations. Typically, Cloudstack is split into regions, then zones, then pods, then clusters, and finally hosts. Let's just say it can get very, very large if you need it to. Only it's free. Basically, if you have your own hardware, it is more similar to Azure or AWS than to VMware. And none of that costs any licensing.

    Technically speaking, Cloudstack Management is capable of handling a number of different hypervisors if you would like it to. I believe that includes VMware, KVM, Hyper-V, OVM, LXC, and XenServer. I think it is interesting because even if you choose another hypervisor that you prefer, it will still work. This is mostly meant as a transition path to KVM, but it should still work, though I haven't tested it.

    I have, however, tested it with Ceph for storage, and it does work. Perhaps doing that is slightly more annoying than with Proxmox. But you can actually create a number of different storage types if you want to take the cloud-provider route (HDD vs. SSD tiers, for example).

    Overall, I like it because it works well for IaaS. I have 2000 VLANs primed for use with its virtual networking. I have one host currently joined, with a second host in line for setup.

    Here is the article I used to get it initially set up, though I will admit that I personally used a different VLAN for the management IP than for the public IP VLAN. http://rohityadav.cloud/blog/cloudstack-kvm/

  • I have been using Proxmox VE with Docker running on the host (not managed by Proxmox), plus Cockpit to manage NFS shares, and Home Assistant OS running in a VM. It's been pretty rock solid. That was until I updated to version 9 last night; it's been a nightmare getting the Docker socket to be available. I think Debian Trixie may have some extra layers of protection. I haven't investigated it too much, but my plan for this week is to migrate everything to Debian 12, as that's the tried-and-true OS for me, and I know it's quite stable with Cockpit, Docker, and so forth, with KVM for my Home Assistant installation.
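
    When the Docker socket refuses connections, the usual first checks look something like this (generic troubleshooting, nothing specific to the commenter's box):

    ```shell
    # Is the daemon running, and does the socket exist?
    systemctl status docker
    ls -l /var/run/docker.sock

    # The socket is normally owned root:docker; non-root users must be
    # in the docker group (log out and back in after adding)
    getent group docker
    sudo usermod -aG docker "$USER"

    # Finally, talk to the socket directly to confirm access
    docker info
    ```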

    One other OS for consideration, if you want to check it out, is XCP-ng, which I played with; Home Assistant on it was blazing fast. But they don't allow NFS shares to be created, and using the existing data on my drives was not possible, so I would've had to format them.

  • Favorite heavyweight Type 1 hypervisor: XCP-ng. It's open source, runs on a ton of enterprise and consumer-grade hardware, and has always been rock stable for me. Even when I forgot to update it for like 6 months, it still ran everything like a champ.

    I need to try Proxmox; it has some cool features. XCP-ng is pretty intuitive though; the UI makes sense and is cleaner than Proxmox's. The integration of Proxmox with the Incus project is pretty cool though, especially being able to run VMs and containers and manage them together. I've been thinking of trying that and seeing how it goes.

    For containers, I just install Debian and run Docker on there. Stable, simple, nothing fancy. If I need something more up to date, I typically use Ubuntu Server.

  • I'm gonna be simple: Syno DSM with Portainer.

    Hardware and software. Simple, for my simple needs.

    • My old DS916+ is great at the file services but too weak for computing, so I have a reclaimed business laptop for the services. I could not imagine running anything on the DS.

      • I run jellyfin, freshrss, actualbudget and a few other services.

        Just what I need :)

    • Hypervisor: Debian stable + libvirt or PVE if you need clustering/HA
    • VMs: Debian stable
    • podman if you need containerization below that
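
      As a sketch of that stack, creating a Debian VM under libvirt might look like this (the ISO path, VM name, and sizes are examples; package names can vary by Debian release):

      ```shell
      # Install the virtualization stack on Debian stable
      sudo apt install qemu-system libvirt-daemon-system virtinst

      # Create a small Debian VM from an installer ISO
      virt-install \
          --name debian-vm \
          --memory 2048 --vcpus 2 \
          --disk size=20 \
          --cdrom /var/lib/libvirt/images/debian-12.iso \
          --os-variant debian12

      # Manage it afterwards with virsh
      virsh list --all
      ```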
  • I run several different ones. Debian is the most common, Ubuntu Server runs a few, and I have a couple of TrueNAS SCALE instances simply because they have run TrueNAS for years and work well. One is local-network only; another is reachable from outside but is used for storage and storage alone, via S3/MinIO, SFTP, and Duplicati.

  • archlinux + podman / libvirtd + nomad (libvirt and docker plugins) + ansible / terraform + vault / consul sometimes

    UPD:

    archlinux - base OS. You never need to migrate between major versions, and that is great. I update core systems every weekend.

    podman / libvirtd - the two core abstraction types: podman for Docker-container management, libvirtd for VM management.

    nomad - the HashiCorp orchestrator. You can run an exec task, a Java application, a container, or a virtual machine in one uniform way with it. It can integrate with podman and libvirtd.

    ansible - VM configuration playbooks + core system updates

    terraform - engine for deploying nomad jobs (Docker containers, VMs, execs, or something else)

    Vault - K/V storage. I keep secrets for containers and VMs here.

    consul - service networking solution if you need a really heavy-duty network layer

    As a result, I'm not really sure whether it counts as a simple setup or a complex one, but it's very flexible and convenient for me.

    UPD2: As a result, I described the application level, but in fact it is all one very beefy server on an AMD Epyc running archlinux. XD By the way, the lemmy node from which I write runs on that very machine. =) And yes, it's still selfhosted.

  • Hypervisor: Proxmox (fuck Hyper-V: It's good but soo annoying. Fuck ESXi cuz Broadcom).

    General purpose OS (for servers): Debian (and OMV)
