
When hosting on my own machine, should I use a virtual machine or not?

Hey all,

Just wondering what the consensus is on hosting directly on a computer versus virtualization? Right now I'm hosting my Lemmy instance on a Hetzner VPS, but I would like to eventually migrate to my Linux box at home. It currently runs as a media PC (Ubuntu) in the living room, but I always intended to self-host other software on the side since it should be more than capable of doing both (Ryzen 5600G, 16gb DDR4).

I'm just torn though - should I host a virtual machine on it for Lemmy, or run it directly on Ubuntu as-is? I plan to do some further self-hosting projects as well later down the line.

16 comments
  • Run everything in docker containers. Much easier to manage than virtual machines and lighter on resources.

  • I run everything in docker compose and the two wins I feel like I get out of doing so are:

    • State in well-identified volumes. I've run multiple services on shared bare metal, and multiple services in isolated VMs. In both cases, I was equally terrified of upgrades and migrations because I frequently had no idea where in the filesystem critical data was stored. I would often have some main data dir I knew about, but little confidence that other important data wasn't stored elsewhere. With docker volumes, if I can restart a container without losing data I can migrate to a new host without losing data just by copying my volume dirs. And I restart containers frequently so I always have high confidence in my ability to backup, upgrade, and migrate.
    • Library and OS-userspace isolation. This is the thing Docker is known for: I never worry that upgrading a lib for app-a is going to break app-b. This is really a secondary benefit for me, though. I rarely had this problem even on shared metal with many apps, and never experienced it in isolated VMs. For me it's a secondary benefit to the nice volume management tools.

    Docker is in no way unique in its ability to provide state management, but ephemeral containers enforce good state management, and the docker tools for doing it provide a very nice experience, so it is my preference.
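
    For example, here's a minimal compose file sketch of that volume pattern (the service name, image, and paths are hypothetical, not any specific app's):

    ```yaml
    services:
      app:
        image: example/app:latest    # hypothetical image
        volumes:
          - app_data:/var/lib/app    # all persistent state lives here

    volumes:
      app_data:    # named volume: easy to find, back up, and migrate
    ```

    With everything in `app_data`, migrating to a new host is just stopping the container, copying the volume's contents over, and starting it again there.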

  • Using your home server as a VM host will put more load onto it. Creating one VM per service (e.g. Lemmy, a web server, an IRC bouncer, etc.) is a good idea in terms of "replaceability", because you can easily upgrade the OS, restore a service to an earlier point in time, redistribute resources, and so on. The main downside is that you'll add a big overhead to each service, as each VM requires a full OS to boot. So if you run 5 services in 5 VMs, you'll have 6 OS instances consuming RAM, disk and CPU even at idle.

    If you can afford that, then do it! It makes for a better separation of concerns and is definitely easier to maintain in the long run.

    Otherwise, you have two solutions. The first one is using containers. It's just like the previous method, except that it won't boot a full OS; it just starts your service using the already-running kernel. Many self-hosters use this method, so it's well documented, and it's very likely you'll find a container "package" for any service you might want to run. Note however that it adds another layer of complexity in terms of resource sharing (especially the networking part) that you'll have to understand in order to secure properly.

    The last method is hosting your services directly on bare metal. This is the simplest method IMO, but it gets tricky when the software you need is not packaged for your distribution, as you'll need to handle the service script and upgrades manually. It is however the lightest method regarding resource usage, as there is absolutely no overhead to running a service. And depending on the OS you'll be using (in your case Ubuntu, as it seems), you will benefit from containerization-like resource control thanks to systemd and cgroups (other mechanisms exist to control resources, but this one's fine).
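
    As a sketch of that systemd/cgroups resource control (the unit name, binary path, and user are hypothetical, not Lemmy's actual setup):

    ```ini
    # /etc/systemd/system/lemmy.service -- hypothetical unit
    [Unit]
    Description=Lemmy server
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/lemmy_server
    User=lemmy
    # cgroup-based resource limits, no container required
    MemoryMax=2G
    CPUQuota=100%

    [Install]
    WantedBy=multi-user.target
    ```

    `MemoryMax` and `CPUQuota` are standard systemd resource-control directives, so you get much of the resource isolation of a container while still running on bare metal.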

    My personal preference goes to the last one (running on bare metal), but that's also because I run OpenBSD as my main OS, which is definitely well suited for this method. I also do not like the docker ecosystem, as I think it trades too much security for usability (but that's a very personal take).

    In any case, happy self hosting my friend !

  • It does not hurt to create a VM or LXC for it. The overhead is not prohibitive. If things go astray, you can just kill the VM or LXC and restart from scratch. Though setup and management can be an issue, as you have to install at least some VM management software on your media PC.

  • One thing I think about is isolation. Do you want/need to strongly isolate the software and its data from the host operating system?

  • If you use a full VM you'll lose plenty of performance, and I don't think it'll cope really well with domain names. If you really want to go the "keep everything separated" route, use container software like Docker. It'll use the same kernel as the host, so no weird networking rerouting/bridging etc.

    I don't have any experience with containers, since I run all of my "homelab" bare metal on a Pi, and with this approach I never faced any issues. Containers could be useful if you were running something unorthodox like Gentoo and you need software that won't work on it, even when compiled, but that exists as a package on another distro. Then you can just spin up a container for that distro, install the software, et voilà, you're ready to go.

    AFAIK there shouldn't be a package for Lemmy on any distro, so just clone the source code and compile it; it should be fairly distro-agnostic. Maybe you could compile it in a container to keep your host clean of compile dependencies, but other than that, there's no real gain. I like to compile stuff, so having a shitload of dependencies already there is pretty handy for me, but for a production system, it's better to keep it clean.
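
    To illustrate that "compile in a container to keep the host clean" idea, here's a multi-stage Dockerfile sketch (the base images and binary name are assumptions for illustration, not Lemmy's actual build):

    ```dockerfile
    # Build stage: all compile dependencies stay in this throwaway image
    FROM rust:1.75 AS builder
    WORKDIR /src
    COPY . .
    RUN cargo build --release

    # Runtime stage: only the compiled binary is carried over
    FROM debian:bookworm-slim
    COPY --from=builder /src/target/release/myapp /usr/local/bin/myapp
    CMD ["myapp"]
    ```

    The build toolchain and dependency tree get thrown away with the builder stage, so neither the host nor the final image is polluted by them.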
