  • Definitely possible! But as the other commenters have pointed out, there are some costs/tradeoffs to be aware of. I'll start by answering your questions. Power consumption could technically be lower when sharing a system due to less overhead (only one mobo, RAM, etc.), but most of the power draw is CPU/GPU, so I don't think you'd see a huge difference. Likewise, an always-on VM vs. sleeping/shutting down when you're not using it should have a marginal effect. Another commenter mentioned it, but always-on isn't a problem. Sustained elevated drive temperatures can be an issue, but here you're really looking at elevated CPU/GPU temps, which won't be. The bigger issue is temperature cycling, but even then, consumer hardware is derated to last 10-20 years as long as you aren't overvolting and you keep up with periodic repaste/repadding (every 5 years or so is typically recommended). Finally, for turning on your VM, I'd recommend just leaving it on. Alternatively, you could send an ssh command as you stated.
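    For the ssh route, here's a minimal sketch of a wrapper that starts a libvirt VM on the hypervisor from another machine. The host name "server" and VM name "desktop" are placeholders, and it assumes the hypervisor exposes the VM through libvirt/virsh:

```shell
# Sketch: wake a libvirt VM on the hypervisor from another machine over ssh.
# "server" and "desktop" are placeholder names for your host and VM.
wake_vm() {
    host="${1:-server}"
    vm="${2:-desktop}"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Print what would run instead of running it (handy for testing).
        echo "ssh $host virsh start $vm"
    else
        ssh "$host" "virsh start $vm"
    fi
}

# Example: wake_vm myserver gaming-vm
```

    You could bind this to a shortcut or run it from your phone's ssh client before connecting.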

    Having a hypervisor server with VMs is very common and well documented if you only need VNC/ssh. Be aware, though, that any server maintenance/reboots will also disrupt the desktop. Additionally, VNC doesn't support audio. I believe Windows Remote Desktop has audio, but I'm not sure about the quality.

    To get improved video/audio, you'll need a GPU, and once you add a GPU, things get trickier. First, your host/server will try to use it. There are ways to force the host not to use the GPU, but that can be tricky. Alternatively, you can look into VFIO, which hands the GPU off to the VM when it's turned on; however, this is even trickier. Finally, you can install two GPUs (or use an iGPU/APU if applicable) and pass one through to the VM. Last I looked, NVIDIA and AMD are both viable options, and this is now easier than ever. Regardless, if you plan on gaming, you should know some games (specifically ones with anticheat) will block you for playing in a VM. All that said, desktop-on-server has some drawbacks but is still a great option. Your next step is choosing your hypervisor.
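    If you go the two-GPU passthrough route on a Linux host with libvirt, the host-side setup is mostly kernel and modprobe configuration. A rough sketch (the PCI IDs below are placeholders; find your own with `lspci -nn`):

```shell
# 1. Find the passthrough GPU's vendor:device IDs (video + audio function):
#      lspci -nn | grep -i vga
# 2. Enable the IOMMU and reserve the GPU for vfio-pci via kernel parameters
#    in /etc/default/grub (then update-grub and reboot). The IDs here are
#    placeholders -- substitute your own:
#      GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0"
# 3. Make sure vfio-pci claims the card early, e.g. /etc/modprobe.d/vfio.conf:
#      options vfio-pci ids=10de:1b81,10de:10f0
# 4. In virt-manager (or virsh edit), attach the GPU and its audio function
#    to the VM as PCI host devices.
```

    On AMD platforms the equivalent parameter is `amd_iommu=on`; the details vary by distro, so treat this as an outline rather than a recipe.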

  • I've tried MotionEye, ZoneMinder, Shinobi CCTV, Blue Iris, and Frigate NVR.

    I couldn't get MotionEye to work, but I'll blame that on me being a noob (especially at the time).

    ZoneMinder was stable, but the UI is a bit weak and, to my knowledge, it doesn't have person detection. You can work around the UI by using Home Assistant as a front end.

    Shinobi CCTV has the best UI, but I found it to be a buggy mess: person detection was difficult to implement, and it didn't play nice with Home Assistant.

    Blue Iris is solid but requires a license and Windows. I have the least experience with it, but it seemed decent.

    Ultimately, I landed on Frigate NVR, and it's my favorite so far. It's very solid/stable, has built-in object/person detection with simple support for hardware acceleration, and its UI is simple but passable. Personally, I use Home Assistant as a front end for WAF, but the built-in UI isn't bad and shows all your person-detection events. Also, unlike all of the above, configuration is done through a text file. While this may seem daunting at first, the docs are very good, and it becomes copy/paste after the first camera (makes backups easy too).
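    To give a feel for the text-file setup, here's a minimal sketch of a Frigate config for one camera. The camera name, IP, and credentials are placeholders, and the exact schema shifts a bit between Frigate versions, so check their docs before copying:

```yaml
# Minimal Frigate config sketch -- camera name, address, and credentials
# are placeholders for your own setup.
mqtt:
  enabled: false

cameras:
  front_door:
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.20.10:554/stream
          roles:
            - detect
            - record
    detect:
      width: 1280
      height: 720

objects:
  track:
    - person
```

    Adding a second camera really is just copying the `front_door` block and changing the name and stream URL.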

    For hardware, Frigate has recommendations on their site. A cheap PC will do the job, ideally with an Intel processor for hardware acceleration. For cameras, I've had the best luck with Amcrest. Just make sure you put whatever cameras you get on their own restricted VLAN with no internet access. Feel free to reach out if you have any other questions.
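    For the VLAN isolation, the idea is a firewall rule that blocks the camera VLAN from reaching the internet while the NVR box can still reach the cameras. A sketch as an nftables fragment, where "vlan20" and "eth0" are placeholder names for your camera VLAN and WAN uplink:

```
# /etc/nftables.conf fragment -- "vlan20" (cameras) and "eth0" (WAN uplink)
# are placeholder interface names.
table inet filter {
    chain forward {
        type filter hook forward priority filter; policy accept;
        # The accept policy lets the rest of the LAN (e.g. the Frigate box)
        # reach the cameras; this rule drops anything the cameras try to
        # send out to the internet.
        iifname "vlan20" oifname "eth0" drop
    }
}
```

    Most consumer routers with VLAN support (or OPNsense/OpenWrt) can express the same thing in their firewall UI.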