Homelab upgrade - "Modern" alternatives to NFS, SSHFS?
Hi all!
I will soon acquire a pretty beefy unit compared to my current setup: a 3-node cluster, each node with 16 cores, 512 GB of RAM, and 32 TB of storage.
Currently I run TrueNAS and Proxmox on bare metal and most of my storage is made available to apps via SSHFS or NFS.
I recently started looking for "modern" distributed filesystems and found some interesting S3-like/compatible projects.
To name a few:
MinIO
SeaweedFS
Garage
GlusterFS
I like the idea of abstracting the filesystem to allow me to move data around, play with redundancy and balancing, etc.
My most important services are:
Plex (Media management/sharing)
Stash (Like Plex 🙃)
Nextcloud
Caddy with Adguard Home and Unbound DNS
Most of the Arr suite
Git, Wiki, File/Link sharing services
As you can see, a lot of downloading/streaming/torrenting of files across services. The smaller services run in a Docker VM on Proxmox.
Currently the setup is messy due to the organic evolution of my setup, but since I will upgrade on brand new metal, I was looking for suggestions on the pillars.
So far, I am considering installing a Proxmox cluster with the 3 nodes and host VMs for the heavy stuff and a Docker VM.
How do you see the file storage portion? Should I take a full/partial plunge into S3-compatible object storage? What architecture/tech would be interesting to experiment with?
Or should I stick with tried-and-true, boring solutions like NFS Shares?
Most of the things you listed require some very specific constraints to even work, let alone work well. If you're working with just a few machines, no storage array or high bandwidth networking, I'd just stick with NFS.
By default it's unencrypted and unauthenticated, and permissions rely on user IDs the client can fake.
That may or may not be a problem in practice; it depends on your personal threat model.
Mine are read-only and unauthenticated because they're just media files, but I did add (strictly unnecessary) encryption via kTLS because it wasn't too hard to set up (I already had a valid certificate to reuse).
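For reference, here's a sketch of what a read-only, TLS-encrypted NFS export can look like with RPC-with-TLS on recent kernels. The paths, subnet, and hostname are placeholders, and this assumes a kernel with kTLS support plus nfs-utils with the `tlshd` handshake daemon running on both ends:

```shell
# /etc/exports on the server: read-only, no client auth,
# but the transport is encrypted via xprtsec=tls.
# /tank/media  192.168.1.0/24(ro,all_squash,xprtsec=tls)

# Client side: request TLS on the mount.
mount -t nfs4 -o ro,xprtsec=tls nas.lan:/tank/media /mnt/media
```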
It is a pain to figure out how to give everyone the same user ID. I only have a couple of computers at home, and I've never figured out how to make LDAP work (especially for laptops that might not have network access when I'm on the road). Worse, some systems start user IDs at 1000, some at 1001. NFS is a real mess, but I use it because I haven't found anything better for Unix.
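One way to sidestep the UID-matching problem without LDAP is to squash every client to a single fixed identity on the server with the standard `all_squash`/`anonuid` export options. The uid/gid of 3000 and the paths here are made up for the example:

```shell
# Create a dedicated identity on the server with a fixed uid/gid.
groupadd -g 3000 shared
useradd -u 3000 -g 3000 -M -s /usr/sbin/nologin shared

# /etc/exports: map every client, regardless of its local uid,
# to the 'shared' user above.
# /tank/shared  192.168.1.0/24(rw,all_squash,anonuid=3000,anongid=3000)
```

You lose per-user permissions, but for a small home setup where every machine belongs to you anyway, that's often an acceptable trade.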
Gluster is really bad; Garage and MinIO are great. If you want something battle-tested and insanely powerful, go with Ceph: it has everything. Garage is fine for smaller installations, but it's very new and not that stable yet.
I had a great experience with Garage at first, but it crapped itself after a month. That was about half a year ago and the problem has since been fixed, but it still left me with a bit of anxiety.
Your workload just won't see much difference with any of them, so take your pick.
NFS is old, but if you add security constraints, it works really well. If you want to tune for bandwidth, try iSCSI; bonus points if you get ZFS-over-iSCSI working with a tuned block size. That last one is blazing fast if you have ZFS at each end and use ZFS snapshots.
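A rough sketch of the ZFS-over-iSCSI idea: carve out a zvol with a tuned block size and export it over iSCSI with targetcli (the Linux LIO target). The pool name, size, volblocksize, and IQN are all invented for the example; tune them for your workload:

```shell
# Create a 500G zvol with a 64K block size (pick volblocksize
# to match the workload, e.g. VM disk images).
zfs create -V 500G -o volblocksize=64K tank/vm-disk0

# Export it over iSCSI via targetcli/LIO.
targetcli /backstores/block create name=vm-disk0 dev=/dev/zvol/tank/vm-disk0
targetcli /iscsi create iqn.2024-01.lan.nas:vm-disk0
targetcli /iscsi/iqn.2024-01.lan.nas:vm-disk0/tpg1/luns create /backstores/block/vm-disk0
targetcli saveconfig
```

Since the backing store is a zvol, you can snapshot and `zfs send` it like any other dataset, which is where the snapshot speed comes from.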
Beyond that, you're getting into very tuned SAN territory, which people build careers on; it's a real rabbit hole.
I'm using ceph on my proxmox cluster but only for the server data, all my jellyfin media goes into a separate NAS using NFS as it doesn't really need the high availability and everything else that comes with ceph.
It's been working great. You can set everything up through the Proxmox GUI, and it shows up like any other storage for the VMs. You do need enterprise-grade NVMe drives, though, or Ceph will chew through them in no time. Also give Ceph a separate network connection if you're moving a lot of data.
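The separate-network part is a standard Ceph config knob: you can put replication/recovery traffic on its own subnet so it doesn't fight your VM traffic. A minimal fragment, with example subnets (set this before deploying OSDs, or via the Proxmox Ceph setup dialog):

```ini
; /etc/ceph/ceph.conf
[global]
; client-facing traffic
public_network = 192.168.1.0/24
; OSD replication/heartbeat traffic on a dedicated link
cluster_network = 10.10.10.0/24
```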
I use Ceph/CephFS myself for my own 671TiB array (382TiB raw used, 252TiB-ish data stored) -- I find it a much more robust and better architected solution than Gluster. It supports distributed block devices (RBD), filesystems (CephFS), and object storage (RGW). NFS is pretty solid though for basic remote mounting filesystems.
If you want to try something that’s quite new and mostly unexplored, look into NVMe over TCP. I really like the concept, but it appears to be too new to be production ready. Might be a good fit for your adventurous endeavors.
This is just a block device over the network; it won't cover the use cases OP is asking about. You still need a filesystem and a file-serving service on top of it.
I agree, but it’s clear that OP doesn’t want a real solution, because those apparently are boring. Instead, they want to try something new. NVMe/TCP is something new. And it still allows for having VMs on one system and storage on another, so it’s not entirely off topic.
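For the curious, the client side of NVMe/TCP is just a few `nvme-cli` commands; the target address and NQN below are placeholders for whatever your storage box exports:

```shell
# Load the NVMe-over-TCP transport module.
modprobe nvme-tcp

# See what the target advertises (address/port are examples).
nvme discover -t tcp -a 192.168.1.10 -s 4420

# Connect to a specific subsystem by its NQN.
nvme connect -t tcp -a 192.168.1.10 -s 4420 -n nqn.2024-01.lan.nas:storage0

# The remote namespace now appears as a local /dev/nvmeXnY device.
nvme list
```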
I've used MinIO as the object store on both Lemmy and Mastodon, and in retrospect I wonder why. Unless you have clustered servers and a lot of data to move it's really just adding complexity for the sake of complexity. I find that the bigger gains come from things like creating bonded network channels and sorting out a good balance in the disk layout to keep your I/O in check.
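That said, if OP does want to experiment, one upside of anything S3-compatible (MinIO, Garage, SeaweedFS) is that the same tooling talks to all of them. A quick sketch with the AWS CLI pointed at a self-hosted endpoint; the endpoint URL, credentials, and bucket name are placeholders:

```shell
# Placeholder credentials for a local MinIO/Garage instance.
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin

# Point the standard S3 tooling at your own endpoint.
aws --endpoint-url http://minio.lan:9000 s3 mb s3://backups
aws --endpoint-url http://minio.lan:9000 s3 cp ./photo.jpg s3://backups/
aws --endpoint-url http://minio.lan:9000 s3 ls s3://backups/
```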
I preach this to people everywhere I go, and seldom do they listen. There's no reason for object storage in a non-enterprise environment. Using it in homelabs is just... mostly insane.
Generally yes, but it can be useful as a learning thing. A lot of my homelab use is for practicing with different techs in a setting where, if it melts down, it's just your stuff. At work they tend to take offense if you break prod.
Fam, the modern alternative to SSHFS is literally SSHFS.
All that said, if your use case is mostly downloading and uploading files, not moving them between remotes, then overlaying WebDAV on whatever you feel comfortable with (which is already what e.g. Nextcloud does, IIRC) should serve you well.
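As a concrete illustration of that: a WebDAV share like Nextcloud's can be mounted as a regular filesystem with davfs2. The hostname and username below are examples; Nextcloud's DAV endpoint lives under `remote.php/dav/files/<user>`:

```shell
# Install the WebDAV filesystem driver (Debian/Ubuntu).
apt install davfs2

# Mount the Nextcloud DAV endpoint like any other filesystem;
# davfs2 prompts for credentials (or reads /etc/davfs2/secrets).
mount -t davfs https://cloud.example.lan/remote.php/dav/files/alice /mnt/cloud
```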