How do you guys back up your server?

I have a home server that I’m using to host files. I’m worried about it breaking and me losing access to the files. So what method do you use to back up everything?

120 comments
  • cronjobs with rsync to a Synology NAS and then to Synology's cloud backup.
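
    Roughly what that looks like, as a sketch with made-up paths and NAS hostname:

        # crontab entry: nightly rsync of the file share to the Synology at 2am
        0 2 * * * rsync -a --delete /srv/files/ backup@synology:/volume1/backups/files/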

  • I have everything in its own VM, and Proxmox has a pretty awesome built-in backup feature. Three different backups (one night to my NAS, the next to an on-site external, the next to an external that's swapped weekly with one at work). I don't back up the Proxmox host, because reinstalling it should it die completely is not a big deal. The VMs are the important part (rough sketch of the backup job at the end of this comment).

    I have a mini PC I use to spot-check VM backups once a month (full restore on its own network, check it's working, delete the VM after).

    My Plex NAS only backs up the movies I really care about (everything else I can "re-rip from my DVD collection").
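
    Proxmox's built-in backup is vzdump under the hood; one of those nightly jobs boils down to roughly this (VM ID and storage name are made up):

        # back up VM 100 with a live snapshot to a storage named 'nas-backup'
        vzdump 100 --mode snapshot --storage nas-backup --compress zstd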

  • Different ways for different types of files.

    Anything important is shared between my desktop PCs, servers and my phone through Syncthing. Those Syncthing folders are all also shared with two separate servers (in two separate locations) with hourly, daily, weekly, monthly volume snapshotting. Think your financial administration, work files, anything you produce or write, your main music collection, etc... It's also a great way to keep your music in sync between your desktop PC and your phone.

    Servers have their configuration files, /etc, /var/log, /root, etc... rsynced every 15 minutes to the same two backup servers, also to snapshotted volumes (rough sketch at the end of this comment). That way, should any one server burn down, I can rebuild it in a trivial amount of time. This also goes for user profiles, document directories, ProgramData, and anything non-synced on Windows PCs.

    Specific data sets, like database backups, repositories and such, are also generally rsynced regularly, some to snapshotted volumes, some to regular ones, depending on the size and volatility of the data.

    Bigger file shares, like movies, TV shows, etc... I don't back up, but they're stored on a distributed GlusterFS, so any one server going down doesn't lose me everything just yet.

    Hardware will fail, sooner or later. You should see any one device as essentially disposable, and have anything of worth synced and archived automatically.
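
    The 15-minute config rsync mentioned above looks roughly like this (hostnames and destination paths are made up):

        # crontab entries: push config/state dirs to both backup servers every 15 minutes
        # -R keeps the full source paths under the destination
        */15 * * * * rsync -aR /etc /var/log /root backup1:/backups/$(hostname)/
        */15 * * * * rsync -aR /etc /var/log /root backup2:/backups/$(hostname)/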

  • Hourly backups with Borg, nightly syncs to B2. I've been playing around with ZFS snapshots too, but I don't rely on them yet.

    • I use the same method, but I wrote a script that makes snapshots, streams them to local files, and then rsync takes over and copies them to my server at home (rough sketch below).
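
      Roughly (dataset, dates and paths are examples, not the actual script):

          # snapshot, stream it to a local file, then let rsync ship it home
          zfs snapshot tank/data@2024-06-02
          zfs send tank/data@2024-06-02 > /backups/streams/data-2024-06-02.zfs
          rsync -a /backups/streams/ home-server:/backups/streams/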

  • I run Linux for everything; the nice thing is that everything is a file, so I use rsync to back up all my configs for physical servers. I can do a clean install, run my setup script, then rsync over the config files, reboot, and everyone's happy.

    For the actual data I also rsync from my main server to the others. Each server has a schedule for when it gets rsynced to, so I have a history of about 3 weeks.

    For virtual servers I just use the Proxmox built-in backup system, which works great.

    Very important files get encrypted and sent to the cloud as well (rough sketch at the end of this comment), but out of dozens of TB this only accounts for a few gigs.

    I've also never thrown out a disk or USB stick in my life and use them for archiving. Even if a drive is half dead, as long as it'll accept data I shove a copy of something on it, then label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half these drives end up not working. I keep most of these off-site. At some point I'll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just make no sense to bother with.
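
    One way the encrypted cloud step can look (archive path, cipher choice and the 'cloud:' rclone remote are examples):

        # tar up the important files, encrypt symmetrically (gpg prompts for a passphrase),
        # then push the encrypted archive to the cloud remote
        tar -cz /srv/important | gpg --symmetric --cipher-algo AES256 -o important.tar.gz.gpg
        rclone copy important.tar.gz.gpg cloud:important/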

  • For config files, I use tarsnap. Each server has its own private key and an /etc/tarsnap.list file which lists the files/directories to back up on it. Then a cronjob runs every week to run tarsnap on them (rough sketch at the end of this comment). It's very simple to back up and restore, as your backups are simply tar archives. The only caveat is that you cannot "browse" them without restoring them somewhere, but for config files it's pretty quick and cheap.

    For actual data, I use a combination of rclone and dedup (because I was involved in the project at some point, but it's similar to Borg). I sync it to Backblaze because that's the cheapest storage I could find, though I use dedup to encrypt the backup before sending it there. Restoration is very similar to tarsnap:

        dup-unpack -k keyfile snapshot-yyyymmdd | tar -C / -x [files..]

    Most importantly, I keep a note on how to back up/restore: Backup 101
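
    The weekly tarsnap cronjob boils down to roughly this (schedule and archive naming are examples; note the escaped % signs cron requires):

        # Sunday 3am: archive everything listed in /etc/tarsnap.list
        0 3 * * 0 tarsnap -c -f "$(hostname)-$(date +\%Y\%m\%d)" -T /etc/tarsnap.list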

  • I’m backing up my stuff over to Storj DCS (basically S3 but distributed over several regions) and it’s been working like a charm for the better part of a year. Quite cheap as well, similar to Backblaze.

    For me the upside was I could prepay with crypto and not use any credit card.

  • I run everything in containers, so I rsync my entire Docker directory to my NAS, which in turn backs it up to the cloud.
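
    A rough sketch of that sync, with example paths (stopping the containers first is optional, but keeps files from changing mid-copy):

        # quiesce containers so files are consistent, sync, restart
        cd /opt/docker && docker compose stop
        rsync -a --delete /opt/docker/ nas:/backups/docker/
        docker compose start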

  • I use Bacula to an external drive. It was a pain in the ass to configure, but once it's running it's super reliable and easily extended to other drives or folders.

  • Cronjobs and rclone have been enough for me for the past year or so. Interestingly, I've only needed to restore from a backup once after a broken update. It felt great fixing that problem so easily.
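
    That kind of setup is roughly one crontab line (remote name and paths are examples):

        # nightly rclone sync of the data directory to the configured remote
        30 1 * * * rclone sync /srv/data remote:server-backup --log-file /var/log/rclone.log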

  • My home server’s a Windows box, so I use Backblaze, which has unlimited storage for a reasonable fixed price. I have around 11TB backed up. I pay the extra few dollars for the extended 12-month retention of deleted files, which has saved me a few times when I needed to restore a file I couldn’t find.

    Locally I run StableBit DrivePool, and content is mirrored and pooled using that, which covers me for drive failures.

  • Almost all the services I host run in Docker containers (or userland systemd services). What I back up are the SQLite databases containing the config or plain data. Every day, my NAS rsyncs the DBs from my server onto its local storage, and I have Hyper Backup back up the backups into an encrypted S3 bucket. HB keeps the last n versions and manages their lifecycle. It's all pretty handy!
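
    One caveat with rsyncing live SQLite files is that a copy taken mid-write can be torn; a safer variant is to have the server write out a consistent copy first and let the NAS pull that (paths are examples):

        # write a consistent copy of the live database for the NAS to pick up
        sqlite3 /srv/app/data.db ".backup /srv/backups/data.db"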

  • ZFS array using striping and parity. Daily snapshots get backed up to another machine on the network. Two external hard drives with mirrors of the backup rotate between my home and office weekly-ish.

    I can lose two hard drives from the array at the same time without suffering data loss. Any accidentally deleted files can be restored from a snapshot. If my house is hit by a meteor, I lose at most 3-4 days of data.
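
    The daily snapshot and replication boils down to roughly this (pool/dataset names and dates are examples):

        # take today's snapshot, then send it incrementally on top of yesterday's
        zfs snapshot tank/data@2024-06-02
        zfs send -i tank/data@2024-06-01 tank/data@2024-06-02 | ssh backup-host zfs recv tank/backups/data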
