
What is your Linux backup strategy?

Can you please share your backup strategies for Linux? I'm curious to know what tools you use and why. How do you automate/schedule backups? Which files/folders do you back up? What is your preferred hardware/cloud storage, and how do you manage storage space?

105 comments
  • I use rsync to incrementally back up / to a separate drive, as well as to a drive on another device (my server). The server then packs, compresses, and encrypts the latest backup of all devices daily and uploads it to Hetzner as well as Google Drive.
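
    Not their exact commands, but a minimal sketch of that kind of rsync routine, assuming /mnt/backup and the host "server" are placeholders:

    # Incremental copy of / to a locally attached backup drive, skipping pseudo-filesystems
    sudo rsync -aAXH --delete \
        --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run \
        --exclude=/mnt --exclude=/media --exclude=/lost+found \
        / /mnt/backup/

    # Same again to a drive on another machine over SSH (same excludes apply)
    sudo rsync -aAXH --delete / server:/srv/backups/thishost/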

  • I was talking with a techhead from the '80s about what he did when his tape drives failed, and about the folly of keeping data alive on a system that doesn't need to be. His foolproof backup scheme is as follows.

    1. At Christmas, buy a new hard drive. If Moore's law allows, it should be double the capacity of what you currently have.
    2. Put your current backup hard drive into a SATA drive slot. Copy the backup over onto the new hard drive.
    3. Write the date this was done on the new hard drive with a Sharpie. The new hard drive is now your current backup.
    4. Place the now-old backup into your drawer and forget about it.
    5. On New Year's Day, load each of the drives into a SATA drive slot and fix any filesystem issues (see the sketch after this list).
    6. Put them back into the drawer. Go to step 1.
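
    Step 5 in concrete terms, as a rough sketch only (assuming ext4; /dev/sdX and /dev/sdX1 are placeholders for whichever old drive is in the slot):

    lsblk -o NAME,SIZE,LABEL,FSTYPE   # find the drive you just slotted in
    sudo smartctl -H /dev/sdX         # quick SMART health verdict (needs smartmontools)
    sudo fsck -f /dev/sdX1            # check and repair the filesystem while it is unmounted
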
  • Synology NAS. I really love that thing. I use their Synology Drive software to back up the Linux home folder, as well as Windows PCs, iPads, iPhones, etc. I use their Photos mobile app to automatically back up phone photos and videos. I also synchronize a few select folders between PCs so certain in-use files are always up to date. I set the NAS to keep 30 old versions of every file. This works great for my college kids - dad has a copy of everything in case they nuke a paper or something (which has happened).

    I stopped cloning drives long ago. Now I just reinstall the OS and packages. With Linux, this is honestly faster than deploying a backup - a single pacman command installs everything I want. Then I just log into things as I open them. Yeah, I might have to futz around with some settings or redownload some big games on Steam - but the eye candy and games can wait - I can be productive pretty quickly after an install.

    I DO use btrfs with automatic snapshots (snapper and Btrfs Assistant). This saves me from myself when I bork an update (which I've done more than once). If I make a mistake, I just roll back a snapshot and try again without my stupid mistakes. This has saved my install 3 or 4 times now.
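
    For reference, a rough sketch of what that rollback looks like from the snapper command line (Btrfs Assistant wraps the same operations in a GUI; the snapshot number 42 is just a placeholder):

    sudo snapper -c root list                      # list snapshots of the root config
    sudo snapper -c root create -d "pre-upgrade"   # manual snapshot before risky changes
    sudo snapper rollback 42                       # roll root back to snapshot 42, then reboot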

    Lastly, I sneaker net an external hard drive to my office. On it is a manual backup of the NAS. I do this once per month. This protects from catastrophic failures like my house burning down. I might lose a month or so of pictures in the worst case scenario, but I still have my 25+ years of pictures of my kids, wedding videos, etc.

    In the end, the only thing that really matters is not losing my lifetime of family pictures and the good memories they provoke.

    • Synology NAS here also, divided into private (family stuff, Docker volumes, etc.) and public (Linux ISOs and anything that can be redownloaded). Both get backed up weekly to an older NAS with Hyper Backup. Private additionally goes onto a LUKS-encrypted drive monthly, which is spot-checked and taken offsite, while the previous offsite drive is brought back. I don't back up any PC (don't care, just reinstall) or phones (they are backed up to iCloud).
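
      For anyone curious, the LUKS leg of a routine like that might look roughly like this (device path, mapper name, and source path are placeholders):

      sudo cryptsetup open /dev/sdX1 offsite                           # prompts for the passphrase
      sudo mount /dev/mapper/offsite /mnt/offsite
      sudo rsync -a --delete /volume1/private/ /mnt/offsite/private/   # then spot-check a few files
      sudo umount /mnt/offsite
      sudo cryptsetup close offsite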

  • Scuse the cut and paste, but this is something I recently thought quite hard about and blogged, so stealing my own content:

    What to back up? This is a core question to ask when you start planning. I think it’s quite simply answered by asking the secondary question: “Can I get the data again?” Don’t back up stuff you downloaded from the public internet unless it’s particularly rare. No TV, no Movies, no software installers. Don’t hoard data you can replace. Do back up stuff you’ve personally created and that doesn’t exist elsewhere, or stuff that would cause you a lot of effort or upset if it wasn’t available. Letters you’ve written, pictures you’ve taken, code you authored, configurations and systems that took you a lot of time to set up and fine tune.

    If you want to be able to restore a full system, that’s something else and generally dealt best with imaging – I’m talking about individual file backups here!

    Backup Scenario

    Multiple household computers. Home Linux servers. Many services running natively and in Docker. A couple of Windows computers.

    Daily backups

    Once a day, automate backups of your important files.

    On my Linux machines, that's directories like /etc, /root, and /docker-data, plus some shared files.

    On my Windows machines, that's some mapping data, Word documents, pictures, geocaching files, generated backups, and so on.

    You work out the files and get an idea of how much space you need to set aside.

    Then, with automated methods, have these files copied or zipped up to a common directory on an always-available server. Let’s call that /backup.

    These should be versioned, so that older ones get expired automatically. You can do that with bash scripts or with automated backup software (I use backup-manager for local machines, and BackupPC or robocopy for the Windows ones).

    How many copies you keep depends on your preferences – 3 is a sound number, but choose what you want and what disk space you have. More than 1 is a good idea since you may not notice the next day if something is missing or broken.
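
    If you go the bash-script route rather than backup-manager or BackupPC, a minimal sketch of "archive to /backup and expire old copies" could look like this (the directories and the 3-copy retention are just the examples from above):

    #!/bin/bash
    # Daily: archive important directories into /backup/<host>/ and keep the newest 3 copies
    HOST=$(hostname -s)
    DEST="/backup/$HOST"
    mkdir -p "$DEST"
    tar -czf "$DEST/files-$(date +%F).tar.gz" /etc /root /docker-data

    # Expire: delete everything except the 3 most recent archives
    ls -1t "$DEST"/files-*.tar.gz | tail -n +4 | xargs -r rm --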

    Monthly Backups – Make them Offline if possible

    I puzzled a long time over the best way to do offline backups. For years I would manually copy the contents of /backup to large HDDs once a month. That took an hour or two for a few terabytes.

    Now, I attach an external USB hard drive to my server, with a smart power socket controlled by Home Assistant.

    This means it’s “cold storage”. The computer can’t access it unless the switch is turned on – something no ransomware knows about. But I can write a script that turns on the power, waits a minute for it to spin up, then mounts the drive and copies the data. When it’s finished, it’ll then unmount the drive and turn off the switch, and lastly, email me to say “Oi, change the drives, human”.

    Once I get that email, I open my safe (fireproof and in a different physical building) and take out the oldest of three USB caddies. I swap that with the one on the server and put that one away. Classic grandfather/father/son backups.

    Once a year, I relabel the oldest of those caddies as "Annual backup, 2024" and buy a new one. That way no monthly drive is ever older than three years, and I have a (probably still viable) backup for each year.

    BTW – I use USB3 HDD caddies (and do test for speed – they vary hugely) because I keep a fair bit of data. But you can also use one of the large capacity USB Thumbdrives or MicroSD cards for this. It doesn’t really matter how slowly it writes, since you’ll be asleep when it’s backing up. But you do really want it to be reasonably fast to read data from, and also large enough for your data – the above system gets considerably less simple if you need multiple disks.

    Error Check: Of course with automated systems, you need additional automated systems to ensure they’re working! When you complete a backup, touch a file to give you a timestamp of when it was done – online and offline. I find using “tree” to catalogue the files is worthwhile too, so you know what’s on there.
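
    In script form, that check is just a couple of lines tacked onto the end of each backup job (paths are examples):

    date --iso-8601=seconds > /mnt/offline/BACKUP_COMPLETED   # when this backup finished
    tree -a /mnt/offline > /mnt/offline/CONTENTS.txt          # catalogue of what's on the drive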

    Lastly – test your backups. Once or twice a year, pick a backup at random and ensure you can copy and unpack the files. Ensure they are what you expect and free from errors.

  • etckeeper, and borg/vorta for /home

    I try to be good about everything being installed via packages, even if I'm the one who made the package. That means I only have to worry about backing up my local package archive. But I've never actually recreated a personal system from a backup; I usually end up starting from a fresh install, slowly adding things back from the backup if I missed them. This tends to cut down on cruft and no-longer-needed hacks and fixes. It also makes for a good way to be exposed to new paradigms (desktop environments, shells, etc.).

    Something that helps is daily notes: one file for any day I'm working on my system and want to remember what a custom file, config edit, or downloaded/created package does and why. These get saved separately, and I try to remember to grep them before asking the internet.

    I see the benefit of snapshots, but disk space is expensive, and I'm (usually) careful (enough) not to lock myself out or prevent boots. Anything catastrophic I have to fix is usually seen as a fun, stressful learning experience! That rarely happens anymore, for better or for worse.

  • Example of a Bash script that performs the following tasks:

    1. Checks the availability of an important web server.
    2. Checks disk space usage.
    3. Makes a backup of the specified directories.
    4. Sends a report to the administrator's email.

    Example script:

    #!/bin/bash

    # Settings
    WEB_SERVER="https://example.com"
    BACKUP_DIR="/backup"
    TARGET_DIRS="/var/www /etc"
    DISK_USAGE_THRESHOLD=90
    ADMIN_EMAIL="admin@example.com"
    DATE=$(date +"%Y-%m-%d")
    BACKUP_FILE="$BACKUP_DIR/backup-$DATE.tar.gz"

    # Checking web server availability
    echo "Checking web server availability..."
    HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" --head "$WEB_SERVER")
    if [ "$HTTP_STATUS" = "200" ]; then
        echo "Web server is available."
    else
        echo "Warning: Web server is unavailable (HTTP status $HTTP_STATUS)!" | mail -s "Problem with web server" "$ADMIN_EMAIL"
    fi

    # Checking disk space
    echo "Checking disk space..."
    DISK_USAGE=$(df / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$DISK_USAGE" -gt "$DISK_USAGE_THRESHOLD" ]; then
        echo "Warning: Disk space usage exceeded $DISK_USAGE_THRESHOLD%!" | mail -s "Problem with disk space" "$ADMIN_EMAIL"
    else
        echo "There is enough disk space."
    fi

    # Creating backup (TARGET_DIRS is intentionally unquoted so the paths split)
    echo "Creating backup..."
    if tar -czf "$BACKUP_FILE" $TARGET_DIRS; then
        echo "Backup created successfully: $BACKUP_FILE"
    else
        echo "Error creating backup!" | mail -s "Error creating backup" "$ADMIN_EMAIL"
    fi

    # Sending report
    echo "Sending report to $ADMIN_EMAIL..."
    REPORT="Report for $DATE\n\n"
    REPORT+="Web server status: HTTP $HTTP_STATUS\n"
    REPORT+="Disk space usage: $DISK_USAGE%\n"
    REPORT+="Backup location: $BACKUP_FILE\n"

    echo -e "$REPORT" | mail -s "Daily system report" "$ADMIN_EMAIL"

    echo "Done."


    Description:

    1. Check web server: uses the curl command to check whether the site is available (expects HTTP 200).
    2. Check disk space: uses df and awk to check disk usage. If the threshold (90%) is exceeded, a notification is sent.
    3. Create a backup: The tar command archives and compresses the directories specified in the TARGET_DIRS variable.
    4. Send a report: A report on all operations is sent to the administrator's email using mail.

    How to use:

    1. Set the desired parameters, such as the web server address, directories for backup, disk usage threshold, and email address.
    2. Make the script executable:

    chmod +x /path/to/your/script.sh

    3. Add the script to cron to run on a regular basis:

    crontab -e

    Example to run every day at 00:00:

    0 0 * * * /path/to/your/script.sh

  • My laptop has a microSD card reader where the card sits almost flush when inserted, so I just keep a microSD card in there and have Timeshift back up to it. It's partitioned with full-disk encryption so it can't just be stolen and scanned.

  • My desktop, laptop, and homelab all sync my important stuff over Syncthing. They all keep btrfs snapshots going three months back in case an oopsie propagates.

    The homelab additionally fetches deduplicated snapshots of my VPS weekly, before syncing all of the above to encrypted Hetzner storage for those burning-down-the-house events.

  • I use Duplicity to back up my home directory, excluding the Steam and Downloads folders. It is set up to back up weekly to my NAS, mounted over NFS. The NAS has a weekly cron task to upload the backups to pCloud using rclone. I back up several computers this way (2 desktops, 2 laptops, and the NAS as well). The files included in this strategy are essentially my photos, documents, and configs. My software installations, games, and media library are not backed up.
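
    Roughly, that pipeline boils down to something like this (mount point, remote name, and excludes are placeholders; set PASSPHRASE or pass --no-encryption depending on whether you want the archives GPG-encrypted):

    # Weekly, on each computer: incremental Duplicity backup of $HOME to the NFS-mounted NAS
    duplicity --exclude "$HOME/Downloads" --exclude "$HOME/.local/share/Steam" \
        "$HOME" file:///mnt/nas/backups/$(hostname)

    # Weekly, on the NAS (cron): push the backup sets to pCloud with rclone
    rclone sync /volume1/backups pcloud:backups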

  • Dotfiles on GitHub, an HDD for storing photos, downloads, and documents, as well as my not-in-use games. I also sync KeePass files across all networked devices.

  • 321

    Kopia backup to secondary HDD

    • Pictures (phone photos backed up to my server via Immich)
    • workspace (git repos, ECAD, MCAD, firmware, etc...)
    • qmk layout
    • Documents
    • vim folder with bundles
    • ebooks

    KDE Vaults stored on the secondary HDD

    Soon I will set up Kopia to also back up everything via SSH to my server, and then small-size essentials and important docs via Google Drive.

    I need to set up server cloud backups too, but haven't had the time...
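
    For reference, the Kopia CLI version of "local repo on the secondary HDD plus a second repo reached over SSH" looks roughly like this (paths, host, and username are placeholders):

    kopia repository create filesystem --path /mnt/hdd2/kopia   # one-time init of the local repo
    kopia snapshot create ~/Pictures ~/workspace ~/Documents    # run this from a timer or cron

    # Later: a second repository on the server, reached over SFTP (SSH key options omitted here)
    kopia repository create sftp --host myserver --username me --path /srv/kopia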

  • Keep everything on Nextcloud and back that up via Proxmox Backup Server.

    Nuking and paving, then reconfiguring Plasma and installing the NC client, takes me less time than bothering to back anything up directly.

  • Firstly, for my dotfiles, I use home-manager. I keep the config on my git server and in theory I can pull it down and set up a system the way I like it.

    In terms of backups, I use Pika to back up my home directory to my hard disk every day, so I can, in theory, pull back files I delete.

    I also push a core selection of my files to my server using Pika, just in case my house burns down. Likewise, I pull backups from my server to my desktop (again with Pika) in case Linode starts messing me about.

    I also have a 2 TiB SSD I keep in a strongbox, and some cloud storage that I push bigger things to sporadically.

    I also take occasional data exports from online services I use. Because hey, Google or Discord can ban you at any time for no reason. :P

  • When I researched this previously I concluded that there are two very good options for regular backups: Borg and Restic. These are especially efficient at backing up a diff of what has changed since the last backup. So you get snapshots of your filesystem state at each backup point without using a huge amount of space. You can mount any snapshot as a virtual directory. After the initial backup, incremental backups take a minute or two.

    I use Borg, and I back up to cloud storage on Borgbase. I use Vorta as a GUI for Borg. I have Vorta start automatically when I start my window manager, and I have it set up for daily backups. I set up the same thing on my kid's computer.

    I back up my home directory. I have some excluded directories like ~/.cache, and Steam's data directory. I use Baobab to find large directories that I don't want backed up.

    I use the "exclude caches" option in the Borg "create archive" settings. That automatically excludes Rust target/ directories because they follow the Cache Directory Tagging Specification. Not all programming languages' tooling follows that spec so I also use directory name pattern excludes. For example I have an exclude pattern for .*/node_modules/.*
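
    On the command line (Vorta drives the same options), that combination looks roughly like the following; the BorgBase repo URL is a placeholder:

    borg create --exclude-caches \
        --exclude "$HOME/.cache" \
        --exclude "$HOME/.local/share/Steam" \
        --exclude 're:.*/node_modules/.*' \
        ssh://xxxx@xxxx.repo.borgbase.com/./repo::'{hostname}-{now}' "$HOME"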

    I use NixOS, and I keep my system config in a git repo so I don't need backups for anything outside my home directory.

  • I only really make backups once in a while. I keep the configuration files of my systems on my GitHub and Codeberg. The rest I don't need; the only things I keep are books and music I download from the internet, which live on a 1 TB external hard drive.

    When I have made a backup for a specific reason, I've done it with rsync. It's a command-line tool that works quite well.

  • I leverage btrfs or ZFS snapshots. I take rolling system-level snapshots on a schedule (daily, weekly, monthly, and separately before any package upgrades or installs) and user-data snapshots every couple of hours. Then I use btrbk to sync those snapshots to an external drive at least once a week. When I have all of my networking gear and home services set up, I also sync all of this to storage on my NAS. Any hosts on the network keep rolling snapshots stored on the NAS as well.

    Important data also gets shoveled into a B2 bucket and/or Google drive if I need to be able to access it from a phone.

    I keep snapshots small by splitting data up into well defined subvolumes, anything that can be reacquired from the cloud (downloads, package caches, steam libraries, movies, music, etc) isn't included in the backup strategy. If I download something and it's hard to find or important I move it out of downloads and into a location that is covered by my backups.
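
    Underneath, each rolling snapshot is just a read-only btrfs snapshot, and btrbk replicates them with btrfs send/receive; a hand-rolled sketch of the same idea, with placeholder subvolume names and mount points:

    # On a timer: read-only snapshot of the user-data subvolume
    btrfs subvolume snapshot -r /mnt/pool/home "/mnt/pool/snapshots/home.$(date +%Y%m%dT%H%M)"

    # Weekly: replicate a snapshot to the external drive (full send shown; btrbk does this incrementally)
    btrfs send /mnt/pool/snapshots/home.20240601T1200 | btrfs receive /mnt/external/snapshots/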

  • restic to a local server and to cloud storage. It varies by device, but usually it's just everything in /home/. The rest of the operating system should be reproducible, whether through images, Ansible, Nix, or Guix, given the information in /home/.

    Scheduling is done through systemd, usually (or the non-systemd equivalent). I use Backblaze now, but I switch around occasionally. restic has policy-based snapshot removal and a prune option.
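
    A minimal sketch of that shape, with placeholder repo locations and retention numbers (the B2 credentials would live in the systemd unit's Environment= lines):

    # Run daily from a systemd service + timer (ExecStart pointing at a script like this)
    restic -r sftp:backupserver:/srv/restic backup "$HOME" --exclude "$HOME/.cache"
    restic -r sftp:backupserver:/srv/restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

    # Same again against the cloud repository
    restic -r b2:my-bucket:/ backup "$HOME" --exclude "$HOME/.cache"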
