
How old are the disks in your NAS?

I have a ZFS RAIDZ2 array made of 6x 2TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours considering they're a right old mix of desktop drives and old WD Reds. I keep them on 24/7 so they're not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.

Considering swapping to 2x 'refurbed' 12TB enterprise drives and running ZFS RAIDZ1. So even though they'd have a decent amount of hours on them, they'd be better quality drives, and fewer disks means less chance of any one failing (I have good backups).

The next time one of my current drives dies, I don't think staying with my current setup will feel worth it, so I may as well change over now before it happens?

Also the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see into the solid case in a rack in the garage), it'll be nicer.
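
If I do switch, the new pool would be something like this (pool and device names are placeholders, and with only two disks a plain mirror gives the same one-disk redundancy and roughly the same usable space as RAIDZ1):

    sudo zpool create -o ashift=12 tank mirror \
      /dev/disk/by-id/ata-EXAMPLE_12TB_A \
      /dev/disk/by-id/ata-EXAMPLE_12TB_B
    sudo zpool status tank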

47 comments
  • Don't fill a copy-on-write fs more than about 80%, it really slows down and struggles because new data is written to a new place before the old stuff is returned to the pool. Just sayin'.

    I wouldn't worry if you're backed up. The SMART values and daemon will tell you if one is about to die.
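
    To keep an eye on both: the pool fill level is just

      $ zpool list -o name,size,allocated,free,capacity,health

    and a minimal /etc/smartd.conf line so the daemon actually mails you might look like this (the self-test schedule is the stock example from the man page; the address is a placeholder):

      DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com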

  • $ for i in /dev/disk/by-id/ata-WD*; do sudo smartctl --all $i | grep Power_On_Hours; done
      9 Power_On_Hours          0x0032   030   030   000    Old_age   Always       -       51534
      9 Power_On_Hours          0x0032   033   033   000    Old_age   Always       -       49499

    • Once a year or so, I re-learn how to interpret SMART values, which I find frustratingly obtuse. Then I promptly forget.

      So one's almost 6 y/o and the other is about 5½?

      • Seagate "raw read error rate" is a terrifyingly big number if everything is hunky dory.

      • One has a total powered-on time of 51534 hours, and the other 49499 hours.
        As for their actual age (manufacturing date), the only way to know is to look at the sticker on the drive or find the invoice; I can't tell you right now.
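
        If it helps, the hours-to-years conversion can be scripted too; a rough one-liner in the same style as above (assumes ~8,766 powered-on hours per year):

          $ for i in /dev/disk/by-id/ata-WD*; do sudo smartctl -A "$i" | awk -v d="$i" '/Power_On_Hours/ {printf "%s: %.1f years\n", d, $10/8766}'; done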

  • I'm glad you asked because I've sort of been meaning to look into that.

    I have 4 8TB drives that have ~64,000 hours (7.3 years) powered on.
    I have 2 10TB drives that have ~51,000 hours (5.8 years) powered on.
    I have 2 8TB drives that have ~16,800 hours (1.9 years) powered on.

    Those 8 drives make up my ZFS pool. Eventually I want to ditch them all and create a new pool with fewer drives. I'm finding that 45TB is overkill, even when storing lots of media. The most data I've had is 20TB and it was a bit overwhelming to keep track of it all, even with the *arrs doing the work.

    To rebuild it with 4 x 16TB drives, I'd have half as many drives, reducing power consumption. It'd cost about $1300. With double parity I'd have 27TB usable. That's the downside to larger drives: double parity costs more.

    To rebuild it with 2 x 24TB drives, I'd have 1/4 as many drives, reducing power consumption even more. It'd cost about $960. I would only have single parity with that setup, and only 21TB usable.

    Increasing to 3 x 24TB drives, the cost goes to $1437 with the only benefit being double parity. Increasing to 4 x 24TB gives double parity, 41TB, and costs almost $2k. That would be overkill.

    Eventually I'll have to decide which road to go down. I think I'd be comfortable with single parity, so 2 very large drives might be my next move, since my price per kWh is really high, around $0.33.

    Edit: one last option, and a really good one, is to keep the 10TB drives, ditch all of the 8TB drives, and add 2 more 10TB drives. That would only cost $400 and leave me with 4 x 10TB drives. Double parity would give me 17TB. I'll have to keep an eye on things to make sure it doesn't get full of junk, but I have a pretty good handle on that sort of thing now.
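
    For anyone sanity-checking those capacities: roughly, usable = (drives minus parity) x drive size, then about 9% off for TB-to-TiB plus a bit more for ZFS metadata/slop, which lines up with the 27/21/41/17TB figures above. A quick sketch of that math:

      # layouts: "drives size_in_TB parity"
      for layout in "4 16 2" "2 24 1" "3 24 2" "4 24 2" "4 10 2"; do
        set -- $layout
        awk -v n="$1" -v s="$2" -v p="$3" 'BEGIN {
          raw = (n - p) * s
          printf "%d x %dTB, parity %d: %d TB raw usable (~%.0f TiB)\n", n, s, p, raw, raw * 1e12 / 2^40
        }'
      done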

  • I recently decommissioned my old poweredge T620. Beast of a thing, 5U heavy af. It had 8x10T drives and was the primary media server.

    Now that it is replaced I bought 2x Synology RS822+ and filled them with the old disks. Using SHR2. They are mixed brands bought at different times so I've made sure each NAS has a mix of disks.

    Lowest is 33k hours, highest is 83k.

  • I just got done swapping all my drives out. I had 6x8tb drives in raidz2. About 8 months ago I had some sort of read errors on one drive with about 33k hours on it. I started swapping my drives out with 20tb drives one at a time, and just finished last week. So now I have 6x20tb drives with between 200 and 6k hours on them. The most hours on any of my older drives was about 40k, but other than a couple minor errors on the one drive, I'd had no issues with any of them. I've held onto all of the old drives, and was planning on setting up a second nas with 4x8tb drives in raidz1 to use as a backup server.

    This was my second time replacing all my drives. My NAS is a bit like the ship of Theseus at this point, as it's gone through many upgrades over the years. Started out with 6x3tb drives, and after about 4 years swapped the drives with 8tb units. About 5 years later (where we are now) it's now 20tb drives. I've also swapped the chassis, mobo, CPU, and everything else out multiple times, etc.

    My original setup was a mixture of desktop and NAS drives, but I've since been running all NAS/enterprise drives. Based on my personal experience, it seems like I'll replace drives every 4-5 years, regardless of actual failures... Both times I started the drive swaps there were read/write errors or sector failures on a drive in the pool. However, at around the same time I needed more space, so it was a convenient enough excuse to upgrade drive size.

    As far as your concern about cramming drives into the chassis, it's always worth considering swapping chassis, but that's up to you. I think 6 drives in Z2 is a pretty happy compromise between number of drives and reliability. Thankfully your storage capacity is low enough that you can pretty easily transfer everything off of that NAS to some interim storage location while you make whatever changes you want to.

    Part of the reason I want to repurpose my old drives into another server is so I can have enough backup storage for critical files, etc should I need to start over with my main Nas.
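
    For anyone who hasn't done the one-at-a-time swap before, it boils down to a zpool replace per drive, waiting for each resilver to finish before pulling the next old disk (pool and device names here are placeholders):

      sudo zpool set autoexpand=on tank
      sudo zpool replace tank ata-OLD_8TB_1 /dev/disk/by-id/ata-NEW_20TB_1
      zpool status tank   # wait until the resilver completes, then repeat for the next drive
      # once the last drive is replaced, the pool expands to the new size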

    • 2x18k - mirrored ZFS pool.
    • 1x47k - 2.5" drive from an old laptop used for torrents, temp data, non-critical pod volumes, application logs etc.
    • 1x32k - automated backups from ZFS pool. It's kinda partial mirror of the main pool.
    • 1x18k - (NVME) OS drive, cache volumes for pods.

    Instead of a single pool, I simply split my drives into tiers (cache, storage, and trash) because of the limited drive count. Most R/W goes to the cheap trash and cache disks instead of the relatively new and expensive NAS drives.
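
    If anyone wants to copy that kind of tiering, the shape of it in zpool terms is something like this (names are made up, and the partial-mirror backup is sketched here as a plain snapshot send of the datasets you care about):

      zpool create storage mirror /dev/disk/by-id/ata-NAS_A /dev/disk/by-id/ata-NAS_B   # the 2x18k pair
      zpool create trash /dev/disk/by-id/ata-OLD_LAPTOP_DISK                            # torrents, temp data, logs
      zpool create backup /dev/disk/by-id/ata-BACKUP_DISK                               # the 1x32k drive
      snap="storage/photos@$(date +%F)"
      zfs snapshot -r "$snap"
      zfs send -R "$snap" | zfs recv -F backup/photos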
