Hey everyone. I’m trying to decide which RAID level to choose for my 6-bay NAS. I have 4 x 16TB HDDs, 1 x 8TB HDD, and a 500GB SSD that I’ll use as the Docker folder for my containers. I’ll be using the NAS to store media files (movies, TV series, photos, music, etc.) as well as documents. Currently I have two of the 16TB drives in RAID 1, holding only the media files, and I’m torn between creating a second RAID 1 with the remaining two 16TB drives or adding them to the first two to create a RAID 5 and get a bigger storage pool. Have you ever had an incident where two HDDs were lost or damaged simultaneously (since RAID 5 tolerates the loss of only one drive)?
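
For reference, here’s the rough usable-capacity math for the four 16TB drives (a quick sketch that ignores filesystem overhead and TB vs. TiB rounding; RAID 10 and RAID 6 are included only for comparison):

```python
# Usable capacity of 4 x 16TB drives under a few common layouts.
# Rough figures only; ignores filesystem/metadata overhead and formatting losses.

drives = 4
size_tb = 16

layouts = {
    "two RAID 1 pairs": (drives // 2) * size_tb,   # each pair stores one copy -> 32TB
    "RAID 10":          (drives // 2) * size_tb,   # striped mirrors, same usable space -> 32TB
    "RAID 5":           (drives - 1) * size_tb,    # one drive's worth of parity -> 48TB
    "RAID 6":           (drives - 2) * size_tb,    # two drives' worth of parity -> 32TB
}

for name, usable in layouts.items():
    print(f"{name:>18}: {usable}TB usable")
```

So RAID 5 would buy an extra 16TB over the mirrored options, in exchange for tolerating only a single drive failure.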

In addition, I was thinking of keeping the 8TB HDD as a standalone drive to back up the documents, and maybe also the photos and the Docker setups.

Does this make sense to anyone who uses a similar setup?

Thanks for your input!

  • sugar_in_your_tea@sh.itjust.works · 14 hours ago

    I use BTRFS w/ RAID 1 (mirror) with two drives (both 8TB), because that’s all I’ve needed so far. If I had four, I’d probably do two separate RAID 1 pairs and combine them into a logical volume, instead of the typical RAID 10 setup where blocks are striped across mirrored sets.

    RAID 5 makes sense if you really want the extra capacity and are willing to take on a little more risk of cascading failure when resilvering a new drive.
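
    To make that concrete, here’s a quick sketch (drives labelled 0-3, with mirror pairs assumed to be (0,1) and (2,3) for the RAID 10 / two-pair layouts) of which simultaneous two-drive failures each layout survives:

    ```python
    from itertools import combinations

    # Drives 0-3. Assume mirror pairs (0,1) and (2,3); this covers both
    # RAID 10 and "two RAID 1 pairs in one logical volume", since both
    # lose data only when an entire mirror pair is gone.
    MIRRORS = [(0, 1), (2, 3)]

    def mirrored_survives(failed):
        # Survives as long as no mirror loses both of its members.
        return all(not set(pair) <= set(failed) for pair in MIRRORS)

    def raid5_survives(failed):
        # Single parity: any two simultaneous failures lose the array.
        return len(failed) < 2

    for failed in combinations(range(4), 2):
        print(failed,
              "| mirrored:", "ok" if mirrored_survives(failed) else "LOST",
              "| RAID 5:", "ok" if raid5_survives(failed) else "LOST")
    ```

    Of the six possible two-drive failures, the mirrored layouts survive four (only losing both halves of the same pair is fatal), while RAID 5 survives none.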

    ZFS is also a great choice, I just went w/ BTRFS because it’s natively supported by my OS (openSUSE Leap) with snapshots and rollbacks. I technically only need that for my root FS (SSD), but I figured I might as well use the same filesystem for the RAID array as well.

    Here’s what I’d do:

    1. 4x 16TB HDDs either in a RAID 10 or two RAID 1 pairs in one logical volume - usable space is 32TB
    2. 500GB SSD -> boot drive and maybe disk cache
    3. 8TB HDD - load w/ critical data and store at work as an off-site backup, and do this a few times/year; the 4x HDDs are for bulk, recoverable data

    That said, RAID 5 is a great option as well, as long as you’re comfortable with the (relatively unlikely) risk of losing the whole array. If you have decent backups, having an extra 16TB could be worth the risk.
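
    To put a very rough number on that rebuild risk, here’s a back-of-the-envelope sketch. It uses the manufacturer’s unrecoverable read error (URE) spec, which is a worst-case figure that healthy, regularly scrubbed drives usually beat by a wide margin, and it assumes a rebuild has to read every bit of the three surviving drives:

    ```python
    import math

    # Back-of-the-envelope: chance of hitting at least one unrecoverable
    # read error (URE) while rebuilding a degraded 4 x 16TB RAID 5.
    # Treats the vendor URE spec as an independent per-bit error rate,
    # which is a pessimistic simplification, not a real-world prediction.

    drive_tb = 16
    surviving_drives = 3
    bits_read = surviving_drives * drive_tb * 1e12 * 8   # ~3.84e14 bits

    for ure_rate in (1e-14, 1e-15):   # desktop-class vs NAS/enterprise-class spec
        expected_errors = bits_read * ure_rate
        p_at_least_one = 1 - math.exp(-expected_errors)
        print(f"URE spec 1 per {1/ure_rate:.0e} bits: "
              f"P(at least one error during rebuild) ~ {p_at_least_one:.0%}")
    ```

    Depending on the implementation, a URE mid-rebuild can mean anything from one corrupt file to a failed rebuild, so treat these percentages as an argument for good backups rather than a prediction that the array will die.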

    • sugar_in_your_tea@sh.itjust.works · 14 hours ago (edited)

      That video is about hardware RAID. Software RAID is still alive and well (e.g. mdadm).

      I personally use BTRFS w/ RAID 1, and if I had OP’s setup, I’d probably do RAID 10. Just don’t use RAID 5/6 w/ BTRFS.

      ZFS isn’t the only sane option.

  • billwashere@lemmy.world · 20 hours ago

    Just remember: if the data is important, RAID is not backup. Keep the media in another place as well. I have something like 40-50 1TB drives I’ve copied stuff to in case everything goes belly up. Restoring won’t be fun, but it will be possible.

    If it were me personally, I’d get two more 16TB drives and RAID 6 the whole thing. But that’s only because you’ve got the NAS already. If I just had the drives, I’d set up a JBOD and use either TrueNAS, Unraid, or maybe OpenMediaVault.

  • 𝕸𝖔𝖘𝖘@infosec.pub · 1 day ago

    I use RAID 5 on an HP server, and am dealing with 2 drive failures, because HP’s QC is garbage. RAID 6 would have saved me here, but it’s not really likely to have 2 drives fail simultaneously, so… yeah.

    I recommend looking into 3-2-1 backup, too. RAID isn’t a backup solution, but it seems you already understand that and are working toward it.

  • CarbonatedPastaSauce@lemmy.world · 2 days ago

    In the last 25 years, working with approximately 700 servers that used RAID 5, I saw two of them lose an entire volume. One was due to a malfunctioning HP RAID controller, and the other was due to a second disk dying while the rebuild from the first failure was still ongoing. There turned out to be a systemic problem with that drive model’s firmware, which almost certainly contributed.

    So in my experience it’s rare but it definitely does happen.

    It can get worse. About 20 years ago, the company I was at had an EMC tech yank the wrong power supply from a Symmetrix rack whose other supply had caught fire earlier that day! We lost that entire rack’s data (customers’ personal email accounts) to corruption. There were probably around 300 10k SCSI disks in that rack, a multimillion-dollar expense at the time, and we had to restore all of it from tape over many, many days. Really, really sucked.