Hi!
I used to have three RAID1 arrays:
2 x 4 TB SSD dedicated to personal data
2 x 6 TB HDD dedicated to “ISOs”, the eye-patched ones
2 x 4 TB SSD for backup
Ext4 everywhere.
I ran this setup for years, maybe even 20 (with many different disks and sizes over time).
I decided it was time to be more efficient.
I removed the two HDDs, saving quite a lot of power, and switched the four SSDs to RAID5, then put btrfs on top of that. Please note, I am not using the btrfs RAID feature, but Linux mdadm software RAID (which has been rock solid for me for years) with btrfs on top, as if it were a single drive.
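In case it helps to picture the layering, here is a rough sketch of how that kind of setup comes together (wrapped in Python purely for illustration; the device names and mount point are placeholders, not my actual ones):

    import subprocess

    # Placeholder device names; substitute your own four SSDs.
    ssds = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]

    # One mdadm RAID5 array across the four SSDs.
    subprocess.run(
        ["mdadm", "--create", "/dev/md0", "--level=5",
         f"--raid-devices={len(ssds)}", *ssds],
        check=True,
    )

    # btrfs goes on the md device as if it were a single drive;
    # no btrfs-level RAID is involved.
    subprocess.run(["mkfs.btrfs", "-L", "data", "/dev/md0"], check=True)
    subprocess.run(["mount", "/dev/md0", "/mnt/data"], check=True)

From there btrfs only ever sees /dev/md0 as one big device, and mdadm handles the redundancy underneath.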
I chose MD not only because of my very positive past experience, but especially because I love how easy it is to recover and restore from many kinds of failure.
I chose not to try ZFS because I don’t feel comfortable using out-of-kernel drivers, and I dislike how RAM-hungry ZFS seems to be.
What do you guys think?
Silent corruption has almost certainly happened to you; you are simply not aware of it, because filesystems like ext4 are completely oblivious to it, and larger video formats, for example, are relatively robust to small corruptions.
And no, this doesn’t only happen due to random bit flips. There are many reasons for files becoming corrupted, and it often happens on older drives nearing the end of their lifespan; good handling of such errors can extend the safe use of older drives significantly. It can also help mitigate the risks of non-ECC memory to some extent.
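To make that concrete, this is the principle a checksumming filesystem applies per block: store a checksum alongside the data and re-verify it later, which ext4 never does for file contents. A rough standalone sketch of the idea (the paths and manifest name are just placeholders, and btrfs does this transparently on every read, so this is not a substitute):

    import hashlib
    import json
    from pathlib import Path

    def hash_file(path: Path) -> str:
        # Stream the file so large "ISOs" don't need to fit in RAM.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(root: Path, manifest: Path) -> None:
        # Record a checksum for every file under root.
        sums = {str(p): hash_file(p) for p in root.rglob("*") if p.is_file()}
        manifest.write_text(json.dumps(sums, indent=2))

    def verify_manifest(manifest: Path) -> list[str]:
        # Re-hash everything and report files whose contents changed
        # or disappeared since the manifest was written.
        sums = json.loads(manifest.read_text())
        return [p for p, digest in sums.items()
                if not Path(p).is_file() or hash_file(Path(p)) != digest]

    if __name__ == "__main__":
        build_manifest(Path("/mnt/data"), Path("checksums.json"))
        # ...run the check again months later on the same data:
        print(verify_manifest(Path("checksums.json")))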
Edit: And my comment regarding mdadm RAID5 was about it requiring equal-sized drives and not being able to shrink or grow the size or number of drives on the fly, as is possible with btrfs RAID.
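For example, with a mounted btrfs RAID filesystem you can add or remove devices of different sizes while it stays online, roughly like this (again Python just for illustration; device names and mount point are placeholders):

    import subprocess

    MNT = "/mnt/pool"                    # placeholder mount point of a btrfs RAID filesystem
    NEW, OLD = "/dev/sde", "/dev/sdb"    # placeholder devices

    # Grow the pool with a new (possibly differently sized) drive,
    # then rebalance so data is spread across all members.
    subprocess.run(["btrfs", "device", "add", NEW, MNT], check=True)
    subprocess.run(["btrfs", "balance", "start", MNT], check=True)

    # Shrink the pool: btrfs migrates data off the drive before removing it.
    subprocess.run(["btrfs", "device", "remove", OLD, MNT], check=True)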