Channel: Raspberry Pi Forums

Beginners • Re: These nvme base hats look interesting but do not have access to them, perhaps someone here can answer a question I h

I have used software RAID (mdadm) on both my ancient HP servers since day one (albeit RAID6). The performance there is acceptable. There was a lengthy thread on mdadm but I can't find it. It's rather moot though, because you kind of have to have what industry calls "a business case" to implement such a thing. By all means experiment, but expect to lose your data. Indeed you *have* to, else you've not designed it properly - because unless you've broken it in all manner of ways, you'll never fix it.

My case was this:
a) I didn't want to purchase spare hardware(*) raid cards.
b) The server bottleneck is gigabit Ethernet.
c) Downtime needed to be minimal.

Even so, I still had to compromise on (c), as it turned out the servers did not support hot-swappable disks. (They did, in fact, but that's a digression we don't need here.)

You have to tune mdadm to your requirements. Sure, it'll work out of the box, but badly for most folk. The tl;dr is that everything seems fine until folk reboot, whereupon the array slows to a crawl. I don't know how this manifests on an RPi, but I'm willing to bet it looks like the system has hung on boot. Rather than wait for what is likely a full resync, they'll reboot, with a good chance "fsck" will be hammering the array now. Another restart stands a good chance of killing it. RAID is for systems which are never turned off, never suffer power outages and never break.
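A minimal sketch of what's going on under the hood there: the kernel throttles background resync speed with two sysctls, and /proc/mdstat shows whether a resync is what's eating the disks after a reboot. Values shown are the usual kernel defaults; device names are examples only.

```shell
# Is a resync in progress? Look for "resync" or "recovery" with a progress bar.
cat /proc/mdstat

# Per-device resync throttles, in KB/s (typical defaults: min 1000, max 200000).
# A low floor keeps the system responsive but can stretch a resync out for days.
sysctl dev.raid.speed_limit_min
sysctl dev.raid.speed_limit_max

# If you'd rather the resync finish quickly and can tolerate the I/O load,
# raise the floor temporarily (reverts on reboot unless persisted):
sudo sysctl -w dev.raid.speed_limit_min=50000
```

Which way you tune that trade-off is exactly the "to your requirements" part: a headless server can afford a fast, aggressive resync; an interactive box usually can't.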

(*) If you've ever seen IBM floundering around trying to fix one of their SAN which they themselves broke you'll not be a fan of hardware raid if software raid will suffice. It wasn't even a hardware fault, merely an unwanted firmware update.

I used to use mdadm RAID1 on my desktop PC. Not since solid state drives. On a large SAN, even with spinning rust, you had to cater for disks killing the array when one failed. The naive would slap a new disk into their SAN, slap themselves on the back for planning the resync during quiet time and leave for the weekend, only to be called in because the SAN had destroyed itself. You really need to be replacing working disks periodically(**) because if they're all the same age there's a good chance more will fail during the resync.
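One way to spot that same-age risk before a resync, assuming smartmontools is installed (device names are examples, and attribute names vary by vendor):

```shell
# SATA drives: power-on hours is a standard SMART attribute.
sudo smartctl -A /dev/sda | grep -i power_on

# NVMe drives: smartctl reports "Power On Hours" in the health log.
sudo smartctl -A /dev/nvme0 | grep -i "power on"
```

If every member of the array reports roughly the same hours, treat a rebuild as the risky window it is, and stagger replacements rather than waiting for failures.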

(**) These days, at least two SANs replicating each other with failover.

Chances of solid state drives failing at the same time are so much greater. It's this you need to bear in mind for home use. I'd create two partitions on each NVMe drive, then create RAID1 on one NVMe drive using both of its partitions, with the other NVMe drive's pair of partitions set as hot spares. Run it like that for at least three months, then force-fail one of the active partitions. I'd probably keep 20% of each drive unformatted too (another debate).
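The layout above can be sketched with mdadm like this - a hedged example, not a recipe: the device names (/dev/nvme0n1, /dev/nvme1n1, partitions p1/p2) are assumptions, and partitioning plus the ~20% unallocated tail are done beforehand with your preferred partitioner.

```shell
# RAID1 across the two partitions of the first drive, with the second
# drive's two partitions registered as hot spares:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=2 \
    /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme1n1p1 /dev/nvme1n1p2

# After a few months of normal use, force-fail one active member and
# watch a spare take over and rebuild:
sudo mdadm /dev/md0 --fail /dev/nvme0n1p2
watch cat /proc/mdstat

# Remove the failed member once the rebuild completes:
sudo mdadm /dev/md0 --remove /dev/nvme0n1p2
```

Note this is the author's deliberate break-it-on-purpose exercise: the point is to have practised the failure and recovery before it happens for real.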

Statistics: Posted by swampdog — Sat Sep 28, 2024 6:28 am


