
Advanced users • Re: RAID 1 (mirror) with two NVMe

Did I miss something? Wouldn't the md module be sitting within the initramfs (ie: why the old 0.90 metadata version requirement)?
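
A quick way to check, assuming a Debian-style initramfs as on Raspberry Pi OS - the image path is an assumption, adjust to suit your install:

Code:

# List the initramfs contents and look for the raid modules
# (raid1.ko, md-mod.ko).
lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'raid1|md-mod'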

This can be tested by anyone with a single nvme drive (I don't have one), unless it's nvme specific: just bring up the raid1 array "degraded". For those not familiar with mdadm: split the nvme drive into multiple partitions and put both raid1 partitions on the same nvme drive (yes it's silly but mdadm doesn't care).
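
A minimal sketch of bringing one up degraded - device names are examples only:

Code:

# Create a two-member raid1 with one member deliberately absent
# ("missing"), i.e. a degraded array.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1p2 missing
# Should show md1 running but degraded.
cat /proc/mdstat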

Off the top of my head (beware typos)..

Code:

/dev/nvme0n1p1  /boot/firmware
/dev/nvme0n1p2  /dev/md1
/dev/nvme0n1p3  /dev/md1
..where both "/dev/nvme0n1p2" and "/dev/nvme0n1p3" are the same size, obviously. If that can be made to boot then I see no reason why a pair of nvme drives wouldn't work. You'd create an identical partitioning scheme on "/dev/nvme1n1", then add "/dev/nvme1n1p2" as a hot spare to /dev/md1, then force-fail "/dev/nvme0n1p3" and wait for /dev/md1 to be resync'd (rough commands are sketched a bit further down). Now you have..

Code:

/dev/nvme0n1p1  /boot/firmware
/dev/nvme0n1p2  /dev/md1
/dev/nvme0n1p3  (failed)
/dev/nvme1n1p1  (empty)
/dev/nvme1n1p2  /dev/md1
/dev/nvme1n1p3  (empty)
You can now remove "/dev/nvme0n1p3" and assign it to /dev/md2 along with "/dev/nvme1n1p3", where /dev/md2 is your data. We're ignoring /dev/md0 here because that's the rpi "/boot/firmware" (ie: linux /boot). Ignore this part of the explanation if it makes no sense, and as a hint, ignore any google results whose procedure leaves you with /dev/md127 as the running array. I'm glossing over this paragraph because it's unimportant compared to what comes next.
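
Roughly, that spare/fail/remove dance would be (untested, beware typos):

Code:

# Add the second drive's partition as a hot spare; mdadm rebuilds onto
# it as soon as a member fails.
mdadm --add /dev/md1 /dev/nvme1n1p2
# Force-fail the member on the first drive and watch the resync.
mdadm --fail /dev/md1 /dev/nvme0n1p3
watch cat /proc/mdstat
# Once resync'd, pull the failed member out, wipe its superblock and
# reuse it for the data array.
mdadm --remove /dev/md1 /dev/nvme0n1p3
mdadm --zero-superblock /dev/nvme0n1p3
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3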

Almost always, you'll find any sane mdadm setup imposes another layer on top of mdadm: historically LVM, but btrfs etc are perfectly valid as those modules will be in your initramfs also.
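
For instance, LVM on the data array might look like this - a sketch, where "vg0" and "data" are made-up names:

Code:

# Make the array an LVM physical volume, then carve out a logical volume.
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data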

A critical spinning rust 24/7 linux environment is often..

Code:

/dev/md0  /boot   (raid1)
/dev/md1  (OS)    (raidN)
/dev/md2  (data)  (raidN or raidM)
..where /dev/md0 and /dev/md1 are on one set of disks and /dev/md2 on another set. A common historic error with /dev/md0 is forgetting to install GRUB on the second raid1 disk: most of grub lives inside /dev/md0 but the MBR doesn't, so the second disk is left without one. The rpi has an advantage here because it is blindingly obvious that if "/dev/nvme0n1p1" fails it ain't gonna boot, the solution being to rsync "/dev/nvme0n1p1" onto "/dev/nvme1n1p1" at appropriate times.
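
Something along these lines, untested - the mount point is made up, and "/dev/nvme1n1p1" must already be formatted FAT32 like the original:

Code:

# FAT can't hold POSIX ownership/permissions, hence -rt rather than -a.
mount /dev/nvme1n1p1 /mnt/fw-backup
rsync -rt --delete /boot/firmware/ /mnt/fw-backup/
umount /mnt/fw-backup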

[damn, got interrupted, lost the plot a bit]

Don't forget with mdadm: use an internal write-intent bitmap (visible via mdadm -D).
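
For example:

Code:

# Add an internal write-intent bitmap (much faster resync after an
# unclean shutdown) and confirm it took.
mdadm --grow --bitmap=internal /dev/md1
mdadm -D /dev/md1 | grep -i bitmap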

Statistics: Posted by swampdog — Wed Nov 27, 2024 6:56 pm


