I've got this working in sdm now. As I was working through it I observed that the FAT partition's type got changed to EFI after the 'set 2 lvm on'. I "fixed" this by resetting the partition type code after the pvcreate/vgcreate/lvcreate sequence (the sgdisk line at the end of this post).

More than a couple of days - makes note to self never to attempt free time predictions for the foreseeable future!
I did notice after posting that mine comes out as..

Code:
1 4194kB 516MB 512MB fat16 bootfs boot, esp

whereas on another rpi it's..

Code:
1 4194kB 567MB 563MB fat32 bootfs msftdata

Can someone enlighten me as to the correct mkfs incantation to achieve the latter? I'm guessing the "Flags" don't matter much, but it doesn't want to be fat16 methinks.
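Partly answering my own question after some digging, so treat with suspicion: the fat16/fat32 part comes from mkfs (mkfs.fat picks the FAT width from the partition size unless you force it with -F 32), while parted's "Name" and "Flags" columns come from the GPT entry, not the filesystem. Something like this, with a hypothetical device name:

```shell
# Sketch only, device name hypothetical; point it at the real card.
make_bootfs() {
    dev=$1                                # e.g. /dev/sda
    mkfs.fat -F 32 -n bootfs "${dev}1"    # -F 32 forces fat32; left alone it may pick fat16 at ~512MB
    sgdisk --typecode=1:0700 "$dev"       # 0700 = Microsoft basic data, shown by parted as "msftdata"
    sgdisk --change-name=1:bootfs "$dev"  # GPT partition name, parted's "Name" column
}
```

The esp flag is just type ef00 in sgdisk terms, so swapping 0700 for ef00 is what flips it back the other way.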
As to the advantages, I guess an overview of a typical lvm installation is the best place to start (and to point out the big downside). Your typical redhat server would have /boot/ as an ext filesystem and the rest lvm. Let's ignore hardware raid and consider it in terms of software raid. Pseudodata..

Code:
#blockA
/dev/sda    grub MBR (master boot record)
/dev/sdb    grub MBR
/dev/sda1   mdadm raid1 /dev/md1
/dev/sdb1   mdadm raid1 /dev/md1
/dev/md1    ext /boot/    #initrd

Once "initrd" is loaded, everything else is named in terms of lvm. That is, "initrd" will contain something like..

Code:
#blockB
/dev/mapper/vg00-lv00    /    #rootfs

Thus rootfs is found on logical volume (LV) "lv00" in volume group (VG) "vg00". The swapfile might be /dev/mapper/vg00-lv01, for instance. Once lvm is active the "mapper" stuff can be dispensed with, so we can say /dev/vg00/lv00, /dev/vg00/lv01 and so forth.
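For concreteness, the sort of sequence that builds those names (sizes invented, and this is a sketch rather than what sdm actually runs):

```shell
# Hypothetical partition and sizes; vg00/lv00/lv01 as in blockB above.
create_lvm_root() {
    part=$1                        # e.g. /dev/sda2
    pvcreate "$part"               # mark the partition as a physical volume
    vgcreate vg00 "$part"          # volume group
    lvcreate -n lv00 -L 8G vg00    # rootfs -> /dev/mapper/vg00-lv00
    lvcreate -n lv01 -L 1G vg00    # swap   -> /dev/mapper/vg00-lv01
    mkfs.ext4 /dev/vg00/lv00
    mkswap /dev/vg00/lv01
}
```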
Fortunately we can dispense with blockA on the rpi because that's /boot/firmware/, and (imo) using mdadm on an rpi is pointless. The downside is in blockB: because we're using names, they can collide. Consider a scenario where your machine is using /dev/vg00/lv00 and you need to rescue a disk. You insert it only to discover it too has a /dev/vg00/lv00. The solution is to use the hostname as the VG prefix, hence /dev/mapper/pi23-rootfs (aka /dev/pi23/rootfs) in my original post. This is only a convention though, kept for backward compatibility - I still have boxes using the old one!
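If you do get bitten before adopting the convention, lvm can tell the twins apart by UUID, so the foreign VG can be renamed rather than thrown away. A sketch (rescue00 is a made-up name; the UUID comes from the vgs output):

```shell
rescue_collision() {
    vgs -o vg_name,vg_uuid    # two rows called vg00; note the foreign one's UUID
    vgrename "$1" rescue00    # $1 = UUID of the inserted disk's vg00
    vgchange -ay rescue00     # both VGs can now be active side by side
}
```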
The advantages are..
(1) Movement of disks (live)
(2) Increase in filesystem size (live)
(3) Decrease in filesystem size (live if it can be unmounted)
(4) Snapshots (think databases)
(5) Mirroring

(1) The physical volume (PV) concept allows another disk (or many) to be assigned to a VG. This permits a running system to be moved onto another device while the OS is running.
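Advantage (1) in command form, with hypothetical device names and assuming the new disk is already partitioned; a sketch, not something I've run on the rpi yet:

```shell
migrate_disk() {
    pvcreate /dev/sdb1        # the new disk
    vgextend vg00 /dev/sdb1   # VG now spans both PVs
    pvmove /dev/sda1          # drain all extents off the old PV (live; can take hours)
    vgreduce vg00 /dev/sda1   # old disk is now free to pull
}
```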
(2) While a VG has free space, any LV within it (and its filesystem) can be expanded while the OS is running. If the VG has no free space, add another disk as per (1).
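(2) in command form; the -r flag makes lvextend resize the filesystem in the same step (size and names are examples):

```shell
grow_lv() {
    lvextend -r -L +5G /dev/vg00/lv00          # grow by 5G, filesystem included, while mounted
    # or grab everything the VG has left:
    # lvextend -r -l +100%FREE /dev/vg00/lv00
}
```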
(3) It's a bit of a shame the filesystem has to be unmounted to shrink it, although in practice it's not the problem new lvm users think it is.
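For ext4 the shrink therefore looks something like this (mount point, size and LV name invented for illustration):

```shell
shrink_lv() {
    umount /srv                          # ext4 cannot shrink while mounted
    lvreduce -r -L 10G /dev/vg00/lv02    # -r shrinks the filesystem first (runs fsck)
    mount /dev/vg00/lv02 /srv
}
```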
(4) Stop a service/app, create a snapshot, start the service/app. Do a cold backup at your leisure. When the cold backup is done, delete the snapshot.
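(4) sketched out; the snapshot only needs space for blocks that change while it exists, so a modest size usually covers a backup window (names and paths are examples):

```shell
snapshot_backup() {
    lvcreate -s -L 1G -n lv00snap /dev/vg00/lv00   # copy-on-write snapshot of rootfs
    mount -o ro /dev/vg00/lv00snap /mnt
    tar -C /mnt -cf /backup/rootfs.tar .           # cold backup at leisure
    umount /mnt
    lvremove -f /dev/vg00/lv00snap                 # snapshot has served its purpose
}
```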
(5) I know little about this (lvm certainly can mirror). I did experiment with it years ago but wasn't able to progress far enough to be certain how a recovery happens. It was one of those business cases involving oracle rman - small (4)(5) stick.
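For anyone who wants to experiment with (5), converting an existing LV to raid1 is, I believe, along these lines (untested by me, same caveat as above):

```shell
mirror_lv() {
    lvconvert -m 1 /dev/vg00/lv00               # add one mirror copy (needs a 2nd PV in the VG)
    lvs -a -o name,copy_percent,devices vg00    # watch the initial sync progress
}
```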
I imagine btrfs can do all this and likely more, given it's more recent. I figured an overview might be helpful, moving forward(*).
(*) Management speak for "I still haven't had time to do anything". The rpi5 has been sat there idle.
Code:
sgdisk --typecode 1:0700 $dstdev >/dev/null  #Redo this because 'set 2 lvm on' changes the p1 type to EFI (?)

I've got a few "life" things going on so I won't be publishing it for a week or so.
Statistics: Posted by bls — Thu Mar 07, 2024 7:47 pm