Oram/Pi5 Boot from NVMe?

Trial and error?? :nerd_face: :rofl: :rofl:

FYI, to get the SD-card bus routing right, I designed and ordered this test board:

Of course, you learn a lot along the way, but this kind of procedure is slow and time-consuming. Luckily I rarely had to do this kind of thing until now :sweat_smile:

All the best,


Yeah. You deserve a gold medal for that.

There were a lot of NVMe drive extension boards announced. They need perfect line-length matching. I just wonder whether Raspberry Pi has NDA documents to share with third-party accessory-builder companies.

I have written a script that copies a running instance of zynthian from uSD to NVMe.

  • Attach NVMe to zynthian
  • Boot zynthian using uSD as normal
  • Run the script
  • Power down the zynthian
  • Remove the uSD
  • Power up

Hopefully you end up with the same zynthian, with all its config and files, but now running from NVMe. The boot order is uSD -> NVMe -> USB, so you can plug a uSD card in and use that instead. This slows down NVMe boot while it looks for a uSD, but it feels the safest. You can change to boot from NVMe first with raspi-config nonint do_boot_order B2, but if your NVMe stops working you may need to remove it to allow booting from uSD. This may happen if the RPi can find the NVMe but cannot boot from it.
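
If you want to check which boot order is currently configured, the bootloader EEPROM config can be inspected with the standard Raspberry Pi tooling:

rpi-eeprom-config | grep BOOT_ORDER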

You can copy the script to your zynthian and change its permissions to allow it to be executed, or you can launch it with the sh command. I have tested this on my RPi5 a few times and it works, but YMMV. Use at your own risk. I strongly recommend backing up your work before doing this.
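
For example, once the script is on the device, either of these should work:

chmod +x zynthian_sd2nvme.sh
./zynthian_sd2nvme.sh

or, without changing permissions:

sh zynthian_sd2nvme.sh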

zynthian_sd2nvme.sh (1.7 KB)

We could add this as a feature, e.g. from webconf / admin menu but that feels a bit dangerous.


Thanks @riban, I will try that as soon as I add NVMe to my 5.1 kit, as a possible alternative to the procedure I have tested so far.

Just in case, would it suffice to run the following in /root, on a terminal or via ssh:

$ bash zynthian_sd2nvme.sh

Or would some other CLI step be needed?

Yes, it should work after you have copied the script to zynthian, e.g. with scp.
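
For example, from the machine where you downloaded the script (assuming the default zynthian.local hostname; substitute your device's IP address if needed):

scp zynthian_sd2nvme.sh root@zynthian.local:/root/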


Ah!!!
I’ve just received my first RPi 5, but the 2242 NVMe drive and its HAT adapter board are still on the way …

I will test ASAP too!

Thanks a lot @riban !

:+1: Thank you, will test and report!

Just thought about such an option yesterday, as there is a corresponding procedure in Raspberry Pi OS (Raspbian).
Haven’t tested the script yet (and am currently just test-running on my Pi 5), but from looking at the code, line 3:

umount /mnt/nvme/*

feels very specific. I’d propose something more generic, like:

for i in $(lsblk -o NAME,MOUNTPOINT | grep nvme | awk '{print $2}'); do umount "$i"; done

I’d also maybe think about optimizations for SSDs:
https://wiki.debian.org/SSDOptimization
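
For instance, that page discusses mount options like noatime. A purely illustrative fstab entry (the PARTUUID here is a placeholder) might look like:

PARTUUID=xxxxxxxx-02  /  ext4  defaults,noatime  0  1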


Thanks for the suggestions.

This is a one-time process where we first create the mount points, then mount the two NVMe partitions that we just created, write data to them, then unmount them. We could umount each partition specifically, but this command does the same thing. I don’t see an advantage in searching for something that we already know!

This is mostly concerned with the fact that some SSD technology has limited write cycles before the disc becomes corrupted. There is a process of spreading writes across the disc which avoids rewriting the same sectors. This basically allocates n copies (n is often 4) of the space, which improves longevity at the cost of capacity. You used to be able to buy more robust SSDs which were 4x the price because they were just the same discs utilising this technique. (I haven’t looked in the past few years to see if this practice is still prevalent.) The main difference I noticed with this script is that the block size is different, but I think it works okay. Indeed, the link you provided to optimisation suggests using 4096, which is what parted seems to use by default, at least on my NVMe.
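
If anyone wants to check their own drive, parted reports the sector sizes (the device name here is an assumption):

parted /dev/nvme0n1 print
(look for a line like "Sector size (logical/physical): 512B/4096B")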

Concerning the optimizations, you’re right. My info seems to be outdated.

Regarding the umount: (I’m referring to the umount on the 4th line, not the one on the 47th, which is quite OK.) As I understand the script and that line (correct me if I’m wrong), the idea at this position is to unmount any existing mounts of the device the script will write to later on.
At this point the script has not done any mounts yet, but a user might have mounted something beforehand, possibly on mount points other than those the script later uses.
Maybe a better approach would be to simply check here for a possible NVMe mount and exit the script with an appropriate error message ("please unmount all NVMe mounts before running the script") if there is one.
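
A hypothetical sketch of such a pre-flight check, assuming the drive appears as /dev/nvme0n1:

# abort early if any partition of the NVMe drive is still mounted
if lsblk -no MOUNTPOINT /dev/nvme0n1 | grep -q .; then
    echo "Please unmount all NVMe mounts before running the script" >&2
    exit 1
fi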

But thanks for the script @riban, great work!

Ah yes! I added that line late, whilst testing, because I had manually mounted or had partially run the script, so it was to catch those cases. Still, I am not sure it is particularly useful to change it. We know what a zynthian system looks like and it shouldn’t have these mount points, so forcing an umount of these subdirectories still sounds wise, if a little blunt!

“…shouldn’t have these mount points…”
Absolutely! But never underestimate the creativity of users… :wink:


Hi all,

In my current hardware config it is quite awkward to access/remove the uSD, and I would prefer to use USB drives exclusively going forward. Two questions: 1) What problems would arise if one replaced all the NVMe references in the script with appropriate USB drive references? 2) If one did not wish to remove the uSD at the end, could this still work?
Thanks Harry

Try it and see. :wink: It could/should work but I don’t have a spare USB drive to test. (I blame the kids for stealing all my drives!)

You can set the boot order to 0xf14 which is read right to left:

1: USB
4: SD
f: Retry

This is not an option offered by raspi-config. The closest is raspi-config nonint do_boot_order B2, which tries to boot from NVMe if available, otherwise from USB if available, otherwise from SD card.

If you want to just try USB then SD, you can do this manually by running rpi-eeprom-config --edit, setting BOOT_ORDER=0xf14, then saving.
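
For illustration, after saving, the relevant part of the EEPROM config would contain something like:

[all]
BOOT_ORDER=0xf14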

Let us know how you get on. Maybe we could update the script to offer options.

Hi @zynthianers!

I am getting back to the idea of providing my updated V 5.1 with an SSD drive.

I plan to make use of @riban’s bespoke script for copying Oram directly to NVMe, invoking it in /root with:

$ bash zynthian_sd2nvme.sh

I have a few questions:

1] @riban, should I mount the NVMe drive in the system beforehand?
I surmise yes, but the sequence of commands suggested in the Geekpi HAT manual seems a bit redundant for Zynthian/Oram.
Could you maybe suggest a minimal procedure, just in order to run your script thereafter?

2] As I somehow suspected, I should have installed the NVMe HAT and drive while upgrading the case to V5.1. As it stands now, the PCI Express connector sits tightly tucked away between the thermal block and the main board. It turns out to be very difficult to reach the flat cable’s lock on the PCIe port with pliers or a small screwdriver, due to a raised edge of the bottom case. Moreover, sliding the flat cable into the connector in such a cramped space seems to border on the impossible. I am quite reluctant to remove either the main board from the bottom case or the Pi 5 from the thermal block to get hold of the SBC, because either way I would break the thermal tape padding, whose careful placement is one of the least convenient parts of the Zynthian kit build. Any suggestions @jofemodo as to how to proceed?

3] Provided that I manage to get beyond the previous step, what solution should I adopt to secure the NVMe HAT in place? As @jofemodo promptly pointed out, its 2.25 mm width fits almost exactly between the V5 main board and the border of the metal enclosure, and a 5x60 mm 16-pin 100 Ohm flat cable from AliExpress could join the HAT and the PCIe connector.

I can foresee two ways:
1 - Drilling two holes in the bottom case, in order to screw two brass spacers into the corresponding holes of the HAT, thus holding the whole package in place.
2 - Using adhesive tape to stick the HAT and drive to the bottom of the case.

Solution 1 would be neater, more durable and more reversible from an assembly standpoint, but would lose the thermal dissipation contact with the enclosure.
Solution 2 would be relatively easier to implement, but a bit makeshift and harder to modify in the future, and would grant full thermal contact with the bottom case.

Any advice on that?

Thanks,

All best regards :slight_smile:

You don’t need to mount the NVMe manually; that would be pointless. The script does everything. One of the first things it does is to try to unmount the NVMe in case you have mounted it. The script then partitions the NVMe, creates file systems on the partitions, mounts them and copies the (relevant) content from the SD card to the NVMe. So you should just need to run the script from wherever you want as user root, which is the default user on zynthian.
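
For the curious, here is a rough sketch of the kind of steps such a transfer involves. The device name, partition layout, and mount points are assumptions for illustration; riban’s actual script is the authoritative version:

# DESTRUCTIVE, illustrative only; use riban's script instead
parted -s /dev/nvme0n1 mklabel msdos
parted -s /dev/nvme0n1 mkpart primary fat32 4MiB 516MiB   # boot partition
parted -s /dev/nvme0n1 mkpart primary ext4 516MiB 100%    # root partition
mkfs.vfat /dev/nvme0n1p1
mkfs.ext4 -F /dev/nvme0n1p2
mkdir -p /mnt/nvme/boot /mnt/nvme/root
mount /dev/nvme0n1p1 /mnt/nvme/boot
mount /dev/nvme0n1p2 /mnt/nvme/root
cp -r /boot/firmware/* /mnt/nvme/boot/   # copy boot files
rsync -ax / /mnt/nvme/root/              # copy root fs, staying on one file system
# then point cmdline.txt and etc/fstab on the NVMe at the new PARTUUIDs
umount /mnt/nvme/*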


Thank you @riban!
What you describe is the best OS transfer scenario I could hope for :+1:

All the best

Meanwhile I have all the needed parts lying around here for endless Zynthian pleasure, but I also have a wife who forbids playing with my Christmas present too early…
But I also plan to boot the regular system from NVMe (using the GeekPi 10 HAT, which comes with the needed flat cable) and to insert the SD for Vangelis, testing, etc.
For securing the HAT I also thought about the two versions you mentioned, but currently I’m thinking about simply gluing two spacers to the bottom of the case. We’ll see how durable that turns out with superglue, and I’ll fall back to your solution #1 if it doesn’t work.