Neural Amp Modeler and LV2

Since Neural Amp Modeler (NAM) now has an LV2 plugin, would it be possible to use it on a Zynthian?

I’ve tested NAM (non-LV2) on a Windows machine with Reaper, with great results.

A bit off topic here, but…
I am planning on experimenting with an Orange Pi 5B, both with NAM and Zynthian separately.
I have one on order, but it’s quite a few weeks out from delivery. I’ll report back here once I’ve run some good tests.

Raspberry Pis were becoming so expensive and difficult to obtain that I thought it would be worthwhile to explore a similar platform.

4 Likes

I tried to build the plugin on the Raspberry Pi 3, but without success =(

1 Like

We should ask how he managed to build it for RPi4, and if it is possible to get the same for RPi3

1 Like

I received my OrangePi this past week, and I installed an optimised version of Ubuntu 22.04 directly to the onboard storage. Armbian worked fine on the SD card, but I could not find a method to successfully install it to the onboard eMMC/NAND storage.

I used this method:

I’m following his instructions here, but naturally, in the land of Linux, there were a number of pieces missing.

I found there was a lot from the LV2 repository on GitHub that was not installed. When I tried to compile mod-host, I got a lot of “no such file or directory” errors. I solved that by literally copying the contents of the lv2/lv2plug.in directory to my lv2 folder.

At least I got a new error - progress? ha

This time it gave errors with “LV2_STATE__freePath”. I saw notes from the coder that this was for compatibility with older LV2 headers, so I attempted to comment it out - still no luck, just new errors. So I put the code back the way I found it.

I did successfully compile the LV2 Neural Amp Modeler with these instructions:

I got a number of warnings, but no failure. I’m going to see if this will work with Carla (which I was able to install).

I’ve ordered a USB audio breakout device to help with guitar testing. The Orange Pi does not yet have the equivalent of a HiFiBerry, and the onboard combination jack did not work with a guitar input signal. I even built a TRRS plug and breakout box to use standard mono guitar cables. The audio out worked, but not the audio in. Once the device arrives and I take it for a spin, I’ll report back with the results.

Very curious to see where this goes. I’ve been playing with the Win11 version of NAM and I must say the quality is really up there, but ultimately I would like to have this running on an SBC.

I did buy the Hotone JOGG interface/pedal, which is what the guy doing the LV2 implementation of NAM is using, so as to be ready once it’s a bit more mature. He is also quite active in the main NAM development.

I probably wouldn’t try it on anything with less performance than an RPi4. NAM needs more processing power than similar products like ToneX.

Cheers

1 Like

Following the build instructions on GitHub was simple. It built without error (just some warnings) and created an LV2 plugin, but it fails to run.

lilv_lib_open(): error: Failed to open library /usr/lib/lv2/neural_amp_modeler.lv2/neural_amp_modeler.so (/usr/lib/lv2/neural_amp_modeler.lv2/neural_amp_modeler.so: undefined symbol: _ZNSt10filesystem6statusERKNS_7__cxx114pathE)

I have reported it upstream as a bug report.

I got exactly the same error. I suspect it could be the LV2 version, but I won’t be sure until we have updated LV2 to the latest.

Regards,

I think it relates to the compiler version. We have GCC 8.3.0, which requires the -lstdc++fs flag for std::filesystem support. I haven’t yet managed to make this flag have any effect on the compile / link.
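For reference, a minimal sketch of what I think is going on (assuming the GCC 8 explanation is right). The undefined symbol above demangles to std::filesystem::status(const std::filesystem::__cxx11::path&), and with GCC 8.x that code lives in the separate libstdc++fs library, so the flag has to come after the sources/objects on the link line:

```cpp
// check_fs.cpp -- minimal reproduction, not part of the plugin itself.
// With GCC 8.x, std::filesystem is in a separate library, so link with
// -lstdc++fs *after* the objects, e.g.:
//   g++ -std=c++17 check_fs.cpp -o check_fs -lstdc++fs
// With GCC 9+ (e.g. Bullseye's 10.2.1) the extra flag is not needed.
#include <filesystem>
#include <iostream>

int main() {
    // std::filesystem::status() is the call behind the undefined symbol
    // reported above (_ZNSt10filesystem6statusERKNS_7__cxx114pathE).
    const std::filesystem::path p{"."};
    std::cout << "is_directory: "
              << std::filesystem::is_directory(std::filesystem::status(p))
              << std::endl;
    return 0;
}
```

If the plugin’s build hard-codes its link flags, the flag would probably need to be appended there rather than passed on the command line, which might explain why it seemed to have no effect.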

[Edit] I note that the LV2 plugin only presents input and output level as controls that Zynthian recognises. It exposes these two control ports and, I think, expects all other control to be done via some other mechanism that jalv / Zynthian does not support. So there may be limited functionality even if we get it to compile!
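For anyone who wants to check what a lilv-based host (jalv, Zynthian) actually sees, here is a quick sketch that lists a plugin’s input control ports. The plugin URI below is a placeholder, not the real one; substitute whatever the installed bundle’s manifest.ttl declares:

```cpp
// list_ports.cpp -- list the input control ports a lilv host would expose.
// Build (assuming the lilv dev package is installed):
//   g++ list_ports.cpp -o list_ports $(pkg-config --cflags --libs lilv-0)
#include <lilv/lilv.h>
#include <cstdio>

int main() {
    LilvWorld* world = lilv_world_new();
    lilv_world_load_all(world);  // scan all bundles on LV2_PATH

    // Placeholder URI: replace with the real plugin URI from its manifest.ttl.
    LilvNode* uri = lilv_new_uri(world, "urn:example:neural_amp_modeler");
    const LilvPlugin* plugin =
        lilv_plugins_get_by_uri(lilv_world_get_all_plugins(world), uri);
    if (!plugin) {
        fprintf(stderr, "plugin not found\n");
        return 1;
    }

    LilvNode* control = lilv_new_uri(world, LILV_URI_CONTROL_PORT);
    LilvNode* input   = lilv_new_uri(world, LILV_URI_INPUT_PORT);

    for (uint32_t i = 0; i < lilv_plugin_get_num_ports(plugin); ++i) {
        const LilvPort* port = lilv_plugin_get_port_by_index(plugin, i);
        if (lilv_port_is_a(plugin, port, control) &&
            lilv_port_is_a(plugin, port, input)) {
            LilvNode* name = lilv_port_get_name(plugin, port);
            printf("control port %u: %s\n", i, lilv_node_as_string(name));
            lilv_node_free(name);
        }
    }

    lilv_node_free(input);
    lilv_node_free(control);
    lilv_node_free(uri);
    lilv_world_free(world);
    return 0;
}
```

If this only prints the input/output level ports, then anything else (like the model file) must reach the plugin through some other channel (patch/atom messages or state), which is exactly the limitation described above.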

[Edit] It compiles and loads on Bullseye running an aarch64 Linux kernel with GCC 10.2.1, and jalv.gtk3 allows loading of a neural model, so we may have to wait until we migrate to Bullseye.

Loading a model from GitHub - pelennor2170/NAM_models (a repository collecting model files for Neural Amp Modeler all in one place) gives a warning: “Unknown object type?” but it seems to load. (I haven’t tested audio yet.) It ramps up one core to 40%, which is high but could be acceptable in a snapshot. (This was always going to be a heavy load.)

Update: I tested by plugging my guitar and headphones directly into a cheap USB audio adapter and it works! (This is still on the RPi400 running Buster, not Zynthian.) Most of the models are for distorted amp simulations, but there are some clean ones in there. A couple of very quick 10s recordings:


[Edit] However, presets are supported, so we could have a system that loads them using jalv’s native preset system. All academic until we can get it to run on Zynthian, of course…
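To show roughly what that native mechanism looks like, here is a sketch of how a lilv host can enumerate the presets known for a plugin (again with a placeholder plugin URI, and making the assumption that the plugin’s presets are visible as standard pset:Preset resources):

```cpp
// list_presets.cpp -- sketch: enumerate the presets lilv knows about for a
// plugin, the same mechanism jalv uses for its native preset support.
// Build: g++ list_presets.cpp -o list_presets $(pkg-config --cflags --libs lilv-0)
#include <lilv/lilv.h>
#include <cstdio>

int main() {
    LilvWorld* world = lilv_world_new();
    lilv_world_load_all(world);

    // Placeholder URI -- use the real plugin URI from its manifest.ttl.
    LilvNode* plugin_uri = lilv_new_uri(world, "urn:example:neural_amp_modeler");
    const LilvPlugin* plugin =
        lilv_plugins_get_by_uri(lilv_world_get_all_plugins(world), plugin_uri);
    if (!plugin) {
        fprintf(stderr, "plugin not found\n");
        return 1;
    }

    LilvNode* pset_class = lilv_new_uri(world, "http://lv2plug.in/ns/ext/presets#Preset");
    LilvNode* rdfs_label = lilv_new_uri(world, LILV_NS_RDFS "label");

    LilvNodes* presets = lilv_plugin_get_related(plugin, pset_class);
    LILV_FOREACH(nodes, i, presets) {
        const LilvNode* preset = lilv_nodes_get(presets, i);
        lilv_world_load_resource(world, preset);  // pull in the preset's own data
        LilvNodes* labels = lilv_world_find_nodes(world, preset, rdfs_label, NULL);
        const char* label = labels ? lilv_node_as_string(lilv_nodes_get_first(labels))
                                   : lilv_node_as_uri(preset);
        printf("preset: %s (%s)\n", label, lilv_node_as_uri(preset));
        lilv_nodes_free(labels);
    }

    lilv_nodes_free(presets);
    lilv_node_free(rdfs_label);
    lilv_node_free(pset_class);
    lilv_node_free(plugin_uri);
    lilv_world_free(world);
    return 0;
}
```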

4 Likes

I think this is the main use-case for Neural Amp Modelers :grin:
Distortion, due to its “non-linear” nature, is quite difficult to emulate using more conventional algorithms. I think there was some discussion about this subject some time ago:

Regards,

1 Like

Indeed, it did manage to introduce feedback after holding a note for a while on some of the models, which was quite impressive (considering I was wearing headphones). I need to try it with an amp to see how that behaviour alters with real feedback (howl-round).

1 Like

I built Smart Amp and installed it on the overclocked Zynthian, and it continually xruns, using 100% of a core. It also exposes none of its controls to jalv. So it looks like that implementation is not much use for us.

1 Like

Ask and you may receive! The build issue I reported has been resolved and I have built NAM as an LV2 plugin on Zynthian - yay! Next, to load a model and see how it performs…

[Edit] And the answer… not so good! It does apply the model and sound gets through (so much better than the Smart Amp / Pedal, which just made odd sounds), but it has very regular xruns (every few seconds) and uses almost 100% of one core. So I don’t think this is going to be useful to us at the moment.

4 Likes

Hopefully adding multithreading would spread the load across the other cores and then it might become usable.

Did you try running it on its own to see if the RPi can handle it?

Cheers

It runs much better on aarch64 but barely on 32-bit, even with Zynthian stopped. The upstream author suggests trying some lighter models, which I will do when I get some time, but it is likely to be on the edge of xruns, so fragile at best. This is yet another encouragement to migrate to Bullseye and 64-bit. Now where did we put all that spare time???

Let’s migrate. After releasing the next stable-2305, I will migrate to 64-bit while you prepare the chainrefact merge :ok_hand:

4 Likes

I tried some lighter models from here and they do work better. I am able to run these models without xruns, with one core sitting at approx. 40%. This makes it far more useful. I saw that it did not previously support sample rates other than 48000. I think it does work at other sample rates now, but it looks like it is more CPU efficient at 48000.

This still won’t be useful in the current version of Zynthian because it will be launched without the GUI (due to it not having a custom UI), and hence there isn’t a way to select the model. We could change this in the future. For my tests I tweaked the source code to show the default GUI, which only works if VNC is enabled, so we may add a mechanism to load models, maybe via the preset system. Don’t hold your breath as we are busy with other stuff.

3 Likes

I think they are still busy adding features and hopefully they will soon focus more on performance.
See this thread: [FEATURE] Support Clang compilation · Issue #222 · sdatkinson/NeuralAmpModelerPlugin · GitHub

Cheers

That optimisation looks to be in the upstream project and not the LV2 project, but maybe the LV2 project could benefit from such optimisation too.

I have written a script to convert a folder of models to Zynthian presets, which is a step closer to integrating this.
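Not the actual script, but for illustration, a rough sketch of that kind of conversion, assuming Zynthian picks up standard LV2 preset bundles and that the plugin exposes the model file through a single property. The plugin URI and model property URI below are placeholders, not the real ones, so treat the generated Turtle as a shape rather than a spec:

```cpp
// nam2preset.cpp -- illustrative only: turn a folder of .nam files into one
// LV2 preset bundle per model.  The plugin URI and model property URI are
// placeholders; the real values must be read from the plugin's own .ttl files.
// Build: g++ -std=c++17 nam2preset.cpp -o nam2preset   (add -lstdc++fs on GCC 8)
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

static const std::string kPluginUri = "urn:example:neural_amp_modeler";        // placeholder
static const std::string kModelProp = "urn:example:neural_amp_modeler#model";  // placeholder

int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "usage: nam2preset <models_dir> <presets_dir>\n";
        return 1;
    }
    const fs::path models_dir = argv[1];
    const fs::path out_dir    = argv[2];

    for (const auto& entry : fs::directory_iterator(models_dir)) {
        if (entry.path().extension() != ".nam")
            continue;

        const std::string name   = entry.path().stem().string();
        const fs::path    bundle = out_dir / (name + ".preset.lv2");
        fs::create_directories(bundle);

        // manifest.ttl declares the preset and the plugin it applies to.
        std::ofstream(bundle / "manifest.ttl")
            << "@prefix lv2:  <http://lv2plug.in/ns/lv2core#> .\n"
               "@prefix pset: <http://lv2plug.in/ns/ext/presets#> .\n"
               "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n\n"
            << "<" << name << ".ttl>\n"
            << "    a pset:Preset ;\n"
            << "    lv2:appliesTo <" << kPluginUri << "> ;\n"
            << "    rdfs:seeAlso <" << name << ".ttl> .\n";

        // The preset body points the (assumed) model property at the .nam file.
        std::ofstream(bundle / (name + ".ttl"))
            << "@prefix pset: <http://lv2plug.in/ns/ext/presets#> .\n"
               "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n\n"
            << "<" << name << ".ttl>\n"
            << "    a pset:Preset ;\n"
            << "    rdfs:label \"" << name << "\" ;\n"
            << "    <" << kModelProp << "> <file://"
            << fs::absolute(entry.path()).string() << "> .\n";
    }
    return 0;
}
```

How the model path is actually carried in a preset (plugin state vs. a patch property) is exactly the detail the real script has to get right, and that depends on the plugin’s own definitions.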

2 Likes

Have you seen mod? MOD Audio · GitHub
It’s similar to NAM in the sense that it uses AI, but it uses a lighter algorithm that would be better suited to the RPi. This might be a much better fit, at least until there’s an RPi5.

I believe they also have the intention to develop the functionality to consume ToneHub profiles, which is probably one of NAM’s strongest points.

Cheers

1 Like

I can’t see where it says it uses a lighter algorithm, but I trust you have that on good authority. I can’t spend time figuring out how to build this plugin. It gives no info and is tightly integrated with MOD, which has its own build system. If someone wants to document how to do this, then we could try it…