LV2 synth: how to limit CPU usage

In Zynthian, some synth plugins (Surge, Vitalium…) produce noise or crash because they are too heavy.

How is it possible to limit the CPU usage, for example by reducing the polyphony? Is there some MIDI plugin for that?

It might be possible to lower the sampling rate. I imagine whether this is possible depends on the synth plugin or engine you’re using.

I know that a reduced sampling rate is used with PianoTeq to let it run successfully. You can find some forum threads that talk about it by searching on “pianoteq sampling rate”.

Is it possible for Surge? How?

Each synth is different. Some of those you mention can be quite resource hungry. It can depend on the features used, e.g. some presets use high-power features. You need to balance the synths and the presets used against the available CPU. I tend to avoid these synths because it isn't necessarily obvious which presets are hungrier than others, and most of the advantages of these synths come from exactly those heavy features.

Note that some synths use all their resources all the time, e.g. setBfree, so you always know how they will perform. Others (most engines) ramp up their resource use on demand, e.g. when keys are pressed.


I understand. In general, is there a way to limit the polyphony with a MIDI filter?
I know that even if you send only 4 notes, keeping the sustain pedal pressed makes the CPU load high…
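A polyphony-limiting MIDI filter is straightforward in principle: track which notes are sounding and steal the oldest voice when a new note-on would exceed the cap. A minimal sketch in Python (the class name and message representation are hypothetical, not an existing Zynthian plugin; messages are `(status, note, velocity)` tuples, with `0x90` = note-on and `0x80` = note-off):

```python
class PolyphonyLimiter:
    """Drop-oldest voice stealing: never let more than max_voices notes sound."""

    def __init__(self, max_voices=4):
        self.max_voices = max_voices
        self.active = []  # notes currently sounding, oldest first

    def process(self, msg):
        """Return the list of MIDI messages to forward downstream."""
        status, note, velocity = msg
        out = []
        if status == 0x90 and velocity > 0:                 # note-on
            if len(self.active) >= self.max_voices:
                oldest = self.active.pop(0)                 # steal the oldest voice
                out.append((0x80, oldest, 0))               # release it first
            self.active.append(note)
            out.append(msg)
        elif status == 0x80 or (status == 0x90 and velocity == 0):  # note-off
            if note in self.active:
                self.active.remove(note)
            out.append(msg)
        else:
            out.append(msg)                                 # pass everything else through
        return out
```

Note this sketch does not handle the sustain-pedal case from the question: to cover that, the filter would also have to track held notes after CC64 and send the stolen note-offs once the pedal is released.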

Hi @piattica :slightly_smiling_face:

Of course, even if one just plays, let us say, very “ordinary” diatonic four-part chords on a synthesiser, keeping the sustain pedal pressed during the transitions, the voice count can quickly ramp up out of control: at the very least it doubles the requirement relative to the keys pressed, that is eight voices, which for a resource-hungry algorithm is already a CPU-taxing task on a single chain/channel.

If you are experiencing this kind of performance bottleneck, I would advise, depending on your Zynthian hardware, resorting to solid but relatively simpler/thinner engines like OBxd, masking their subjective lack of sonic richness with CPU-efficient sound FX, of which there is certainly no shortage in the Zynth plugins repository.


I really like the phrase

[quote=“Aethermind, post:6, topic:9116”]
very “ordinary” diatonic four-part chords
[/quote]

And I’d like to consider, instead of “LV2 synth: how to limit CPU usage”, how can we increase the available CPU resources? Obviously the RPi5 is one (future) answer.

Does anybody know of studies or ‘experiments’ to see how many voices are needed at the peak for specific pieces of music with a given orchestration?


I refer the honourable gentleman to the Jump competition that took place here some time ago . . .


Hi @tunagenes ,

Well, this voice count is not so hard to figure out.

We can consider that a large late-romantic orchestra, including fully implemented strings, woodwinds and brass sections with divisi, plus a 16-voice choir, can easily reach 60 simultaneous parts at the climactic points.

To simulate this amount of polyphony convincingly, in the context of a virtual orchestration template, you need at least four voices for each line, in order to preserve a plausible sense of sonic presence, without cutting the release trails of the samples.

Thus you need approximately 240 synthesised voices, and I think it is not coincidental (beyond the obvious multiple of 8 bits) that Kontakt and similar sampling platforms usually run up to 256 voices, provided that the hosting computer has enough stamina to manage a 60-channel multitimbral sampling set of diverse instruments.
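As a sanity check, the voice-budget arithmetic above in a few lines (the figures are the estimates from this post, not measured data):

```python
# Rough voice budget for a late-romantic orchestral template
parts = 60            # simultaneous lines at the climax: divisi strings, winds, brass, 16-voice choir
voices_per_part = 4   # overlapping voices kept alive per line to preserve release tails

total_voices = parts * voices_per_part
print(total_voices)   # 240, comfortably inside a 256-voice engine cap
```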


Thanks for the info-insight.


I too have noticed that Surge tends to crash a lot. I can understand noise and glitches in the resulting audio being the result of a plugin being too CPU hungry, but I would have thought that any crashes are due to something else?

I’ve been whining for “Subzynthian” functionality for ages, as recently as earlier this week. I would like to be able to put two, three, four Pis with sound devices into a pretty box and have my main V5 be able to assign them chains and send performance data along.

Better and faster SoCs, or else multiple devices with central control, are the two ways to pack more engines into a single Zynthian performance. There are many better and faster SoCs, but none of them have the community support of the Pi, so that’s kind of a non-starter without huge budgets and actual systems programmers, and that’s why we end up with devices like the Montage we’ve been looking at elsewhere.

But as long as a single Pi is capable of running a single engine, it is a perfectly affordable choice to simply add a second headless Pi to any musician’s stage rig. I recall another thread where I read that the Pi4 actually can run Pianoteq at full internal sample rate, but that is about its limit, so in order to maintain versatility, they limited the sample rate for Pianoteq.

As it happens I own a second Pi4 and several Hifiberry devices; I would love to make that my dedicated pianoteq box, but keep it integrated into my Zynthian workflow. My other option is to run it on my laptop, and that’s just a whole-ass nightmare of extra wires and USB danglies.

QMidiNet can do some rather impressive things with this. You used to be able to allocate the 16 MIDI channels around the farm. I haven’t really thought about what breaking the 16 chains across the estate might allow.

I just enabled QMidiNet on a couple of machines to better understand how it works. In Zynthian, when QMidiNet is enabled, a single network MIDI port is created. This appears on the network as a common / merged point, i.e. any MIDI sent from one Zynthian will appear at the input of all others with QMidiNet enabled. The transport is transparent, so you need to ensure you are sending and receiving on appropriate channels.

At the receive end you enable QMidiNet and set the MIDI Device to the right mode, e.g. ACTI if you want to use stage mode, MULTI if you want multitimbral mode, etc. It may be easier to get it working by initially setting to MULTI so that you know which chain on the receive machine is driven by which MIDI channel. (Indeed you may feel this is the most appropriate mode for QMidiNet if you are simply slaving a device that you will not physically interact with much.)
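For anyone curious what is on the wire: as I understand it, QMidiNet sends raw MIDI bytes in UDP multicast datagrams, so a non-Zynthian sender can join in. A minimal Python sketch, assuming QMidiNet’s default multicast group `225.0.0.37` and port `21928` (verify these against your own QMidiNet settings):

```python
import socket

GROUP, PORT = "225.0.0.37", 21928   # assumed QMidiNet defaults; check your setup

def note_on(channel, note, velocity):
    """Build a raw 3-byte MIDI note-on message; channel is 1-16."""
    return bytes([0x90 | (channel - 1), note & 0x7F, velocity & 0x7F])

msg = note_on(3, 60, 100)           # middle C on MIDI channel 3

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
try:
    sock.sendto(msg, (GROUP, PORT)) # "send and pray": UDP gives no delivery guarantee
except OSError:
    pass                            # e.g. no route to the multicast group when offline
```

On the receiving Zynthian in MULTI mode, this would drive whichever chain listens on MIDI channel 3.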

Unfortunately I could not make QMidiNet work at the send end with the current stable release. It works in the development (chain_manager) branch.

[Edit] Also worth noting that I got a sick note almost immediately. This is a “send and pray” delivery mechanism and even over my lightly loaded, wired LAN I suffered packet loss.


I’ve always used it on wired networks, often point-to-point with a separate USB Ethernet connection going off to the outside world rather than sharing a switch or hub. It certainly got some heavy abuse in the early days to get it reliable, and became fit-and-forget once it shed the occasional session of MIDI howl-around.

I did do some testing at the time with fast sequences that seemed reliable and repeatable. Sadly I haven’t given it much mistreatment recently, and even then I restricted myself to mostly test rigs over actively musical use.