I wanted to put together a basic voice sound with multiple parameter inputs, to demo hand-gesture-based control using a tiny 3D camera. I really just have a couple of Pi 3 A+ boards around, with one USB port, no Ethernet and half the RAM. I don’t need many simultaneous voices.
These can then be mapped via the MIDI Learn facility: select the MIDI Learn function (L/S encoder short press), alter the Zynthian control you want to allocate, and then operate the MIDI source device. It should map.
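To give a feel for what the MIDI source device would need to send during that mapping step, here is a minimal sketch that turns a normalized gesture reading into a standard MIDI Control Change message. The function name, CC number, and channel are my own illustrative choices, not anything Zynthian-specific; any 3-byte CC message on the right port should trigger the learn.

```python
def gesture_to_cc(value, cc=1, channel=0):
    """Build a 3-byte MIDI Control Change message from a 0.0-1.0 gesture value.

    `cc` and `channel` are arbitrary example values; pick whatever your
    controller setup uses. 0xB0 is the Control Change status nibble.
    """
    data = max(0, min(127, round(value * 127)))   # clamp to 7-bit MIDI range
    status = 0xB0 | (channel & 0x0F)              # CC on the given channel
    return bytes([status, cc & 0x7F, data])

# e.g. hand at mid-height -> CC 1, value 64, channel 1
msg = gesture_to_cc(0.5)
```

Sending these bytes repeatedly from the gesture tracker (via a USB-MIDI or RTP-MIDI transport) while MIDI Learn is armed should let Zynthian latch onto the CC number.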
You shouldn’t need the browser to do that, but it’s all a bit read-only from that point of view.
You need the browser to chop and change units.
The browser window takes a very long time to appear, but be patient.
Does it fail only within a MOD-UI chain, or is this an overall failure?
The testing branch is probably the one most likely to represent the current implementation.
It would be useful to submit a bug report describing the particular problems.