I’m wondering if Zynthian has any HTTP API we could use for building interesting stuff.
I’d like to build a “simpler” Zynthian box for my e-drums kit, and I was wondering whether writing a mobile app to control Zynthian could be an option. That would make for a lighter and cheaper box.
Any comments?
My question here is whether it would be “easy” to do with the current code, or whether lots of code would need rewriting to add a new entry point (the current UI, using encoders and a screen, is the only entry point as far as I know).
mod-ui (which I think Zynthian includes) has a web service;
mod-host has TCP support.
However, I believe none of these entry points are documented.
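For reference, talking to mod-host over TCP looks roughly like this. A minimal sketch, assuming mod-host is running with its default command port (5555); the plugin is the standard LV2 example amp, used purely for illustration:

```python
import socket

# One persistent connection to mod-host's TCP command port (default 5555).
sock = socket.create_connection(("localhost", 5555))

def mod_host_cmd(command):
    """Send one text command and return mod-host's raw response (e.g. 'resp 0')."""
    sock.send(command.encode("utf-8"))
    return sock.recv(1024).decode("utf-8", errors="replace")

# Load the LV2 example amp as instance 0, then set its gain parameter.
print(mod_host_cmd("add http://lv2plug.in/plugins/eg-amp 0"))
print(mod_host_cmd("param_set 0 gain 3.0"))
```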
–
My personal project, Pedal Pi, was designed to be easily extensible. It already has a web service (http://pedalpi.github.io/WebService/); however, the scope of Pedal Pi only covers LV2 audio plugins.
Because Zynthian has multiple engines, providing a consistent API for everyone is a big challenge.
The point I can contribute to the conversation is the Pedal Pi architecture. Throughout the development, I’ve done a number of refactorings so that it can be as extensible as it is today. Maybe the Zynthian folks can look at it and get inspired.
One challenge I still have is to rearrange mod-ui so that it can be just one of several LV2 plugin control interfaces (rather than the only one).
I think we could start a “POC” with some basic functionality, just to explore whether this is a valid idea or not.
I don’t have any technical knowledge of Zynthian (just basic user level), but I’m a software developer, so I could be useful with some guidance.
My idea was to start adding a basic layer and see what happens (in my box). I’m not talking about a web UI, but an API. Of course, if Zynthian provides an API, any client could connect to it: web UI, mobile apps, …
As Zynthian supports multiple engines, we could create a resource for every engine, each with its own contract (it’s just an idea; I don’t know which approach is best).
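Just to make that concrete, a hypothetical sketch of per-engine resources using Tornado (which I believe the Zynthian web stack already ships via mod-ui). Every route and engine name here is illustrative, not a proposal for the real contract:

```python
import tornado.ioloop
import tornado.web

class EngineListHandler(tornado.web.RequestHandler):
    def get(self):
        # would be filled in from the running Zynthian layers/engines
        self.write({"engines": ["zynaddsubfx", "setbfree", "mod-ui"]})

class EngineParamHandler(tornado.web.RequestHandler):
    def put(self, engine_id, param):
        # each engine would validate against its own parameter contract
        value = float(self.get_body_argument("value"))
        self.write({"engine": engine_id, "param": param, "value": value})

app = tornado.web.Application([
    (r"/engines", EngineListHandler),
    (r"/engines/([^/]+)/params/([^/]+)", EngineParamHandler),
])
app.listen(8080)
tornado.ioloop.IOLoop.current().start()
```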
I don’t have lots of time (I have a 12-month-old baby), but I’ll do my best to collaborate on this project.
You guys have better knowledge and can tell me whether this could be done easily or not (at least a first iteration).
My first step is trying to learn how Zynthian works (internally), just to figure out which might be the best approach.
I’ve put together a few APIs in various day jobs, and if I’ve drawn one conclusion from the process, it’s that people tend to become rather over-enthused by the technology while losing sight of the requirement.
Yes, it is possible to completely represent the entire functionality of a device over one or a series of endpoints, and by careful use of RESTful interfaces and MUCH (much, much) security provide an experience that represents the actions of the device. But as with (nearly) all computing, you are really only modelling a series of human/mechanical interactions, and often the interface loses much of the subtlety the original device had.
One could, for example, produce an API to control a Minimoog, but I suspect that for a lot of proficient users it might provide nothing more than a clever way of doing a remote control panel, or a patch library. And then how would you deal with the device’s control panel when each new patch loads? (Hook, current, or replace all the pots with encoders and LEDs…?) Would it still be the same device after the API had taken over?
The base interaction level of the Zynthian is the XWindows-based control interface, which is simplistic (in a good way). Certainly there are elements above this (MOD-UI springs to mind), but at the base level we have layers, engines, parameters, snapshots and some housekeeping.
You would certainly want to present these, but I would suggest an approach I’ve had some success with: using accessibility techniques, i.e. trying to define the simplest level of interaction that will allow you to control the device.
If we can represent the simple 4 knobs and the short, medium and long pushes with simple commands (I’ve always said MIDI), then the entire interface could be driven by a VERY simple protocol that we could operate across continents if we felt so inclined.
That way the GUI becomes the base from which the API is constructed, and we keep our user interface as a controlling device that can be built upon, rather than risking the API dragging the original function of the device in directions that don’t really address the new requirement and damaging the tightly defined base functionality that has already been provided.
Yes, I put the Select & Back presses under the thumbs on E & C, as I suspected those would be the keys people would probably use most if you actually did drive the interface from a MIDI keyboard.
Perhaps some ghastly, obscure, dissonant jazz chord could enable and disable the mechanism.
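To make the idea concrete, here is a minimal sketch using mido: one CC per encoder, and notes for the switch pushes, with velocity standing in for short/medium/long. Every CC number, note number and port name below is made up for illustration; none of it is a Zynthian convention:

```python
import mido

# Hypothetical CC assignments for the four Zynthian encoders.
KNOB_CCS = {"layer": 20, "back": 21, "snapshot": 22, "select": 23}

out = mido.open_output("Zynthian MIDI 1")  # assumed port name

def turn_knob(name, value):
    """Emulate turning one of the four encoders (value 0-127)."""
    out.send(mido.Message("control_change",
                          control=KNOB_CCS[name], value=value))

def push_switch(note, duration="short"):
    """Emulate a switch push; velocity stands in for push length."""
    velocity = {"short": 30, "medium": 70, "long": 120}[duration]
    out.send(mido.Message("note_on", note=note, velocity=velocity))
    out.send(mido.Message("note_off", note=note))

turn_knob("select", 64)
push_switch(52, "long")   # E3: "Select" under the thumb, as suggested above
push_switch(48, "short")  # C3: "Back"
```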
I am not missing an API, but a generic layer in the architecture: one where you can map CCs to knobs, and which fires a callback to a WebSocket or Server-Sent Events channel when something changes. Not necessarily every MIDI action, but potentially any of them.
I think that designing and implementing a Zynthian OSC API is the way to go. OSC is perfectly suited for the task and is almost self-explanatory. It can be used locally or remotely, and it’s really easy to use.
After that, we can create wrappers for everything we want.
Are you sure? Does OSC work in both directions?
Say a program change is sent via MIDI, or a snapshot is changed, or a CC value in an engine: I don’t think everything can be done with OSC.
We need a wrapping layer that maps MIDI, zyncoder and OSC, with some publish/subscribe mechanism. And we need an extended architecture so that we can add more physical knobs, switches and LEDs.
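Something along these lines, perhaps: a minimal sketch of such a publish/subscribe layer, where MIDI, zyncoder and OSC sources all publish onto one bus and any front end (web client, OSC client, LEDs) subscribes. All names here are hypothetical:

```python
from collections import defaultdict

class EventBus:
    """Tiny publish/subscribe dispatcher decoupling sources from front ends."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, **event):
        for callback in self._subscribers[topic]:
            callback(**event)

bus = EventBus()

# A web/OSC front end subscribes to parameter changes...
bus.subscribe("param_changed", lambda **e: print("notify client:", e))

# ...and a zyncoder turn or incoming MIDI CC publishes them.
bus.publish("param_changed", engine="ZynAddSubFX", cc=7, value=100)
```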
Only one direction: you can change the behaviour from the outside.
But you don’t have callbacks.
If somebody else (the zyncoder) is changing something, the OSC client won’t get the result unless it is polling.
This would be so last century.
But I am just saying that OSC on its own is not enough, just as a REST interface alone isn’t.
Nowadays you need something reactive as well.
Of course you have them. OSC is an asynchronous beast: you have a “send port” and a “receive port”.
Using something like liblo, it’s very easy to listen for every “callback path” you desire. I’m doing it all the time in the ZynAddSubFX engine. Take a look:
You will find a lot of methods starting with “cb_osc” (cb = callback).
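For anyone who wants to experiment, a minimal sketch of both directions with pyliblo (the Python bindings for liblo). The port numbers and the /zynthian/volume path are hypothetical, just to show the send-port / receive-port pattern:

```python
import liblo

# Receive side: listen on our own port for "callback paths".
server = liblo.Server(9000)

def cb_osc_volume(path, args):
    # called by server.recv() whenever /zynthian/volume arrives
    value, = args
    print(f"volume changed to {value}")

server.add_method("/zynthian/volume", "f", cb_osc_volume)

# Send side: a separate target address for outgoing commands.
target = liblo.Address(9001)
liblo.send(target, "/zynthian/volume", 0.8)

# Dispatch incoming messages (100 ms timeout per call).
while True:
    server.recv(100)
```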
I’ve thought a lot about this subject and I haven’t found a better tool for the task than OSC. Anyway, I’m open to other proposals that fit the specifications:
Remote and local access
Lightweight
Asynchronous. It should be suitable for RT interaction.
Easy to use and hack. As self-explanatory as possible.
Some degree of standardisation is a plus.
And of course: reliable, stable, well-tested technology, etc…
OK, great.
So my task will be to get Tornado working with RxPY and OSC.
How do you plan to implement it? Especially the callbacks?
Isn’t it the same in the end? A reactive Tornado doing REST and Server-Sent Events?
And we’re just calling it OSC in the end?
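For illustration, a minimal sketch of the shape such a bridge could take: a catch-all liblo callback pushes every incoming OSC message to web clients as Server-Sent Events from Tornado. All ports and paths are assumptions, and the RxPY wiring is left out:

```python
import asyncio
import json
import liblo
import tornado.ioloop
import tornado.web

class EventsHandler(tornado.web.RequestHandler):
    """SSE endpoint: each connected client gets OSC callbacks pushed to it."""
    waiters = set()

    async def get(self):
        self.set_header("Content-Type", "text/event-stream")
        self.set_header("Cache-Control", "no-cache")
        self.queue = asyncio.Queue()
        EventsHandler.waiters.add(self)
        try:
            while True:
                event = await self.queue.get()
                self.write(f"data: {json.dumps(event)}\n\n")
                await self.flush()
        finally:
            EventsHandler.waiters.discard(self)

# Catch-all OSC handler: forward every incoming message to the web clients.
osc_server = liblo.Server(9000)

def cb_osc_any(path, args):
    for waiter in EventsHandler.waiters:
        waiter.queue.put_nowait({"path": path, "args": args})

osc_server.add_method(None, None, cb_osc_any)

def poll_osc():
    # drain pending OSC messages without blocking Tornado's IOLoop
    while osc_server.recv(0):
        pass

app = tornado.web.Application([(r"/events", EventsHandler)])
app.listen(8888)
tornado.ioloop.PeriodicCallback(poll_osc, 10).start()
tornado.ioloop.IOLoop.current().start()
```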