Is there any Zynthian API?

Hi all,

I’m wondering if Zynthian has any HTTP API we could use for building interesting stuff :wink:.

I’d like to build a “simpler” Zynthian box for my e-drums kit, and I was wondering whether writing a mobile app to control Zynthian could be an option. That would make for a lighter and cheaper box.

Any comments?

My question here is whether it would be “easy” to do with the current code, or whether we would need to rewrite lots of code to add a new entry point (the current UI, using encoders and a screen, is the only entry point, as far as I know).


There is nothing yet, although we are aware that we need something. We have a kind of web-ui in mind.

We should make it reactive.

A “Zynthian OSC API” has been on the TODO list for some time. Eventually it should get done, although currently it’s not a high priority.

I think Zynthian:

  • has some support for MIDI;
  • includes mod-ui (I believe), which has a web service;
  • mod-host supports control over TCP.

However, I believe none of these entry points is documented.

My personal project, Pedal Pi, was designed to be easily extended. It already has a web service; however, the scope of Pedal Pi only covers LV2 audio plugins.

Because Zynthian has multiple engines, providing a consistent API for everyone is a big challenge.

The point where I can contribute to the conversation is the Pedal Pi architecture. Throughout its development I’ve done a number of refactorings so that it is as extensible as it is today. Maybe the Zynthian people can look at it and get inspired.

One challenge I have is to rearrange mod-ui so that it can be one LV2 plugin control interface among others (rather than the only one).


Sorry - I dropped this due to overload problems and so many different ideas running around in my head… braindead :wink:


I think we could start a “POC” with some basic functionality, just to explore if this is a valid idea or not.

I don’t have any technical knowledge (just basic user level) about Zynthian, but I’m a software developer, so I can be useful with some guidance.

My idea was to start adding a basic layer and see what happens (in my box). I’m not talking about a web-ui, but an API. Of course, if Zynthian provides an API, any client could connect to it: web-ui, mobile apps, …
As Zynthian supports multiple engines, we could create a resource for every engine, each with its own contract (it’s just an idea; I don’t know which approach is best).
I don’t have a lot of time (I have a 12-month-old baby), but I’ll do my best to collaborate on this project.

You guys have better knowledge, so you can tell me whether this could be done easily or not (at least a first iteration).
My first step is to learn how Zynthian works internally, just to figure out what the best approach might be.


I’ve put together a few APIs in various day jobs, and if I’ve drawn one conclusion from the process, it’s that people tend to become rather over-enthused by the technology while losing sight of the requirement.
Yes, it is possible to completely represent the entire functionality of a device over one or a series of endpoints, and by careful use of RESTful interfaces and MUCH (much, much) security provide an experience that represents the actions of the device. But as with (nearly) all computing, you are really only modelling a series of human/mechanical interactions, and the interface often loses much of the subtlety the original device had.

One could, for example, produce an API to control a Minimoog, but I suspect that for a lot of proficient users it would provide nothing more than a clever way of doing a remote control panel, or a patch library. And then how would you deal with the device’s control panel when each new patch loads? (Hook the pots, keep their current positions, or replace them all with encoders and LEDs…?) Would it still be the same device after the API had taken over?

The base interaction level of the Zynthian is the X Windows based control interface, which is simplistic (in a good way :smiley:). Certainly there are elements above this (MOD-UI springs to mind), but at the base level we have layers, engines, parameters, snapshots and some housekeeping.
You would certainly want to present these, but I would suggest an approach I’ve had some success with.

Using Accessibility techniques.

Try to define the simplest level of interaction you can provide that will allow you to control the device.

Some related discussion.

If we can represent the simple 4 knobs and the short, medium and long pushes with simple commands (I’ve always said MIDI :smiley:), then the entire interface could be driven by a VERY simple protocol that we could operate across continents if we felt so inclined.

That way we have a mechanism that sets the GUI as the base from which the API is constructed, and we keep our user interface as a controlling device that can be built upon, rather than risking the API dragging the original function of the device in directions that might not really address the new desire, damaging what has already been done to provide a tightly defined base functionality.

I would take the 4 groups:
3 black
2 black and C
3 white beneath C
3 white above C

Then you can have it on every octave.

Yes, I put the Select & Back presses under the thumbs on E & C, as I suspected those would be the keys people would use most if you actually did drive the interface from a MIDI keyboard.
Perhaps some ghastly obscure dissonant jazz chord could enable and disable the mechanism :smiley:
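As a purely hypothetical sketch of the idea: key the mapping on pitch class, so the same layout repeats on every octave. Only the C → Select and E → Back assignments come from the post above; every other assignment and name below is a made-up example.

```python
# Hypothetical sketch: map MIDI notes to the four Zynthian encoders and
# the Select/Back switches. Only C -> Select and E -> Back follow the
# post; the remaining assignments are illustrative assumptions.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Pitch class -> UI action; keying on pitch class makes the mapping
# available on every octave.
KEYMAP = {
    "C":  "SELECT_PRESS",  # under the thumb: probably the most-used key
    "E":  "BACK_PRESS",
    "C#": "LAYER_DOWN",  "D#": "LAYER_UP",
    "F#": "BACK_DOWN",   "G#": "BACK_UP",   "A#": "SNAPSHOT_DOWN",
    "D":  "SELECT_DOWN", "F":  "SELECT_UP",
    "G":  "LEARN_DOWN",  "A":  "LEARN_UP",  "B":  "SNAPSHOT_UP",
}

def note_to_action(midi_note):
    """Translate a MIDI note number (0-127) into a UI action, any octave."""
    return KEYMAP.get(NOTE_NAMES[midi_note % 12])
```

A note-on for middle C (60), or any other C, would then trigger a Select press, and some filter (that ghastly dissonant jazz chord) could gate the whole mechanism on and off.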

What I’m missing is not an API but a generic layer in the architecture: one where you can map a MIDI CC to a knob, and which fires a callback (to a WebSocket or Server-Sent Events endpoint) when something changes. Not every MIDI action needs to be exposed, but potentially any of them could be.
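A minimal sketch of such a generic layer, assuming nothing about the real Zynthian internals (all names below are made up): any input (MIDI CC, zyncoder, OSC) publishes change events into one bus, and any consumer (a WebSocket or Server-Sent Events endpoint, the GUI) subscribes with a callback.

```python
# Minimal publish/subscribe sketch (illustrative names only): inputs
# publish change events, consumers subscribe with a callback.

class EventBus:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        """Register a callback(event); e.g. a WebSocket/SSE push function."""
        self._subscribers.append(callback)

    def publish(self, source, path, value):
        """Called by any control layer (MIDI, zyncoder, OSC) on a change."""
        event = {"source": source, "path": path, "value": value}
        for callback in self._subscribers:
            callback(event)

bus = EventBus()
seen = []
bus.subscribe(seen.append)                  # stand-in for a web client
bus.publish("midi", "/layer/0/volume", 98)  # e.g. an incoming CC#7
```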

Yes that’s it. A single choke point :smiley:

Hi guys!

I think that designing and implementing a Zynthian OSC API is the way to go. OSC is perfectly suited to the task and is almost self-explanatory. It can be used locally or remotely, and it’s really easy to use.

After that, we can create wrappers for everything we want:

  • MIDI
  • Web UI
  • CLI
  • etc.

Kind Regards,

Are you sure? Does OSC work in both directions?
Suppose a program change is sent via MIDI, or a snapshot is changed, or a CC value in an engine changes. I don’t think that everything can be done with OSC.
We need a wrapping layer that maps MIDI, zyncoder and OSC, with some publish/subscribe mechanism. And we need an extended architecture so that we can add more physical knobs, switches and LEDs.

This could be an example of how the OSC API should look:

/layer/new i s s s

Create New Layer with parameters:

  • i => MIDI channel
  • s => engine
  • s => bank
  • s => preset

Return Success:

/layer/new/success i

  • i => index of new layer

or Error:

/layer/new/error s

  • s => error message
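For the curious, here is roughly what a `/layer/new i s s s` message looks like on the wire. This is a minimal pure-Python encoder sketch following the OSC 1.0 encoding rules (null-terminated strings padded to a 4-byte boundary, big-endian int32); the engine, bank and preset names are made up.

```python
import struct

def _pad(raw: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary (OSC 1.0 string rule)."""
    raw += b"\x00"
    return raw + b"\x00" * (-len(raw) % 4)

def osc_message(address, *args):
    """Encode an OSC message with int and str arguments only (sketch)."""
    tags, payload = ",", b""
    for arg in args:
        if isinstance(arg, int):
            tags += "i"
            payload += struct.pack(">i", arg)   # big-endian int32
        else:
            tags += "s"
            payload += _pad(arg.encode())
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# /layer/new i s s s -> create a layer on MIDI channel 0
msg = osc_message("/layer/new", 0, "ZynAddSubFX", "Bank1", "Preset1")
```

Decoding on the server side is the mirror image, and libraries such as liblo already handle both directions.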


Request layer list.

Returns a sequence of:

/layer/#index#/info i s s s

  • i => MIDI channel
  • s => engine
  • s => bank
  • s => preset


Request info for a layer.


/layer/#index#/info i s s s

  • i => MIDI channel
  • s => engine
  • s => bank
  • s => preset

/layer/#index#/chan i

/layer/#index#/bank s

/layer/#index#/preset s

/layer/#index#/route s


/layer/#index#/transpose s

Set layer parameters (chan, bank, preset, audio routing, transposing, etc.)

Return Success:


or Error:

/layer/#index#/#param#/error s

  • s => error message

/layer/#index#/delete i

Delete a layer.

Return Success:


or Error:

/layer/#index#/delete/error s

  • s => error message


Delete all layers.

Return Success:


or Error:

/layer/*/delete/error s

  • s => error message

And so on with the rest:

  • Snapshots: loading, saving, renaming, …
  • Controllers: get, set parameter list, info and values
  • Special Features: Audio Recording, MIDI profiles, MIDI rules, etc.

Kind Regards,
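As a side note on implementation: the `#index#` placeholder in the addresses above suggests a small routing layer on the server side. A hypothetical sketch (the handler, its name and its return values are made up, not Zynthian code):

```python
import re

# Hypothetical routing sketch: "#index#" in a registered address becomes
# an integer capture, passed to the handler as its first argument.

routes = {}

def route(pattern):
    regex = re.compile("^" + pattern.replace("#index#", r"(\d+)") + "$")
    def decorator(fn):
        routes[regex] = fn
        return fn
    return decorator

@route("/layer/#index#/preset")
def set_preset(index, preset):
    # A real handler would talk to the engine and reply with
    # .../success or .../error as specified above.
    return ("set_preset", index, preset)

def dispatch(address, *args):
    """Find the handler whose pattern matches the incoming address."""
    for regex, fn in routes.items():
        match = regex.match(address)
        if match:
            return fn(int(match.group(1)), *args)
    return ("error", "no such address: " + address)
```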

Yes :wink: IMHO the public API of Zynthian should be OSC.
There are other successful projects that use this approach. For instance:


It only works in one direction: you can change the behaviour from the outside, but you don’t have callbacks.
If somebody else (the zyncoder) changes something, the OSC client won’t get the result unless it is polling.
That would be so last century :slight_smile:

I’m just saying that OSC on its own is not enough, the same as a REST interface alone would not be.
Nowadays you need something reactive as well.

TouchOSC for Android works in both directions. Is this the same OSC it’s derived from?

When I say reactive, I mean this:

TouchOSC might poll.

Of course you have them. OSC is an asynchronous beast: you have a “send port” and a “receive port”.
Using something like liblo, it’s very easy to listen for every “callback path” you desire. I’m doing it all the time in the ZynAddSubFX engine :wink: Take a look:

You will find a lot of methods starting with “cb_osc” (cb = callback).
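The “cb_osc” naming convention can be sketched without the liblo dependency: a base class auto-registers every method named `cb_osc_<name>` as the handler for the matching `/<name>` path. The class and method names below are illustrative assumptions, not Zynthian or ZynAddSubFX code.

```python
# Illustrative sketch of the "cb_osc_*" convention: every method named
# cb_osc_<name> becomes the handler for the "/<name>" path.

class OscCallbacks:
    def __init__(self):
        self.handlers = {
            "/" + name[len("cb_osc_"):]: getattr(self, name)
            for name in dir(self)
            if name.startswith("cb_osc_")
        }

    def on_message(self, path, *args):
        """Entry point the OSC receive loop would call for each message."""
        handler = self.handlers.get(path)
        return handler(*args) if handler is not None else None

class Engine(OscCallbacks):
    def __init__(self):
        super().__init__()
        self.volume = 0

    def cb_osc_volume(self, value):   # handles the "/volume" path
        self.volume = value
        return value

engine = Engine()
engine.on_message("/volume", 90)
```

With python-liblo the equivalent wiring would go through `server.add_method(path, types, callback)` instead of the `dir()` scan.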

I’ve thought a lot about this subject and I haven’t found a better tool for the task than OSC. Anyway, I’m open to other proposals that fit the specification:

  • Remote and local access
  • Lightweight
  • Asynchronous. It should be suitable for RT interaction.
  • Easy to use and hack. As self-explained as possible.
  • Some degree of standardisation is a plus :wink:
  • And of course: reliable, stable, well tested technology, etc…



OK, great.
So my task will be to get Tornado working with RxPY and OSC :slight_smile:

How do you plan to implement it? Especially the callbacks?
Isn’t it the same thing in the end? A reactive Tornado doing REST and Server-Sent Events?
And we’re just calling it OSC in the end?
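One stdlib-only way to sketch the bridge this question describes, with no claim that this is how it should be implemented: an OSC callback publishes into per-client asyncio queues, and each connected web client awaits its own queue. In a real build the consumer would be, e.g., a Tornado WebSocketHandler pushing messages out; all names below are assumptions.

```python
import asyncio

# Sketch: an OSC callback fans events out to per-client asyncio queues;
# each web client (WebSocket/SSE handler) awaits its own queue.

clients = []

def cb_osc_event(path, value):
    """Called by the OSC receive loop when anything changes."""
    for queue in clients:
        queue.put_nowait((path, value))

async def fake_websocket_client(received):
    queue = asyncio.Queue()
    clients.append(queue)
    # In Tornado this would be: self.write_message(await queue.get())
    received.append(await queue.get())

async def main():
    received = []
    task = asyncio.create_task(fake_websocket_client(received))
    await asyncio.sleep(0)               # let the client subscribe first
    cb_osc_event("/layer/0/preset", "Strings")
    await task
    return received

events = asyncio.run(main())
```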