Web app to extend the Zynthian UI

Hi all. I’m interested in developing a web app for Zynthian which lets a browser window act as a live, extended second screen. Initially, this would offer an expanded set of dials and sliders for the current instruments, letting you map MIDI to different controls like on the Zynthian itself. It would also let you change and store presets.

I’m a web developer, with a lot of experience with UI, UX and web apps. I’m quite interested in trying to prototype a responsive PWA using my current favourite frameworks: Vue.js and Tailwind CSS.

It could be built up as a library of reusable UI components (a dial, a slider, a select list, etc.) so that it could be extended to cover other functionality. This would of course require an API for it to actually work, and I’m not really a Python dev :grinning:

Are there any threads I should read?
Anyone working already on something like this?
Anyone talked about decoupling the UI?

I have a laptop with a touch screen so this is maybe a bit more appealing to me than some of you :slight_smile:


We lack a complete and well structured API for that, but you could use a combination of:

  • MIDI CC (using the MIDI learning feature)

  • CUIA (via MIDI note-on events or OSC).

http://wiki.zynthian.org/index.php/Zynthian_UI_Users_Guide#CUIA:_Callable_UI_Actions
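
To make the CUIA-over-OSC route concrete, here is a minimal, hedged sketch of serialising and sending one such call from a script. The port number (1370), the `/CUIA/` address space and the `SELECT_UP` action name are assumptions drawn from the wiki page above; check them against your build before relying on them.

```python
import socket
import struct


def osc_pad(data: bytes) -> bytes:
    """Pad to a 4-byte boundary with at least one NUL, per OSC 1.0."""
    return data + b"\x00" * (4 - len(data) % 4)


def osc_message(address: str, *args: int) -> bytes:
    """Serialise a minimal OSC message with int32 arguments only."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "i" * len(args)).encode("ascii"))
    for value in args:
        packet += struct.pack(">i", value)  # int32, big-endian
    return packet


def send_cuia(action: str, host: str = "zynthian.local", port: int = 1370) -> None:
    """Fire-and-forget a CUIA call at the Zynthian UI's OSC port (assumed 1370)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message("/CUIA/" + action), (host, port))


# e.g. send_cuia("SELECT_UP")
```

Because OSC over UDP is fire-and-forget, this covers the control direction only; feedback needs something more, as discussed below in the thread.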

Regards,

Thanks, I’ll look into this.

We discussed this in the past on this forum.
I planned to use the webconf’s existing websocket connection for this.
I would have started already if we had a better OSC API.


Thanks for the extra info. I’ll investigate and see if I can create a primitive REST API, so that I can start messing around with my idea of a JS web app controller.

If you don’t hear anything from me again it’s either taking me too long or I’ve given up :wink:

I don’t think it needs an extra REST API.
For the typical REST-style communication we should use OSC calls (which might be exposed as REST).
And if I understood correctly, the OSC server should be able to handle the other direction as well.
I’m not sure whether this means it communicates via websocket.
This OSC server on the zynthian is the key…

I think displaying the currently loaded snapshot in the webconf would be a first simple use case. And when you change it in the UI, the webconf updates without a refresh.
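
A minimal sketch of that “no refresh” idea, using a plain observer pattern. The subscriber callback stands in for an open webconf websocket, and all names here are illustrative, not Zynthian’s actual classes:

```python
from typing import Callable, List


class SnapshotState:
    """Hypothetical shared state: the UI writes, subscribers get pushed to."""

    def __init__(self) -> None:
        self.current = None
        self._subscribers: List[Callable[[str], None]] = []

    def subscribe(self, fn: Callable[[str], None]) -> None:
        """Register a listener, e.g. one per open webconf websocket."""
        self._subscribers.append(fn)

    def load(self, name: str) -> None:
        """Called when the UI loads a snapshot; notifies every client at once."""
        self.current = name
        for fn in self._subscribers:
            fn(name)  # in practice: ws.write_message(json.dumps({...}))
```

The point of the pattern is that the browser never polls: the state change itself drives the push.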


OSC in a browser. . .

Seems like a sensible starting point.
If we got status across as well we would have the starting point of pretty much everything…

And a remote browser based zynthian mixer panel would sit rather well on a tablet . :slight_smile: . …

Etcetera, etcetera. . .

How much of the Zynthian is amenable to OSC? Could we implement the two components described here with relative ease, and how would we configure the receiving end? It would seem that OSC is probably the best standard to alight on, so is this another exercise in definition and mapping first?
Presumably we need a protocol at the GUI level, but where exactly does the control plane lie? Do we send LOAD Snapshot, or Enc LS Long Press, or allow both?

The order in which we allow MIDI messages, CUIA and OSC has some prioritising to consider, unless someone has already been thinking that way :slight_smile:

Interesting comment from the author of Liblo, which Zynthian uses for OSC:

Already liblo supports a way to access serialized/deserialized data, perhaps a standard method to provide a transport “glue” might help (eg a callback that allows to pass stream data on to another library), along with some examples of usage with an existing HTTP/websockets library, would be a more realistic approach.
Link: https://github.com/radarsat1/liblo/pull/75#issuecomment-481684411
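
To make the quoted idea concrete, here is a hypothetical sketch of such transport “glue”: the OSC layer exchanges already-serialised packets through plain callbacks, so the same code could sit behind a UDP socket or a websocket handler. Nothing here is liblo’s actual API; it only illustrates the shape of the callback suggested in the comment.

```python
from typing import Callable, Dict


class OscTransportGlue:
    """Route serialised OSC packets between an OSC layer and any byte
    transport (UDP socket, websocket, ...) via plain callbacks."""

    def __init__(self, send: Callable[[bytes], None]) -> None:
        self._send = send  # outbound transport, e.g. a websocket's write method
        self._handlers: Dict[str, Callable[[bytes], None]] = {}

    def on(self, address: str, handler: Callable[[bytes], None]) -> None:
        """Register a handler for one OSC address."""
        self._handlers[address] = handler

    def feed(self, packet: bytes) -> None:
        """Inbound path: dispatch a serialised packet by its OSC address
        (the address is the NUL-terminated string at the packet start)."""
        address = packet.split(b"\x00", 1)[0].decode("ascii", "replace")
        handler = self._handlers.get(address)
        if handler is not None:
            handler(packet)

    def emit(self, packet: bytes) -> None:
        """Outbound path: hand a serialised packet to the transport."""
        self._send(packet)
```

Swapping the `send` callback is all it takes to move the same OSC traffic from UDP to a websocket, which is exactly the decoupling the liblo author describes.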

I think we would need exactly this: something to attach the active OSC server to wss://zynthian.local:443 that any web app could connect to. We’d have to generate an SSL certificate (using certbot if we’re using Apache, or something similar), and work out a way to handle user authentication*. I gather adding security to the OS is already on the to-do list.

*JWT token on the websocket? Something like https://gist.github.com/jfromaniello/8418116

Now I don’t know exactly how much we could do over this, or whether we’d need to connect MIDI control to it for something like a mixer panel to work in a browser (this is exactly the kind of thing I was thinking of, @wyleu :+1:)

Apologies if I’m not understanding things correctly. I need to keep digging around in the code and reading up on the technologies already in place.

Keep in mind that webconf opens a websocket already, and that a web app can’t have more than one open to a single host.
Hence we need to use that one.

The Browser websocket stuff . …

The zynthian end … . .

We would want to provide recognised external channels: QWERTY, MIDI and CUIA, mingled into the existing stream. Could we expand one of these channels, or do we want to establish a new ‘type’ in the existing codebase? Would we want to bring GUI-like commands (long/short presses, etc.) into the mix?

It starts to look like several busy event loops … :slight_smile:
Can we establish multiple deferred methods to handle the API? Unless we are

What “feedback” functions do you think should be implemented?

For instance:

  • Get current layer
  • Get layer list
  • Get layer info
  • Get bank list
  • Get preset list
  • Get layer controllers
  • etc.

JSON would be a good format for returning the info, although other formats could be OK too.
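
For illustration, a hypothetical JSON request/reply envelope for calls like these might look as follows. The method names and layer data are made up, not Zynthian’s actual API; an `id` field lets replies be matched to requests when everything shares one websocket:

```python
import json

# Illustrative state only -- stand-in for the real engine/layer model.
STATE = {
    "layers": [
        {"id": 0, "engine": "ZynAddSubFX", "bank": "Pads", "preset": "Warm Pad"},
        {"id": 1, "engine": "FluidSynth", "bank": "GM", "preset": "Grand Piano"},
    ],
    "current": 0,
}

# Hypothetical feedback methods, one per bullet above.
HANDLERS = {
    "get_current_layer": lambda p: STATE["layers"][STATE["current"]],
    "get_layer_list": lambda p: [layer["id"] for layer in STATE["layers"]],
    "get_layer_info": lambda p: STATE["layers"][p["layer"]],
}


def handle(raw: str) -> str:
    """Dispatch one JSON request and return a JSON reply envelope."""
    req = json.loads(raw)
    handler = HANDLERS.get(req.get("method"))
    if handler is None:
        return json.dumps({"id": req.get("id"), "error": "unknown method"})
    return json.dumps({"id": req.get("id"), "result": handler(req.get("params", {}))})
```

A browser client would send e.g. `{"id": 1, "method": "get_layer_list"}` and key the reply off its `id`.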

Regards,

To avoid doing everything twice, it would seem sensible to provide a similar experience to the existing GUI within the webconf. Obviously we would want the ability to select particular areas of functionality (available samples, for instance) without the overhead of a complete status dump on every event in the event loop. But if we can possibly avoid “GUI says four layers, webconf says three”, then that should be our overriding concern. Nothing unsettles more than that kind of problem in this sort of stuff, especially when on stage, four bars from a solo :grin: