Our friend @ronsum has recently published a few examples of some nice visuals made with Pure Data:
I've had the idea of having visuals on Zynthian for a long time, but I haven't had enough time to move forward with it. Anyway, I did some tests, like:
Testing the RBPi's hardware graphics acceleration. I got the raw C OpenGL demos running at maximum speed, etc.
I also tried to do some tests with Chromium, but it really eats too many resources. It would be nice to have a fully accelerated HTML canvas …
So I would like to invite those of you interested in the matter to brainstorm about this subject.
This is my current vision:
Zynthian UI and visuals should run on separate displays. For instance, an SPI display for the UI (it can't be hardware accelerated) and HDMI for the visuals. If using an RBPi4, we have 2 HDMI outputs, so we can have both the UI and the visuals running on HDMI displays.
Visual patches should be a kind of "visual layer", similar to the current audio layers. Every v-layer has controllers that can be controlled with MIDI CC, learned, etc. Ideally, several patches could run simultaneously.
Ideally, some kind of graphics framework would be desirable. A high-level accelerated canvas: HTML canvas + JS would be wonderful. If that is not possible or too heavy to run, a lower-level framework must be considered. Kivy seems a good option (it's Python!), but other options must be investigated. Also, we could define a "plugin format" and allow any framework to be used. A very simple plugin format would be a directory containing:
An executable file
A YAML text file with the control-interface definition
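As a thought experiment, such a control-interface file might look like the sketch below. All field names (name, executable, controllers, cc, etc.) are purely illustrative, not a proposed final spec:

```yaml
# Hypothetical control-interface definition for a visual plugin.
# Every field name here is an assumption for discussion, not a spec.
name: plasma-waves
executable: run.sh        # launched by the host
controllers:
  - name: speed
    cc: 74                # default MIDI CC mapping (could be MIDI-learned)
    min: 0
    max: 127
    default: 64
  - name: color_shift
    cc: 71
    min: 0
    max: 127
    default: 0
```

The host would read this file, build the controller screens from the `controllers` list, and forward CC values to the plugin's executable.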
I was sketching some ideas around this yesterday before I read your post. It may seem obvious but I don’t think you mentioned the need to feed audio into the visuals. Here is an outline I considered:
Zynthian:
loads application / lib, passing the screen size and the location at which to display the image (this was based on the idea of having the visual embedded within the Zynthian frame - your idea differs)
accepts connection from visual app / lib for inter-process communication (IPC)
sends encoder messages (up / down / press / exit, etc.) via IPC
routes audio via JACK
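To make the IPC idea concrete, here is a minimal sketch of the host side, assuming a newline-delimited JSON wire format over a Unix socket. The socket path, message names (`encoder`, `press`, `exit`) and field names are all assumptions for discussion, not an existing Zynthian API:

```python
import json
import socket

# Hypothetical socket path; the visual app/lib would listen here.
SOCKET_PATH = "/tmp/zynthian-visuals.sock"

def encode_msg(msg_type, **params):
    """Serialise one IPC message as a single JSON line (bytes)."""
    return (json.dumps({"type": msg_type, **params}) + "\n").encode()

def send_encoder(sock, encoder, delta):
    """Forward an encoder up/down event (delta = +1 / -1) to the visual app."""
    sock.sendall(encode_msg("encoder", id=encoder, delta=delta))

# Example (host side): connect and forward one encoder turn.
# with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
#     s.connect(SOCKET_PATH)
#     send_encoder(s, encoder=0, delta=+1)
```

Audio would not go over this channel at all; as noted above, it is routed separately via JACK.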
Visual app / lib:
connects to Zynthian
registers for IPC messages
connects to JACK
sets its x, y, w, h parameters as given by Zynthian (or the whole display if we don't use the window idea)
opens display
clears window within (x,y) - (x+w,y+h)
onAudio
render display (or some other render trigger if not purely JACK audio driven)
onClick
exit (or some other handler, e.g. context menu)
onIPC
change behaviour / exit
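The plugin-side event handling above could be sketched roughly like this, again assuming the hypothetical newline-delimited JSON messages from the host-side example (the class and handler names are mine, not part of any existing code):

```python
import json

class VisualPlugin:
    """Skeleton of the visual app's message dispatch (rendering omitted)."""

    def __init__(self):
        self.running = True
        self.params = {}  # controller values adjusted by encoder events

    def on_encoder(self, msg):
        # Accumulate encoder deltas into a parameter; mapping is illustrative.
        self.params[msg["id"]] = self.params.get(msg["id"], 0) + msg["delta"]

    def on_press(self, msg):
        # "onClick" handler: exit, or open a context menu, etc.
        self.running = False

    def handle_line(self, line):
        """Dispatch one JSON line received over IPC to its handler."""
        msg = json.loads(line)
        handler = {
            "encoder": self.on_encoder,
            "press": self.on_press,
            "exit": lambda m: setattr(self, "running", False),
        }.get(msg["type"])
        if handler:
            handler(msg)
```

The JACK `onAudio` callback would live alongside this, triggering a render with the latest `params` each period (or on some other render trigger, as noted above).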
These ideas need adaptation to better match your proposal. (I was planning to have the visual within a window within the Zynthian display, as another panel / view.) I had also not considered use of the GPU, which of course is obvious! I wrote a simple framebuffer library for a project which is handy for small projects but is not optimised and does not explicitly use the GPU.
I would recommend using a low-level library with a suitable wrapper. I wouldn't want to have to choose between seeing a visual presentation and hearing my Zynthian. It should utilise the GPU extensively to unburden the CPU, which is heavily used by Zynthian engines. If this work leads to GPU access for the Zynthian display (without excessive overhead), then we may benefit from a further reduction in CPU overhead.
My colleague has written a cross-platform professional audio monitoring unit that runs on the Raspberry Pi. A prime purpose of this unit is to display audio level, frequency response, etc. for audio quality assessment. This includes a variety of meter displays as plugins. The project is open source, as are the plugins. I looked at the scope plugin as a starting point, which is actually fairly simple. That project uses wxWidgets, which is a high-level graphical toolkit that would probably be too bloated for our purpose, and I am unsure of how effectively it uses the GPU. I believe the toolkit supports OpenGL, but I am not sure PAM uses it. (I can find out.)
So, to summarise (or, as my kids say, tl;dr, which should probably be at the top of this post): I suggest a plugin host with IPC to Zynthian, and a template for plugins that can access JACK and the GPU.
Regarding kivy… it looks quite nice. It is (yet another) framework, and I have read that it uses pyGame as its render engine (2015 Reddit post - I am not sure if that is still the case) and that it can be slow to start. Those comments make me wonder how bloated it is, but it is worth a look. I would suggest that a test-case wireframe be created in each contending framework and metrics measured, to review which has the least impact on core Zynthian functionality. If you decide to use kivy, it may prove a better toolkit for the main Zynthian app (than Tkinter). But before anything: test, test, test…
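For the "measure each contending framework" idea, even a trivial frame-timing helper goes a long way. A minimal sketch (the `render` callable stands in for whatever draws one frame in the framework under test):

```python
import time

def measure_fps(render, frames=100):
    """Call render() `frames` times and return the average frames per second.

    Run this with an equivalent wireframe scene in each candidate framework
    (kivy, raw SDL2, etc.) while Zynthian engines are under load, and compare.
    """
    start = time.perf_counter()
    for _ in range(frames):
        render()
    elapsed = time.perf_counter() - start
    return frames / elapsed
```

CPU usage of the render process (e.g. via `top` or `/proc/<pid>/stat`) should be recorded alongside the FPS number, since the whole point is minimising impact on the audio engines.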
Update: It looks like kivy migrated away from pyGame dependency and now uses SDL2 directly.
I tried to build something useful with kivy 2 years ago. I also bought a book, but my personal impression is that kivy is not well documented. The examples in the book were outdated and I couldn't find any hints on how to fix simple problems in my code, so I switched to TkInter.
I see there are already some really great replies to this thread. For further inspiration and ideas, check out some of the features on the Critter & Guitari ETC and their new Raspberry Pi based kickstarter EYESY https://www.kickstarter.com/projects/critterandguitari/eyesy-video-synthesizer
Their 'Modes' are written in Python and are open source. Take a look at PatchStorage in the ETC category. https://patchstorage.com/platform/etc/
I think a key feature for the Zynthian visuals would be compatibility with these Python written patches.
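As far as I can tell, ETC/EYESY modes are Python files exposing `setup(screen, etc)` and `draw(screen, etc)` entry points (treat those exact names as my assumption here). A compatibility layer could start with a loader that imports a mode file and checks those entry points exist:

```python
import importlib.util

def load_mode(path):
    """Import an ETC/EYESY-style mode file and verify its entry points.

    Assumes modes define setup(screen, etc) and draw(screen, etc); raises
    ValueError if either is missing, so incompatible patches fail early.
    """
    spec = importlib.util.spec_from_file_location("visual_mode", path)
    mode = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mode)
    for fn in ("setup", "draw"):
        if not callable(getattr(mode, fn, None)):
            raise ValueError(f"{path}: missing required function {fn}()")
    return mode
```

A Zynthian-side host would then call `setup()` once and `draw()` per frame, passing in whatever surface/context object replaces the ETC's `screen` and `etc` objects.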
But for now, I will return to my work on the soon-to-be-posted PD/GEM toy 'Zynth_A_Sketch' patch.