I started a new thread to keep the topic clean.

Just a wild idea…

I’m not sure, but perhaps you could use I2S to connect several Raspberry Pis in series, with each Pi adding its audio data to the stream, until a sound card with a DA converter on the last Pi outputs the sum? If you also had a data connection (serial, or better SPI) between the Pis, you could have a master that receives the MIDI data, distributes it to the other Pis, and outputs the summed audio via its sound card.
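As a rough sketch of that master role, here is how MIDI distribution by channel might look. Everything here is invented for illustration: the node addresses, the channel-to-node map, and the use of plain UDP (a real setup might use SPI as described above, or RTP-MIDI):

```python
import socket

# Hypothetical channel-to-node map; loopback addresses are placeholders
# for the slave Pis on the cluster network.
NODE_FOR_CHANNEL = {
    0: ("127.0.0.1", 5004),   # slave Pi 1 (placeholder address)
    1: ("127.0.0.1", 5005),   # slave Pi 2 (placeholder address)
}

def midi_channel(message: bytes) -> int:
    """Extract the MIDI channel from the low nibble of the status byte."""
    return message[0] & 0x0F

def route_midi(message: bytes, sock: socket.socket):
    """Forward one channel-voice MIDI message to the node owning its channel."""
    target = NODE_FOR_CHANNEL[midi_channel(message)]
    sock.sendto(message, target)
    return target

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
route_midi(bytes([0x90, 60, 100]), sock)   # note-on, channel 1, middle C
```

The master stays a dumb router: it never parses beyond the status byte, so all controller and note data passes through unchanged.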

My ooooooold Roland D-110 had a feature where, as soon as it ran out of voices, it forwarded the excess notes via MIDI to a second D-110… you could do the same in a ZynCluster.
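A minimal sketch of that D-110 overflow trick in software. The node names, voice limits and in-process chaining are made up for illustration; a real node would forward the actual MIDI bytes to the next device:

```python
class OverflowNode:
    """One synth node with a voice limit; overflow notes go to the next node."""

    def __init__(self, name, max_voices, next_node=None):
        self.name = name
        self.max_voices = max_voices
        self.next_node = next_node
        self.active = set()

    def note_on(self, note):
        if len(self.active) < self.max_voices:
            self.active.add(note)
            return self.name            # handled here
        if self.next_node is not None:
            return self.next_node.note_on(note)   # D-110 style: pass it on
        return None                     # every node full: note is dropped

    def note_off(self, note):
        if note in self.active:
            self.active.discard(note)
        elif self.next_node is not None:
            self.next_node.note_off(note)

second = OverflowNode("zynth-2", max_voices=8)
first = OverflowNode("zynth-1", max_voices=8, next_node=second)
for n in range(8):
    first.note_on(60 + n)               # fills zynth-1; the 9th note overflows
```

Note-offs have to follow the same chain, otherwise a note stolen by the second node would never be released.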

Regards, Holger


Yes, I think having the layers distributed and passing the audio signal is the key.
And maybe using WiFi to create a MIDI-over-IP network.
It would be cool, once we have a sequencer, to connect it to one cluster and have everything play in the same loop.


I’ve been keen to do something similar for a while. I like the idea of an audio-less controller zynth, with the interface and MIDI in/out, that could be located on stage, and a remote bank of zynths that generate the actual sound. OSC would seem to be the ‘proper’ way to go about doing this.
Latency would presumably be an issue for any audio combination, as would the audio routing, which would presumably get bewilderingly complicated to configure, if not to actually do. But simply mixing the analogue outputs would suffice for most situations, I suspect.
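On the OSC route: the wire format is simple enough that a controller could build messages by hand. Here is a minimal encoder for a subset of the OSC 1.0 spec (int32, float32 and string arguments); the `/mixer/level` address is just an example, not an existing Zynthian endpoint:

```python
import struct

def _pad(data: bytes) -> bytes:
    """OSC strings are NUL-terminated and padded to a multiple of 4 bytes."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode one OSC message: address, type-tag string, then the arguments."""
    tags = ","
    payload = b""
    for arg in args:
        if isinstance(arg, float):
            tags += "f"
            payload += struct.pack(">f", arg)   # big-endian float32
        elif isinstance(arg, int):
            tags += "i"
            payload += struct.pack(">i", arg)   # big-endian int32
        else:
            tags += "s"
            payload += _pad(str(arg).encode())
    return _pad(address.encode()) + _pad(tags.encode()) + payload

msg = osc_message("/mixer/level", 3, 0.5)       # hypothetical address
```

The resulting datagram can be sent over UDP, so the on-stage controller only needs the network, no audio hardware at all.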

I think this was mooted way back, when we talked about (or I did, and I assumed nobody listened) using different colours for the different channels.
That would probably be the first design decision: does the user have to understand where and which zynth is doing which layer, or do we leave them in happy ignorance and just let the cluster sort it out for itself?
I suspect the techie community would prefer the first and the musical community the second…


:star_struck: Great to see how this idea develops!


OB-Xd with 1024-note polyphony? Entire orchestras of LinuxSamplers? The ability to play chopsticks using a physically modelled recreation of the sound of the Big Bang?

Pianoteq Instruments

OK! I’m not going so far yet … :blush:

I’m talking about having your set of layers distributed across several zynthians (slaves) and controlling everything from a single master. Every layer, with its whole effects chain, would be processed on the same zynthian, which would output the audio to a mixer board where the sound of all the zynthians is mixed (if you want to do so …).
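As a sketch of how the master might decide which slave runs which layer, here is a greedy "least-loaded node first" placement. The layer names and CPU cost estimates are invented for illustration; a real master would need actual per-engine cost figures:

```python
def assign_layers(layers: dict, nodes: list) -> dict:
    """Greedy placement: biggest layers first, each onto the least-loaded node.

    layers: {layer_name: estimated CPU cost}; nodes: list of node names.
    Returns {layer_name: node_name}.
    """
    load = {node: 0.0 for node in nodes}
    placement = {}
    for layer, cost in sorted(layers.items(), key=lambda kv: -kv[1]):
        node = min(load, key=load.get)   # least-loaded node so far
        placement[layer] = node
        load[node] += cost
    return placement

plan = assign_layers(
    {"piano": 0.6, "strings": 0.5, "organ": 0.3},   # made-up cost estimates
    ["zynth-a", "zynth-b"],
)
```

Since each layer keeps its whole effects chain on one node, the placement only has to balance whole chains, never split one.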

Digital audio routing between zynthians could be studied anyway. It could be done using NetJack:


Using this, we could have a “real” Zynthian Cluster, with a bunch of cpu-only nodes, etc. :wink:



Hi everybody,

Would a Raspberry Pi Zero be a good candidate for a slave Zynthian?

Best regards.


Hi @smespresati,

I always tend to use as much RAM and CPU power as possible, so I don’t think the Zero would be my first choice. But in principle a cluster should also be able to make use of a Zero.

A general problem follows from this:
Is it possible to mix hardware of different power in the same cluster?
And how do you determine which node in the cluster is overloaded? (E.g. if you distribute the load of the same engine with the same sound across two nodes and one of them generates dropouts, you can’t determine which node is overloaded without additional monitoring.)
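That per-node monitoring could be quite lightweight: each node periodically reports its DSP load and xrun count, and the master flags the culprit. The report shape and the thresholds below are made up for illustration:

```python
def find_overloaded(reports: dict, load_limit: float = 0.85) -> list:
    """Return the names of nodes that look overloaded.

    reports: {node_name: {"dsp_load": 0.0-1.0, "xruns": count since last report}}
    A node is flagged on high DSP load, or on any xruns at all.
    """
    return sorted(node for node, report in reports.items()
                  if report["dsp_load"] > load_limit or report["xruns"] > 0)

status = find_overloaded({
    "zynth-1": {"dsp_load": 0.40, "xruns": 0},
    "zynth-2": {"dsp_load": 0.92, "xruns": 3},   # made-up numbers
})
```

With reports like these, mixed hardware stops being a problem: a Zero simply reports higher loads and receives fewer layers.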

Regards, Holger


I very much like the idea of using a Zero for this, as it makes the whole control mechanism light enough to be powered from a battery pack :smiley: No mains on stage!! Just Ethernet out!

Low (very low) priority obviously but nice nevertheless…


We could think of having 16 channels per node …


Another issue is going to be proper identification of snapshots et al… We might have to start giving individual patch files individual hashes and provide a content management system to keep it all in line… Simply relying on file names ain’t going to hack it when you might well have distributed versions of files with different MIDI settings etc.
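A content-addressed id would do exactly that: hash the patch data itself rather than the file name, so two copies with identical settings share an id and a changed MIDI setting yields a new one. A sketch, with an invented patch structure:

```python
import hashlib
import json

def patch_id(patch: dict) -> str:
    """Derive a stable id from the patch *content*, independent of file name."""
    # sort_keys makes the serialisation canonical, so key order cannot
    # change the hash.
    canonical = json.dumps(patch, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

a = patch_id({"engine": "OB-Xd", "midi_channel": 1, "preset": "Brass"})
b = patch_id({"preset": "Brass", "engine": "OB-Xd", "midi_channel": 1})
c = patch_id({"engine": "OB-Xd", "midi_channel": 2, "preset": "Brass"})
```

Here `a` and `b` come out identical while `c` differs, which is precisely the distributed-versions case: same patch name, different MIDI settings, different identity.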



Some of the work is already done :wink: