In the confined world of today, I would really love to have something like Jamulus (https://jamulus.io) on my Zynthian.
So far, I run it by hand on my little box. I have an instrument plugged into the audio input. Jamulus converts that input into little UDP packets, sends them 200 km away to a server in a data center, gets them back (mixed with other players' audio when they join the same server), and finally sends all this to my amp plugged into the audio output. Guess what? I can hear myself play!
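For anyone who wants to reproduce this, here is roughly what "running it by hand" looks like, wrapped in a few lines of Python. A minimal sketch: the `--nogui` and `--connect` flags exist in the Jamulus builds I have used, but the binary path and server address below are placeholders to adapt.

```python
# Minimal sketch: launch a headless Jamulus client and point it at a server.
# The binary path and server address are placeholders; adjust for your setup.
import subprocess

JAMULUS_BIN = "/usr/local/bin/Jamulus"  # adjust to where your build lives
SERVER = "jamulus.example.org:22124"    # placeholder; 22124 is the default port

client = subprocess.Popen([
    JAMULUS_BIN,
    "--nogui",            # run without the Qt GUI
    "--connect", SERVER,  # connect to this server on startup
])

# ... play through JACK as usual, then shut the client down:
# client.terminate()
```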
The latency is between 40 and 50 ms, at least that's what Jamulus reports. This is orders of magnitude larger than what the speed of light back and forth across 200 km should give me. But I am not complaining too much: if I had to play with my band mates, all confined within a 5 km radius of me, it would take sound in air about 30 seconds to travel back and forth between us (if we played loud enough).
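For scale, the back-of-the-envelope arithmetic behind that comparison (textbook constants; real fibre is slower than light in vacuum, which only strengthens the point):

```python
# Round-trip times from the paragraph above, using textbook constants.
C_LIGHT = 3.0e8   # speed of light in vacuum, m/s
C_SOUND = 343.0   # speed of sound in air, m/s

rt_light = 2 * 200e3 / C_LIGHT  # 200 km to the server and back
rt_sound = 2 * 5e3 / C_SOUND    # band mates 5 km away, sound there and back

print(f"light, 2 x 200 km: {rt_light * 1e3:.1f} ms")  # ~1.3 ms
print(f"sound, 2 x 5 km:   {rt_sound:.0f} s")         # ~29 s
```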
So now, I would love to receive pointers on how to integrate Jamulus a bit better into Zynthian. Maybe it could become a special layer, configurable through the web interface?
I have never managed to get Jamulus working in any real world scenario. I like it as an idea but have been disappointed by it many times - shame.
There is of course latency introduced by the distance between nodes, but that is increased by overhead in the transmission path, e.g. hops between Internet nodes. The most substantial increase in latency is likely to be coding and decoding: buffers are required to reduce dropouts, and it takes time to convert audio to a stream that can be transmitted and a similar time to decode it. The lower the codec bandwidth, the easier it is to transfer, but that adds conversion complexity, hence latency, and affects audio quality.
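To illustrate where the milliseconds go, here is a toy latency budget. Every number in it is an example value chosen for illustration, not a measurement of Jamulus:

```python
# Toy one-way latency budget for a networked audio path.
# All values are illustrative, not measurements of Jamulus.
SAMPLE_RATE = 48000  # Hz
FRAMES = 64          # sound card period size, in frames

period_ms = FRAMES / SAMPLE_RATE * 1e3  # ~1.33 ms per period

budget = {
    "capture buffer":  period_ms,      # fill one period before encoding
    "encode":          2.5,            # codec frame duration (example)
    "network":         20.0,           # one-way transit (example)
    "jitter buffer":   2 * period_ms,  # smooths out packet arrival
    "decode":          2.5,            # mirror of the encode cost (example)
    "playback buffer": period_ms,      # one period on the way out
}

print(f"one-way total: {sum(budget.values()):.1f} ms")  # ~30 ms
```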
Regarding Zynthian networking, there has been some discussion about direct point-to-point links, with a couple of trials. Search this forum.
I would love to see a built-in mechanism for connecting Zynthians over the Internet and have considered how we might do it but it requires significant effort and is likely to yield inconsistent results. The Internet is not yet equipped or configured for the QoS this requires to operate reliably but it is heading that way.
Most people will struggle to play with latency between players exceeding about 30ms, which equates to about 10m separation at the speed of sound in air.
Going via any server is likely to introduce some unknown and variable delay which could certainly vary at any given time of day/night.
Connecting via some form of VPN (Virtual Private Network, i.e. point to point) would at least probably eliminate some of the variable performance without a server "in-circuit". You would, of course, still be at the mercy of other net traffic.
The main issue with any VPN is security of the network nodes.
I am not in a position to offer any further advice at this time of day, as I may have had a little "refreshment" which may slightly alter my ability to offer good specific instruction on how to achieve this. Lol!
Sending audio is always going to add enormous latencies unless new physics are found.
In order to approach latency-free collaboration, NINJAM extends the latency, delaying all received audio until it can be synchronised with other players. The delay is based on the musical form. This synchronisation means that each player hears the others in a session and can play along with them. NINJAM defines the form in terms of the "interval": the number of beats to be recorded before synchronising with other players. For example, with an interval of 16, four bars of common time would be recorded from each player, then played back to all others.
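To put a number on that, the delay one interval implies at a given tempo (a trivial sketch):

```python
# Delay implied by a NINJAM-style interval: each player hears the others
# exactly one interval late, aligned with the musical form.
def interval_delay_s(beats: int, bpm: float) -> float:
    """Length of one interval in seconds, i.e. the playback delay."""
    return beats * 60.0 / bpm

# Example: an interval of 16 beats (four bars of common time) at 120 BPM.
print(interval_delay_s(16, 120))  # 8.0 seconds
```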
I'm tending towards the experimental approach to a lot of this. The simple process of plugging two zynths together with a straight ethernet cable is probably the simplest IP-based environment you could construct, and using such a setup, with the opportunities and limitations it provides, is something worth trying.
MIDI, with its separate note-on and note-off events, is probably as vicious a test of overall network health as one could present. ALWAYS have a physical MIDI kill switch at the rendering end :-D. Digital audio distortion from a network is going to be digital, and sadly that isn't anywhere near as pleasant an effect as traditional analogue distortion. But that's not to say there isn't something creative that might be constructed by performers 10m or 30ms apart. I don't know, but I look forward to hearing it…
But ignoring the weird and the wonderful, defining the controls and functions between a couple of zynths would be of great benefit to the project.
Quite what mechanism runs underneath isn't really of much import; hopefully we could make it plug-and-play at that level, and with suitable abstraction it might translate harmoniously between different network communities.
But at a really basic level, ZNetAudio (& ZNetMIDI) should behave in similar fashion.
Absolute basic functionality:
A source machine can present either its output or a specific layer to the network.
A source machine appears at the rendering machine as a layer that can be presented to the machine output.
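To make that concrete, a purely hypothetical Python sketch of the shared shape such an abstraction might take. None of these classes exist in Zynthian today; every name is invented for illustration:

```python
# Hypothetical sketch only: a common interface that ZNetAudio and ZNetMIDI
# implementations could share. Nothing here exists in the Zynthian codebase.
from abc import ABC, abstractmethod

class ZNetSource(ABC):
    """Source side: publish the machine output, or one layer, to the network."""

    @abstractmethod
    def present(self, what: str) -> None:
        """Present 'output' for the main mix, or a layer id for one layer."""

class ZNetSink(ABC):
    """Rendering side: a remote source appears as an ordinary layer."""

    @abstractmethod
    def as_layer(self) -> object:
        """Return a layer that can be routed to the machine output."""
```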
Thanks for testing. 50ms latency is a lot for making music together. OK, it depends on the music style, but rhythmic music would be very difficult to play in time together. From what I have heard, about 20ms is OK.
Take a look at this thread:
Yesterday we tested a point-to-point audio connection: 15.5ms.
Thanks for the pointer. This Network Audio adventure is indeed more or less about the same thing. When I set up a local Jamulus server I get about the same ~15 ms latency, as expected from roughly the same buffering configuration. It also makes sense of the numbers with a 20 ms ping to a distant server: 15 ms + 2×20 ms = 55 ms.
I have finished integrating Jamulus in Zynthian. @jofemodo, should I send you a series of pull requests?
Nice work @jwoillez. You have already reacted to some of my feedback and of course there is much more which I continue to add to the PR.
I haven't really had any success with Jamulus. It usually gives poor audio quality and often fails to connect properly. It promises much but in my experience fails to deliver. What is your experience?
Anyone who is trying this may want to test it end-to-end. The Zynthian implementation still has some rough edges, but some collaboration should get it working well enough for a test jam. Any takers?
I think it might be too challenging until the PR is reviewed and merged. I had to jump through some hoops to fetch the PR from @jwoillez's repo and then couldn't update it with their changes.
Of course this would be even better as an LV2 plugin or similar, to expose the interface in Zynthian, e.g. faders for each contributor on encoders.
There was a brief foray into Jamulus during last night's zynth club. It took a bit of effort to get things working. Three participants used desktop versions of Jamulus and I ran the Zynthian version on a Raspberry Pi 3 Zynthian. The server was in London, with one participant local to the server. I am in Clacton, 80 miles from there; another is in Manchester, 200 miles from London; and the other is in the USA. It kinda worked. The USA machine had an audio routing issue which resulted in us hearing ourselves echoed back but not him; this gave a distracting slapback delay effect. Without that link, we found that a rhythm produced in London and played along to in Manchester was poorly synchronised in London and Clacton: the audio had to travel to Manchester first, so the accompanying piano was playing against audio arriving about 90ms late, and was then sent back to London and on to Clacton. So I heard the piano much later than the drums. This made it difficult to comprehend and I could not play along.
There were many xruns on the Zynthian but that isn't unusual with the configuration I was using. The audio quality varied between pretty good (acceptable) and garbled.
Communication was awkward: we used Jitsi as an out-of-band comms channel, but Jamulus takes over the PC sound (on Windows at least) so that failed.
Muting your own feed helped because it removed the slapback of your own signal. This is not currently possible in Zynthian because there is no control of headless Jamulus.
This was a promising start, with some frustration. Zynthian really needs some control of a running Jamulus instance. I wouldn't want to use it for a proper music session.
There are a couple of efforts to optimize Jamulus for Raspberry Pi. My own is based on Ubuntu Studio and gives an "externally controllable" Jamulus client that could, in theory, be integrated into Zynthian.
Just to offer another opinion: I am using Jamulus on both PC and Mac for long-distance rehearsals, and even for streaming Reaper projects to my clients when we are doing mixing or pre-mastering of multitrack material. For all these sessions I'm currently using my personal Jamulus server, which makes the connection much more stable and reliable. So in my opinion, a private server is a "must have" for good quality. It also has to be a wired connection (5 GHz WiFi would probably work too, but I haven't had an opportunity to try). With proper configuration the delay is between 25 and 30 ms with very small pings: no problem practicing with drummers or percussionists.
Using Jamulus together with Reaper via ReaRoute works perfectly and even lets you record your rehearsals or jams right into Reaper. If you install Jamulus on your PC or Mac it can automatically store all your sessions as Reaper-compatible projects.
All of the above is just to support the idea of implementing a Jamulus option in our Zynthians. It would be an incredible option for distanced collaborations.
I was doing some research and also reviewing the Jamulus integration PRs.
As you say, @riban, we need Jamulus running as an "engine" (an audio generator), so we can control the Jamulus mixer using CC, as explained here (a rough sketch follows after the list):
1 controller screen for client parameters: mute-me, pan, reverb
1 controller screen per connected musician, including "me": fader, mute, solo, group
Regarding the client init options (i.e. the server URL), these could be managed via the bank/preset mechanism.
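Here is the rough sketch mentioned above, purely hypothetical: the CC numbers and names are invented for illustration, and a real implementation would follow the existing zyngine conventions rather than this layout.

```python
# Hypothetical sketch: Jamulus mixer controls exposed as CC-mappable
# zynthian controllers. All CC numbers and names are invented.
CLIENT_SCREEN = [
    # (name, cc, min, max)
    ("mute-me", 20, 0, 1),
    ("pan",     21, 0, 127),
    ("reverb",  22, 0, 127),
]

def musician_screen(slot: int):
    """One controller screen per connected musician, including 'me'."""
    base = 30 + slot * 4  # invented CC layout: 4 CCs per musician slot
    return [
        ("fader", base + 0, 0, 127),
        ("mute",  base + 1, 0, 1),
        ("solo",  base + 2, 0, 1),
        ("group", base + 3, 0, 3),
    ]
```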
@jwoillez, I'm sorry for the delay. I really like the concept of Jamulus and it fits Zynthian 100%. I'm quite sure it will keep improving until it reaches acceptable performance. Would you like to improve your integration by implementing a proper Zynthian engine?
I could run a "Zynthian Jamulus server" so community members can join for jamming at any time, and yes, for sure, we would connect the PD "generative relax" patch when nobody is connected!!!
Ha ha! I was inside your head again @jofemodo. I looked at Jamulus at the weekend too and saw that the latest release introduces a JSON-RPC API, so we may have more remote control of a headless client.
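A minimal hedged sketch of what driving that from Python could look like. It assumes the client was started with `--jsonrpcport 22222` and `--jsonrpcsecretfile`, and that the first request must be `jamulus/apiAuth`; that is my reading of the Jamulus docs, so verify against the release you actually run:

```python
# Hedged sketch: control a headless Jamulus client via its JSON-RPC API
# (Jamulus 3.9+). Port, secret path and the apiAuth method name reflect my
# reading of the docs; verify against your Jamulus release.
import json
import socket

HOST, PORT = "127.0.0.1", 22222  # matches an assumed --jsonrpcport 22222
SECRET = open("/run/jamulus.secret").read().strip()  # --jsonrpcsecretfile

sock = socket.create_connection((HOST, PORT))
stream = sock.makefile("rw")

def call(method: str, params: dict, req_id: int) -> dict:
    # Jamulus speaks newline-delimited JSON-RPC 2.0 over TCP.
    stream.write(json.dumps({"jsonrpc": "2.0", "id": req_id,
                             "method": method, "params": params}) + "\n")
    stream.flush()
    return json.loads(stream.readline())

# Authenticate first; subsequent calls can then query or control the client.
print(call("jamulus/apiAuth", {"secret": SECRET}, 1))
```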