Hi. I searched the forums but didn’t find a clear response about this topic.
I just built a Zynthian mini and was able to configure it as a USB gadget-mode audio device using this info. My Windows 11 PC recognizes it as a playback device (named ‘Source/Sink’).
But I see that Zynthian Oram already has the USB MIDI gadget defined using configfs. Can somebody point me to how/when this is/was done? At installation time? I tried searching through all the files for hints, but the strings used are too common (ZYNTHIAN).
It seems like it should be possible to create a g_multi gadget instead that supports audio and MIDI at the same time, but I was only able to find an example using configfs, not /etc/modules.
Also, as a Zynthian newbie, could somebody confirm that, to just pass the audio input through to the output, all that needs to be done is “Add audio chain” and then select None? It shows “Audio Input 1,2”, but I’m unsure whether these are the L/R channels of the audio coming from the PC.
(I posted about this in the Zynthian mini thread, but it seems to me that it’s not specific to the mini)
Audio support in gadget mode - When I last looked, Raspberry Pi did not support audio in gadget (OTG) mode. The link you posted shows that g_audio is supported but not in composite mode (well, only with video in g_webcam, not with g_midi). So, to support our core functionality, we want to maintain g_midi on the OTG port.
JACK (the audio subsystem used in zynthian) requires exactly one audio interface for synchronisation. This means that you must choose which soundcard to bind JACK to. So you can’t have JACK bound to both the audio card and g_audio OTG…
…but we do have hotplug USB audio (recently added) which allows us to bind JACK to one soundcard and then add other, USB soundcards. There is an overhead for this because each hotplug soundcard must have samplerate converter code running to allow synchronisation (actually, translation) of the audio clocks. (Even at the same nominal samplerate, e.g. 48000 fps, each soundcard actually runs at a slightly different rate unless locked to a common clock, e.g. wordclock.)
We only (currently) support USB hotplug, i.e. there is code that monitors for USB soundcards and adds the required SRC module to interface with JACK.
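To illustrate the split (the device names and flags here are placeholders, not what zynthian actually runs): JACK is started against a single ALSA card, and each hotplug card then needs its own resampling bridge, roughly:

# JACK bound to exactly one ALSA card (placeholder card name)
/usr/bin/jackd -d alsa -d hw:sndrpihifiberry -r 48000 -p 256 -n 2
# each hotplug USB soundcard is bridged in with its own samplerate converter
/usr/bin/alsa_in  -j MyUSBCard_in  -d hw:MyUSBCard -r 48000
/usr/bin/alsa_out -j MyUSBCard_out -d hw:MyUSBCard -r 48000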
We are working on network audio (AoIP). It aims to interconnect zynthians but may allow a different mechanism for connecting audio between a computer and a zynthian.
So, there are many challenges in making this work, with few benefits, e.g. not many users want this.
Thanks for your detailed answer. I now see that this would not be easy.
Anyway, just in case I’m able to find something that could be useful in the future, I’ll continue taking a look at this and documenting my findings here.
For now, I may have managed to configure zynthian to support BOTH MIDI and audio in composite mode. I found the place where the original USB gadget configuration is done (/zynthian/zynthian-sys/sbin/setup_usb_gadget.sh) and created two additional files, clean_usb_gadget.sh and setup_usb_gadget_audio_in.sh, in the same folder. It looks like these files survive a reboot.
That way, Windows now detects both audio AND MIDI.
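The core idea (a trimmed sketch with placeholder IDs and names, not my exact script) is to put a uac2 function and a midi function into the same configfs gadget and bind it once:

# sketch: composite UAC2 audio + MIDI gadget via configfs (VID/PID and names are placeholders)
G=/sys/kernel/config/usb_gadget/zynthian
mkdir -p "$G"/strings/0x409 "$G"/configs/c.1/strings/0x409
echo 0x1d6b > "$G"/idVendor                 # placeholder vendor id
echo 0x0104 > "$G"/idProduct                # placeholder product id
echo "Zynthian"            > "$G"/strings/0x409/manufacturer
echo "Zynthian Audio+MIDI" > "$G"/strings/0x409/product
echo "Audio+MIDI"          > "$G"/configs/c.1/strings/0x409/configuration
mkdir -p "$G"/functions/uac2.usb0           # USB Audio Class 2 function
mkdir -p "$G"/functions/midi.usb0           # USB MIDI function
ln -s "$G"/functions/uac2.usb0 "$G"/configs/c.1/
ln -s "$G"/functions/midi.usb0 "$G"/configs/c.1/
ls /sys/class/udc | head -n 1 > "$G"/UDC    # bind to the first available UDC

Tearing a configfs gadget down is just the reverse: write an empty string to UDC, remove the symlinks and rmdir the directories.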
If that were implemented really well, with word syncing to the DAW as master clock and no resampling, it would be a nice lossless way to record from Zynthian, or maybe even a way to use Zynthian's audio interface to let the DAW scream to the outside world.
Sort of an audio/MIDI interface with built-in DSP, synth and MIDI routing! Wow!
But, as you wrote earlier, it may be more trouble than it is worth, because you can always use the DAW's analog inputs, and a MIDI interface and routing already exist.
In the meantime, here's what I've been discovering:
Hotplug USB audio is based on JACK's alsa_in/alsa_out. There is a configured service, /etc/systemd/system/headphones.service, that creates a “jack cable” between the ALSA device ‘Headphones’ and the JACK client ‘Headphones’.
Selecting the checkbox in Admin->Audio->RBPi Headphones actually just starts this service (I checked that with htop).
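I haven't pasted the unit here, but its effect is essentially an alsa_out “jack cable” like this (flags and rate are my guess, not the actual unit contents):

# roughly what the ‘jack cable’ amounts to (flags/rate guessed)
/usr/bin/alsa_out -j Headphones -d hw:Headphones -r 48000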
In the Hotplug USB Audio option, both Headphones AND the audio gadget can now be seen. This was unexpected and I'm happy :-).
Now, trying to just start an instance of alsa_in (since for now I'm more interested in using zynthian as a mixer for both PC audio and its internal sounds) (still) does not work:
# /usr/bin/alsa_in -q 0 -j UAC2Gadget -d hw:UAC2Gadget
WARNING: channel count does not match (requested 2 got 1)
WARNING: period size does not match: (requested 1024, got 512)
selected sample format: 16bit
delay = 3299
Even if it did, I'm not sure this would be enough to bind everything. In particular, I know the ALSA name is fine (UAC2Gadget shows up in arecord -l), but I'm not sure whether the JACK client name is relevant.
EDIT:
When selecting UAC2Gadget in the previous screenshot, two JACK clients appear. It feels like I'm only getting mono audio.
I had to modify my USB gadget configuration (see attached file in previous posts) to support stereo (p_chmask and c_chmask = 3) and set the sample rate to 48000 Hz, to align with the rest of the zynthian configuration.
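Concretely, that means setting the uac2 function attributes before binding the gadget, roughly like this (path as in the configfs sketch above; the attribute names come from the kernel's uac2 gadget):

# stereo on playback and capture, 48 kHz; set before writing to UDC
F=/sys/kernel/config/usb_gadget/zynthian/functions/uac2.usb0
echo 3     > "$F"/p_chmask   # playback channel mask: bits 0+1 = 2 channels
echo 3     > "$F"/c_chmask   # capture channel mask: 2 channels
echo 48000 > "$F"/p_srate    # playback sample rate
echo 48000 > "$F"/c_srate    # capture sample rate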
No need to start alsa_in manually: Zynthian already does that when selecting an input in Hotplug Audio, and with the proper names. Great!
An audio chain had to be created with type None to link the USB gadget input to the Zynthian audio output (in my case a cheap PCM5102 card).
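If I understand the routing correctly, that chain ends up doing the JACK equivalent of patching the gadget's capture ports into the main playback ports, something like (port names assumed):

# conceptual equivalent of the “None” chain (port names assumed)
jack_connect UAC2Gadget:capture_1 system:playback_1
jack_connect UAC2Gadget:capture_2 system:playback_2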
I need to be careful to first start up Zynthian connected to the USB 5V GPIO power, and only later (when Zynthian's splash screen shows) connect the USB-C cable.
Now all my Windows PC audio is redirected through USB to the Zynthian and from the Zynthian output to my Neumann speakers.
Latency coming from my custom-built QMK keyboard that supports MIDI velocity (HE switches) looks no worse than going straight to the PC. htop shows no significant CPU load.
For my purposes that's all I wanted for now (I sold my MOTU M2, and I wanted to check if I can survive without another one). Maybe somebody else could adapt CamillaDSP in the future, so Zynthian could act as a room-correction parametric equalizer for speakers / spectrum analyzer / VU meter while not in use as a synth/effects unit.
Yes, I was thinking CamillaDSP could fit as an additional FX to add to Zynthian's existing palette. It's becoming very popular. RPi-based music streamers are including it as a plugin (e.g. moode) for HiFi speaker room correction.
Not that it’s a priority for me. It was just an idea.