Braid and butter

I found this

http://braid.live

We might use this to implement sequencer functionality in Zynthian.


That does look useful.

How best to integrate it into the layer menu?
An Effects layer…? That doesn’t feel right. Is it Special? But wouldn’t you want to use Special objects with it?

Is it in a separate MIDI layer to match the audio layer…?
How might one sync this all in the existing Layer model…?

The concept of chords is quite fascinating. . .

That’s repetitive and noisy round here . . . :smiley:


Maybe a completely new screen, with areas you can tap to enable a Thread.
Yes, in the Layers menu you could create that Thread.
How to import MIDI events into this language might be some work.
But I guess, @jofemodo has his ideas already and will show us tomorrow :stuck_out_tongue:


It seems almost tailor-made for step sequencing . . .
I’ve always thought real-time recording is, to a certain extent, a completely different sort of musical device . .


I agree. And I think that we should stay close to the API when we design the UI.
The disadvantage is that Braid is monophonic.
I will talk to Brian House soon and suggest a Lilypond option as a pattern format.
This would have the advantage of being polyphonic, and we could use the midi2ly converter.
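Since midi2ly emits LilyPond notation, a polyphonic pattern entry would presumably arrive as LilyPond chord tokens. As a rough illustration (this is not Braid’s or midi2ly’s actual API; the only LilyPond details assumed are note names and the Dutch is/es accidental convention), here is how such a token could map to MIDI notes:

```python
# Hypothetical sketch: parse a LilyPond-style chord token such as "<d fis a>"
# into MIDI note numbers, to show how a polyphonic pattern entry could be
# represented internally.

# Base pitches for one octave starting at middle C (MIDI 60)
BASE = {"c": 60, "d": 62, "e": 64, "f": 65, "g": 67, "a": 69, "b": 71}

def note_to_midi(name: str) -> int:
    """Convert a LilyPond note name (Dutch style: 'is' = sharp, 'es' = flat)."""
    pitch = BASE[name[0]]
    suffix = name[1:]
    if suffix == "is":
        pitch += 1
    elif suffix == "es":
        pitch -= 1
    return pitch

def chord_to_midi(token: str) -> list:
    """Parse a chord token like '<d fis a>' into a list of MIDI numbers."""
    inner = token.strip().strip("<>").split()
    return [note_to_midi(n) for n in inner]

print(chord_to_midi("<d fis a>"))  # D major triad: [62, 66, 69]
```

A monophonic pattern step would then just be the one-note case of the same structure.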

@jofemodo, I see the following simple implementation.
We take the tempo from the MIDI recorder?
Each layer with a MIDI engine has the option of adding a step sequence (Thread).
The step sequencer screen itself looks like the layer details:
top left: t.velocity
top right: chord start key (MIDI-learnable)
bottom left: chord scale
bottom right: pattern
Here we list all degrees of the scale, plus 0 (rest) and Remove (last entry) to delete the last entry.
Pressing the bottom-right encoder adds the selected value to the pattern.

With those options we could implement simple threads like

from braid import *

t1 = Thread(1)          # Thread on MIDI channel 1
t1.chord = D, SUSb9
t1.pattern = [1, 1, 1, 1]
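To make the encoder mapping concrete, here is a hypothetical sketch in plain Python (not Braid’s API) of how the four values above, velocity, chord start key, scale, and pattern, could resolve to notes for one pass of the sequence, with 0 as a rest:

```python
# Hypothetical sketch (not Braid's implementation): pattern values are scale
# degrees built on the chord start key; 0 means a rest, matching the option
# list above.

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def render_pattern(root, scale, pattern, velocity):
    """Return (note, velocity) per step, or None for a rest (degree 0)."""
    steps = []
    for degree in pattern:
        if degree == 0:
            steps.append(None)                      # rest
        else:
            offset = scale[(degree - 1) % len(scale)]
            octave = (degree - 1) // len(scale)     # wrap into higher octaves
            steps.append((root + offset + 12 * octave, velocity))
    return steps

# D (MIDI 62) major, pattern 1 3 5 0: a broken triad followed by a rest
print(render_pattern(62, MAJOR, [1, 3, 5, 0], 100))
```

The engine-facing part would just walk this list at the step rate derived from the tempo.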

Starting/stopping of threads in the MIDI Recorder panel?


We still lack a frame-accurate start and stop mechanism, so until we have that, things are always going to be a little bit ‘creative’…

t2.start(t1)                    # keep in phase
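For what “keep in phase” could mean mechanically, here is a hedged sketch (not Braid’s implementation): the second thread defers its start until the first thread’s next cycle boundary, so both patterns come in aligned:

```python
# Hypothetical sketch of phase-aligned starting: t2 does not start
# immediately, it waits for t1's next cycle boundary.

def next_phase_boundary(now, t1_start, cycle):
    """Return the time of t1's next cycle boundary after `now`
    (or t1's start time, if t1 hasn't started yet)."""
    if now <= t1_start:
        return t1_start
    elapsed = now - t1_start
    cycles_done = int(elapsed // cycle)
    return t1_start + (cycles_done + 1) * cycle

# t1 started at t=10.0 with a 2-second cycle; at t=13.5, t2 should wait
# until t=14.0 to come in aligned with t1.
print(next_phase_boundary(13.5, 10.0, 2.0))  # 14.0
```

A frame-accurate version would do the same arithmetic against the audio clock rather than wall time.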

D’oh… of course…
So if the audio playback device can be an engine rather than a script, we move the audio recorder to being a device that can be cued as well as provide a backing track . . .

Do we bless one engine for this sort of functionality . . ? Because we are establishing, effectively, a background audio canvas against which the engines run, and it’s a really simple audio playback/record set up.
If we do FluidSynth, we have to make an SF2 file out of the WAV . . .

Just thinking really.