Also, one thing I find somewhat problematic in the snapshot file format is that the chains are all saved by JACK port name, and I find that sometimes (not exactly sure why) between reboots those names aren’t valid anymore (like TubeDelay-01 becoming TubeDelay-00, or things like that), so the restored state is not completely working.
Perhaps it would be better to save this audio routing between unique IDs of layers, or something like that?
I do plan to have a more robust record of routing within the snapshot. The snapshot name is automatically derived (that could be improved, but would benefit from a planned enhancement to be able to name chains). Snapshots can be renamed in webconf and maybe (soon) in the UI too, and certainly will be possible via the API.
One current feature I don’t see named here:
Favorites: I think they should be kept, set and unset in the core as well?
We also keep a list of favorites across all the synths (clicking on one switches the engine of the current layer to that of the favorite).
How would this scenario be managed?
Favourite is mentioned in the preset section. I expect to be able to query favourites, passing filter parameters to allow the returned set to meet various requirements.
I currently find the placing of plugins within the chain a bit confusing. You can add a plugin and decide if it is in parallel or series, but if you already have a parallel path, how does this work? Some (most?) other devices limit the path to a maximum of two parallel paths, so you decide which path each plugin lives in. Some allow those paths to join and split whilst others just provide two paths, joined at the start and end. Doing this makes the logical design and user experience simpler but of course artificially limits flexibility. I am erring towards a similar limitation because it is simpler to understand and provide UI for, and anything more complex is either unlikely to be needed or may be implemented with multiple chains.
Also, should there be a limit to the quantity of plugin slots? Again, other devices limit the quantity, which makes it simpler to present and for users to understand. So the question becomes: if we lay out our plugins in a grid, how tall and wide should that grid be, or do we make it limitless and deal with the complexity of presenting this to users?
Opinions please.
16x128 It’s the most MIDI of the options.
I have started to work on a C header api.h defining the calls that a tightly bound client (like a UI or a remote access server) might access. I have created a github branch of zynthian-ui and put this in the zynlibs folder.
This is a work-in-progress and you can see how far I got by where the comments stop and where the functions stop. (I started adding functions based on the proposal above but stopped before the end to start documenting it.) It differs from the proposal where I have found a more appropriate or better optimised approach. I will retrospectively update the proposal, but please take a look and see if it looks like the right direction.
A particular variation is the idea of having a rectangular grid / graph of engines with horizontal and vertical (actually probably diagonal) signal interconnects forming the wireframe of a chain. The idea would be that you create your chain by manipulating engines within this grid but can override connections manually. Restoring a snapshot could use the grid to recreate the chain, then apply modifications to the routing graph based on entries in the saved snapshot.
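For illustration, the grid idea might boil down to something like this (a rough sketch with arbitrary dimensions, not actual api.h types):

#include <stdint.h>

#define EMPTY_SLOT 0xFFFFFFFF

/* One chain as a grid of engine slots; manual routing overrides are stored
 * separately, so restoring a snapshot can rebuild the grid first and then
 * reapply the overrides to the routing graph. */
typedef struct {
    uint32_t slot[4][8];   /* [row][column] holds an engine id, or EMPTY_SLOT */
} chain_grid;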
Hi @riban!
This is really fine stuff! I did a careful reading and have some notes:
- I would prefer “Volume” instead of “Fader”. A fader is a physical thing and can be used in different contexts for different things. You could have a rare mixing desk with faders for the balance and pots for the volume. Many small mixing desks use pots for “volume” and have no faders, or perhaps have one for “Master Volume”.
- getPeakProgramme?? Come on, mate!! Let’s use “getPeakLevel”, please!
- getPeakHoldA => What does “A” mean? Why not “getPeakHold”? OK! I know you enjoy introducing these little “audiofolk pearls”, but I want to know …
/** @brief Register a callback for mixer state changes
* @param callback Pointer to callback function
* @param bitmap Bitmap flag indication of parameters to monitor
*/
void registerMixer(void* callback, uint32_t bitmap);
Only 32 parameters can be monitored? Limiting to 16 mixchans, we still have a lot more. Or do you register to all parameters on a strip, having up to 32 strips?
/** @brief Get chain MIDI channel
* @param chain Index of chain
* @retval int MIDI channel [0..15, -1 for none]
*/
int getChainMidiChannel(uint16_t chain);
The retval should be a 16-bit bitmap. We could allow chains to listen on several MIDI channels, or all of them. Similarly for setChainMidiChannel.
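Something like this hypothetical pair of calls would express the bitmap idea (illustrative signatures, not in api.h):

/* Bit n set => chain listens on MIDI channel n; 0xFFFF = all channels */
uint16_t getChainMidiChannels(uint16_t chain);
void setChainMidiChannels(uint16_t chain, uint16_t channel_mask);

/* e.g. listen on channels 0, 1 and 9 */
setChainMidiChannels(chain, (1 << 0) | (1 << 1) | (1 << 9));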
/** @brief Add an engine to a chain
* @param chain Index of chain
* @param row Vertical position of engine within chain graph
* @param col Horizontal position of engine within chain graph
* @retval uint32_t Index of engine or 0xFFFFFFFF if engine cannot be instantiated
* @note Engine instance is instantiated with default parameters and connected to adjacent horizontal slots
* @note Replaces and destroys any existing engine at same location in graph
* @note Use special classes JOIN_INPUT, JOIN_OUTPUT, JOIN_BOTH to connect input / output of horizontally adjacent slots to vertically adjacent slots
* @note JOIN classes give hints to autorouter which may be overriden by direct audio/MIDI routing of individual signals
*/
uint32_t addEngine(uint16_t chain, uint8_t row, uint8_t column, string class);
The last parameter “class” is missing from the function doc header, but commented in the notes below. I suppose you mean the “join class”?
/** @brief Moves an engine to a new position in a chain
* @param engine Id of engine
* @param chain New chain id
* @param row New row position
* @param column New column
*/
void moveEngine(uint32_t engine, uint16_t chain, uint8_t row, uint8_t column);
What about adding “copyEngine” or “cloneEngine”?
/** @brief Get the class of engine within chain
* @param chain Index of chain
* @param row Vertical position of engine within chain graph
* @param col Horizontal position of engine within chain graph
* @retval string Class name of engine
*/
string getEngineClass(int chain, int row, int column);
We also would like to get the engine class using the engine ID as key. In fact, at the “engine” level, we shouldn’t know about the grid, or anything about chains. Let’s keep the abstraction layers as separated as possible. Of course, we could have “helper” functions when efficiency or code clarity can be improved. This could be the case.
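For example, a hypothetical id-keyed variant (not in api.h) would keep the engine level unaware of the grid:

/* Get the class of an engine directly by its id */
string getEngineClassById(uint32_t engine);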
/** @brief Get MIDI channel for control assigned to engine parameter
* @param engine Id of engine
* @param parameter Name of parameter
* @retval uint8_t MIDI channel
* @todo Do we allow multiple controls assigned to single parameter? If so we need to index them.
*/
uint16_t getEngineParameterMidiChannel(int engine, string parameter);
I think we should improve the MIDI learning mechanism. It’s not too complex and the benefits are worth it. Having several assignments to a parameter could be interesting for some use-cases, and listening and binding to a wider range of MIDI events (not CC only), of course! You have your keyboard with some buttons or pads and you would like to MIDI-learn one of your pads to some on/off parameter, no matter whether the pad sends note or CC events.
/** @brief Add currently selected preset to engine class
* @param engine Id of engine containing current configuration
* @param bank Index of bank (ignored if class does not support banks)
* @param preset Index of preset (replaces existing preset)
* @param name Name of new preset
* @note The parameters and configuration of selected engine are used
*/
void storeEnginePreset(uint32_t engine, uint32_t bank, uint32_t preset, string name);
Some engines don’t expose all the parameters: ZynAddSubFX, for instance. On soundfont engines, presets are soundfonts in the current implementation. We should think about this. On these engines we would have “Zynthian presets” that “overdub” the “native preset” with the exposed parameters. Currently it works like that in snapshots.
/** @brief Set an engine parameter
* @param engine Id of engine
* @param parameter Name of parameter
* @param value Value of parameter
* @note No change if conversion to native data type fails
*/
void setEngineParameterAsString(uint32_t engine, string parameter, string value);
We would like to know the list of labels and tick-values when they are defined. The functions would return a pointer to a constant array.
/* Physical UI
Access to switches, encoders, endless pots, LEDs, etc.
*/
This API section is currently implemented in the refactored zyncoder, which will be renamed to “zyncontrol”. The very basic public API is like this:
//-----------------------------------------------------------------------------
// Zynswitch API
//-----------------------------------------------------------------------------
int get_num_zynswitches();
int get_last_zynswitch_index();
int setup_zynswitch_midi(uint8_t i, midi_event_type midi_evt, uint8_t midi_chan, uint8_t midi_num, uint8_t midi_val);
unsigned int get_zynswitch(uint8_t i, unsigned int long_dtus);
//-----------------------------------------------------------------------------
// Zynpot common API
//-----------------------------------------------------------------------------
int get_num_zynpots();
int setup_rangescale_zynpot(uint8_t i, int32_t min_value, int32_t max_value, int32_t value, int32_t step);
int32_t get_value_zynpot(uint8_t i);
uint8_t get_value_flag_zynpot(uint8_t i);
int set_value_zynpot(uint8_t i, int32_t v, int send);
//-----------------------------------------------------------------------------
// Zynpot MIDI & OSC API
//-----------------------------------------------------------------------------
int setup_midi_zynpot(uint8_t i, uint8_t midi_chan, uint8_t midi_cc);
int setup_osc_zynpot(uint8_t i, char *osc_path);
//-----------------------------------------------------------------------------
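For illustration, a client might drive this basic API like so (a usage sketch based on the signatures above; the values are arbitrary):

setup_rangescale_zynpot(0, 0, 127, 64, 1);  /* zynpot 0: range 0..127, start at 64, step 1 */
setup_midi_zynpot(0, 0, 74);                /* send MIDI CC 74 on channel 0 */
int32_t v = get_value_zynpot(0);            /* poll the current value */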
Of course, this is only the basic API. The full API includes low-level configuration for chips (GPIO expander MCP23017, low-frequency ADC (ADS1115), etc.), step encoders, analog infinite pots, and it’s complemented with the zynaptik and zyntof libraries, when these hardware modules are available. Recently I added a little driver for controlling the LM4811 headphone amplifier.
Currently we have “zyncontrol_xxx.c” files that implement the “init_zyncontrol” function, which initializes and configures the hardware for several configurations:
- zyncontrol_vx.c => Kits from v1 to v4.5, including zynaptik, zyntof, etc. Config is taken from environment variables (envars).
- zyncontrol_z2.c => Z2
- zyncontrol_i2c.c => @riban’s zynthian. It must be created.
This “constructor” or “configurator” should be loaded dynamically instead of statically linked as it is now. Improving this is not difficult.
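A minimal sketch of what dynamic loading could look like with POSIX dlopen/dlsym (the library path and the build into per-configuration shared objects are assumptions):

#include <dlfcn.h>
#include <stdio.h>

typedef int (*init_zyncontrol_t)(void);

int load_zyncontrol(const char *libpath) {
    void *handle = dlopen(libpath, RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }
    /* Resolve the per-configuration constructor */
    init_zyncontrol_t init = (init_zyncontrol_t)dlsym(handle, "init_zyncontrol");
    if (!init) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return -1;
    }
    return init();  /* initialise and configure the hardware */
}

/* e.g. load_zyncontrol("/zynthian/lib/zyncontrol_z2.so"); */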
You can check the full headers on the zyncoder repo.
Final thought:
@riban, we should move to C++ and design an OO API. If well done, performance will be almost the same and we will gain in API and code clarity. Also, the current Python code is OO because our model maps very well onto OO, so porting from Python OO to C++ would be easier. We should …
Cheers!
Thanks @jofemodo mate for such a deep dive into this. I will apply as much of your observations as I directly agree with, consider the rest, and maybe discuss further here. Let’s see if I have any answers now…
A fader is not a slider. A mixing desk will commonly have a gain control which adjusts the input gain, some processing and routing, then a control of how much signal is mixed into one or more summing buses. That last control is commonly called a fader because it attenuates (fades) the signal rather than amplifying (boosting) like the gain section would. The fader may be on a slider or a rotary control or switches or ethereal mind control, just like other controls. Yes, there are horizontal pan/balance controls on some devices, but these are not faders; they are still pan/balance, just implemented with a slider pot. Anyway - history lesson over - I don’t like volume as it is ambiguous and inaccurate for this control. If fader is undesirable then level may be acceptable. Let’s leave it for a few days and see what storm of abuse we get from the community.
Peak Programme is just something that is ingrained in me from my time at a particular broadcast corporation. It rolls out of my mind and through my fingers onto the screen without me thinking of it. I appreciate it may be a little quaint or esoteric. Maybe there might be some ambiguity with level as that may be used to control as well as monitor the signal with different effect. Programme refers to the content - anyway, you are right and I have changed to peak level.
getPeakHoldA was an editing error. I did functions for each leg (A & B or left & right) but changed it without changing the title. Fixed in my working copy now - thanks. (BTW A/B is also one of those quaint things but it does have advantages over L/R, e.g. you may not actually be using two channels of audio as a stereo pair. Also, you may have more than two channels, e.g. surround sound transport may refer to A,B,C,D,E…)
I need to describe the bitmask for registering mixer callbacks in more detail. This is not registering for individual channels (although maybe that may be beneficial - I am not convinced though). It defines which elements need to be notified, e.g. channel controls, level (programme) meters, etc. There is infrequently changed data, like mute, which you may want to know about in a controller, whilst fast-moving data such as audio level may swamp your client or the transport mechanism. So this does not limit the quantity of channels, but we can consider whether we need a different mechanism for filtering what is registered for notification.
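To make that concrete, the bitmap flags might look something like this (hypothetical names and values, purely to illustrate element-based filtering; onMixerChange is a hypothetical client callback):

#define MIXER_NOTIFY_FADER   0x00000001  /* fader / level control changes */
#define MIXER_NOTIFY_MUTE    0x00000002  /* mute state changes */
#define MIXER_NOTIFY_PAN     0x00000004  /* pan / balance changes */
#define MIXER_NOTIFY_METERS  0x00000008  /* fast-moving peak level data */

/* Register for slow-moving control changes only, ignoring meter updates */
registerMixer((void*)onMixerChange, MIXER_NOTIFY_FADER | MIXER_NOTIFY_MUTE | MIXER_NOTIFY_PAN);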
getChainMidiChannel actually needs more consideration. I skipped past this but should have (and now have) added a todo to review it. We currently have the concept of a single MIDI channel controlling a single synth in each layer. We then bolt on a mechanism to clone channels to allow one MIDI channel to control other chains, and another mechanism (stage mode) to allow any MIDI channel to control any one synth engine. I wonder if there are scenarios where we might want different MIDI channels controlling different engines within one chain? If so, the MIDI channel association becomes focused on the engine, not the chain. I see your idea of a bitmask makes some sense to allow any quantity of MIDI channels to control a chain, but it needs to be more scalable to accommodate different MIDI (or other control protocol) pipes. That might be different MIDI 1.0 inputs or a pipe within a MIDI 2.0 connection, etc. I have added a todo to review this in more detail. We need to do some analysis of core functionality changes here.
addEngine was missing the doc that explains that class is the name of the engine class being added to the chain. This is not the “join class”. It is, for example, “fluidsynth”.
I have added copyEngine. Good call!
getEngineClass was missed when I refactored to use an engine id to access each engine. We use getEngine to obtain that id from an engine’s location. Just an error when I was refactoring - fixed now, thanks.
MIDI learn - yes, this is an area that may benefit from an additional level of abstraction. We shouldn’t limit ourselves to MIDI control - indeed the core code already supports OSC control. I like the idea of having a control layer with plugins for protocols, e.g. a synth’s filter cut-off may be controlled by a signal in the control layer which may be assigned to MIDI CC, OSC path, CV input, etc. Similar to how we have a graph for audio signals, there should be a similar graph of control signals, not just MIDI signals. This is additional abstraction that needs some consideration to avoid over engineering but provide sufficient extensibility. For now I have added a todo to consider abstracting control layer. Let’s come back to this…
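A very rough sketch of what such a control-layer binding could look like (none of these types or calls exist yet; purely to illustrate the abstraction):

#include <stdint.h>

typedef enum { CTRL_MIDI_CC, CTRL_OSC, CTRL_CV } ctrl_type;

typedef struct {
    ctrl_type type;
    union {
        struct { uint8_t channel; uint8_t cc; } midi;  /* MIDI CC source */
        struct { char path[64]; } osc;                 /* OSC path source */
        struct { uint8_t input; } cv;                  /* analogue CV input */
    } source;
} ctrl_binding;

/* Hypothetical call: bind any control source to an engine parameter */
void bindEngineParameter(uint32_t engine, string parameter, ctrl_binding *binding);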
I purposefully avoided touching the low-level data model for snapshot and preset store/restore. At the API level we need to expose the parameters that we can / want to and provide a mechanism for storing and restoring the current configuration. If this involves an overlay due to some lower-level technical constraint then we (continue to) implement that within the engine, hiding it from the API. Maybe I misunderstood your point, but the example with ZynAddSubFX would work similarly to how it does now, i.e. saving a preset would store all parameters required to be able to fully restore the preset, maybe as a reference to an existing preset plus customisation - the client doesn’t need to know that.
I have tried to avoid pointers to structures to provide as flexible and abstract an interface as I can. Defining language-specific constructs starts to constrain the API, e.g. how would that be presented over an HTTP link? To resolve such issues I have offered access to the count of entities and the data for each entity, allowing a client to iterate the data set obtaining the required data. I have described getting tick-values in a few ways, including enumerations and steps. These are within the engine class rather than the engine. You get/set an engine instance’s parameter values but those parameters are described (name, ticks, etc.) by the engine class. Parameter description is common for all instances. I appreciate that some methods may need to move between the sections in the header file. I tried to keep them in the relevant places but there may be some overlap, or simply I put them in the wrong place. Let’s review once it is more complete and we start to use it - these things will stand out more obviously.
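The iteration pattern looks roughly like this (getEngineParameterCount and getEngineParameterName are assumed names based on the description above, not confirmed api.h signatures):

uint32_t count = getEngineParameterCount(engine);
for (uint32_t i = 0; i < count; ++i) {
    string name = getEngineParameterName(engine, i);
    /* query ticks, labels, etc. from the engine class for each parameter */
}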
Physical UI - similar to mixer, the implementation may be in libraries but I am trying to expose everything in one place for the API. (Maybe it would benefit from being in several files but that is simple to do later.) I ran out of time and had to stop so didn’t look at the new zyncoder code in much detail. I will continue… As mentioned above, we may want to implement a control layer, in which case the linking to MIDI might be abstracted. Indeed, the physical interface may become a client to the control layer - but let’s think about that… Your point about the low-level access to hardware makes me think this deserves its own API file. Sometimes we must divide and conquer! (So I am skipping this section of your response because it is in the too-hard pile. We can circle back to that.)
C++ is a nice programming language. It gives a lot of functionality via libraries that feel clumsy or like hard work to implement in C, especially around dynamic data. (C programs tend to use fixed-size data structures, which imposes constraints that can be wasteful or limiting.) These enhancements in C++ do come with an overhead though. I worry that my C++ code tends to become lazy and uses features that may be less efficient. In C you tend to see how hard the processor is working or the memory it is using, whereas in C++ some magic fairy just does it for you and you don’t see how hard she is working, i.e. how (in)efficient the code is. My C++ code tends to look a lot like C code with sprinkles of helpers where something feels a bit too time-consuming to implement myself, e.g. someone has already created vectors so why do it again? That said - I am becoming far more conscious of this and checking that the appropriate classes are used. BUT - that is not why I chose a C header. C++ obfuscates its library access in different ways depending on the compiler, which is why C++ libraries tend to expose their public methods via a C interface - it becomes more portable. It is easy to wrap a C lib in Python, for example, or to interface it with other programming languages. We can write our code in C++ but I suggest we expose the library calls via a C export mechanism.
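In other words, something like this (a sketch of the standard extern "C" pattern; the const char * parameter is an adaptation of the generic string type used elsewhere in the header):

#ifdef __cplusplus
extern "C" {
#endif

/* C++ implementation behind a stable, portable C ABI */
uint32_t addEngine(uint16_t chain, uint8_t row, uint8_t column, const char *engine_class);

#ifdef __cplusplus
}
#endif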
Thanks again for the detailed review. I will continue to update api.h, completing the remaining sections and starting to consider those awkward bits like control layer.
I have done some more work on api.h, making changes related to @jofemodo’s feedback and adding more functionality.
I propose we treat Zynthian as a MIDI 2.0 device with 16 virtual cables, each supporting 16 MIDI channels. We then route and filter internally based on these 256 MIDI channels. We can then define the channels each chain will listen on with a map of 16 x 16-bit words: 1 bit per MIDI channel, 1 word per virtual cable. We can add a MIDI routing matrix to allow any quantity of external (physical) and internal (virtual) cables to be interconnected. I have added the concept of 16 virtual MIDI cables for each chain to the API, but at a recent development meeting we agreed to defer implementation of this for a while until we have effort available, so the core will initially continue to treat all physical and virtual cables as a single source of 16 MIDI channels.
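The listen map could be as simple as this (hypothetical layout, not yet in api.h):

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t cable[16];  /* cable[c] bit n set => listen on channel n of virtual cable c */
} chain_midi_map;

static bool chain_listens(const chain_midi_map *map, uint8_t cable, uint8_t channel) {
    return (map->cable[cable] >> channel) & 1;
}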
For MIDI-learn I propose we allow an arbitrary quantity of controllers to be mapped to each engine parameter. Each mapped controller has a type, e.g. MIDI, CV, etc. and may be accessed via appropriate functions using the controller’s index. Take a look at how I have done this in api.h from getEngineParameterControls onwards. Analogue control voltages are mapped similarly to MIDI CC. I have left a TODO to consider how we filter, range and scale such controls. OSC doesn’t get mapped in the same way as MIDI. Each parameter has an OSC path that is created dynamically based on the parameter name and the engine’s position within each chain. The OSC path is exposed to the API but any OSC client can control any parameter directly from the parameter’s OSC path.
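For example, the dynamically built path might look something like this (the exact path format is a guess, not the actual implementation):

#include <stdio.h>
#include <stdint.h>

void build_osc_path(char *buf, size_t len, uint16_t chain, uint8_t row,
                    uint8_t col, const char *param) {
    /* e.g. /chain0/engine_0_1/cutoff */
    snprintf(buf, len, "/chain%u/engine_%u_%u/%s", chain, row, col, param);
}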
I think that configuration of hardware should be excluded from this API. The API should allow control and monitoring for user workflows but not be used to configure the hardware. That should be done during construction and setup. We can maintain the current use of configuration files and environment variables for now. Webconf already interfaces well with these. If we want to rationalise that further we can do so, but I would prefer to exclude it from this API and defer any such consideration until a later date. To that end, I have removed the ability to set the quantity of hardware UI devices. You can still get these quantities within the API.
I have added some of the UI API but don’t yet fully understand it so there are placeholders for fixes.
I used a generic string datatype which needs to be considered. We could use a string library or present strings as C-string char arrays. The latter tends to limit the character set. This may resolve as we start to implement concrete examples.
I need to add a lot of register-for-notification methods to make the API more event driven. Maybe that will be my next task… Done! I also resolved several TODOs and consolidated audio/MIDI play/record into common functions, e.g. startRecorder(MIDI_PLAYBACK, "myfile.mid");
What about GAIN for inputs and LEVEL for outputs?
As a feature request, I would ask to add to zynmixer the ability to create different output mix profiles for those who own a multichannel USB soundcard.
Peak level: how/where is it computed? I’m curious.
Gain is the measure of amplification. I would prefer to only use “gain” where it describes an amplifier. You could add a gain plugin to a chain if desired.
Peak level is calculated within the mixer library. Each sample is measured for amplitude. There is a ballistics algorithm to slug the response and give a slower than real-time envelope to the measured value. This has fast attack and slower decay, like you would expect from a meter. The peak level shows the highest level in the past couple of seconds.
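A minimal sketch of that kind of ballistics (illustrative only, not the actual zynmixer code):

#include <math.h>

#define DECAY 0.99992f  /* per-sample decay: roughly 2 s fall to -60 dB at 44.1 kHz */

static float peak = 0.0f;

void measure_sample(float sample) {
    float level = fabsf(sample);
    if (level > peak)
        peak = level;      /* fast attack: jump straight to a new peak */
    else
        peak *= DECAY;     /* slow exponential decay, like a meter falling back */
}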
I have tweaked the high-level API (first post) to remove the ability to update the quantity of hardware elements. This would be done via static configuration, e.g. a configuration file. Hardware should be considered static for the purpose of the API. A physical change of hardware would require a manual change (possibly with a configuration tool). This change comes from work being done on the physical interface elements. (Yes - this is actually being worked on!)
Sorry mate, I’m not following you. Could you elaborate a little bit?
Sure!
The hardware of a device seldom changes and when it does it needs configuration. You don’t add or remove buttons often - usually just at design / build time. So I don’t think there is much (any?) advantage to being able to configure the quantity of buttons or lower-level GPI via the API. I don’t think we should set the hardware configuration via API. This should be done with a static configuration such as a configuration file. We could still use a configuration tool like webconf to edit that configuration directly but not expose it to the API.
How do we deal with a plug and play world with devices coming and going with the inevitable USB comings and goings?
If the API is to support third-party implementations of status displays and the like, then presumably the API requires some notion of expected versus actual device availability, and async calls that can recognise this? The way multiple devices fall back is probably user-definable as part of config, but ultimately an overall connection status should be accessible, and one might want to talk directly to an interface. … Plug & play I2C?
I2C isn’t and really should not be plug and play. USB is a different story but we don’t currently support such hardware interfaces. My comment relates to core hardware, i.e. that which makes up the base unit.
zynthian-steel.local still has a socket for it!
It’s something that might make it to the API docs as a TBA?