Meanwhile somewhere to the East

That gets a like all on its own…

2 Likes

Another quirky fact I learned about the B3 from some youtube or other: percussion only works on the first keypress from all-up, so if you want to get that sound you need to be very disciplined about keeping your hands “floating” with the finger off each key before you hit the next. Another case where, like you on the Jon Lord stuff, I didn’t understand why it “wasn’t working”. :>
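
A minimal sketch of that re-trigger rule (my own toy Python, not B3 or Zynthian internals; the class and names are made up): percussion fires only on a key-down that arrives while no other key is held.

```python
# Toy model (invented, not real organ code) of the B3 percussion rule:
# percussion triggers only on a key-down from the "all keys up" state.

class PercussionGate:
    def __init__(self):
        self.held = set()  # currently depressed keys (MIDI note numbers)

    def note_on(self, note):
        fires = len(self.held) == 0  # percussion only from all-up
        self.held.add(note)
        return fires

    def note_off(self, note):
        self.held.discard(note)

gate = PercussionGate()
print(gate.note_on(60))   # True  - first key from all-up: percussion fires
print(gate.note_on(64))   # False - legato overlap: no percussion
gate.note_off(60)
print(gate.note_on(67))   # False - 64 is still held down
gate.note_off(64)
gate.note_off(67)
print(gate.note_on(60))   # True  - hands "floated" off first: fires again
```

This is why legato playing kills the percussion: any overlap between releases and the next press keeps `held` non-empty.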

If you can get a replacement through the front door, I cannot recommend the VR series highly enough. Mainly, they put effort into making sure it plays like a B3 should, which is reason enough (I haven’t tried out a Nord yet and I’m very curious how they work, actually - any Nord owners in the room care to comment?), but it’s also, as I said, very hackable with the free ctrlr module; once you get the secret skeleton key software it punches way, way, WAY above its weight class.

Edit: The short version of the long post below is, I believe that if one of us won the lottery and handed @jofemodo a budget that covered a couple years of his and 2-3 more talented people’s full time work, that he would deliver something every bit as good as the device being hyped in OP, and do it for less.

This is true of so many. My friend Scooter, currently a Eurorack addict, said the other day “I’m glad we cannot access life statistics like time spent setting up gear vs time spent playing said gear.” We would both go down as huge fails, I think, especially in comparison to some of my three-hours-of-scales-per-day bluegrass friends.

No, synthesizers are no indicator of musical ability, and there is indeed, IMO, an entire subculture of people doing something with them that is one degree removed from musicianship. Teenage Engineering is a great example of this - they make these weird Fisher-Price devices and charge ludicrous sums for them, the build quality is not there, and it’s thoroughly enshittified, calling home to their servers, probably putting all your performances through their AI training.

Mostly, though, they make toys, not instruments. But if Teenage Engineering is drinking a six pack and smoking a joint at a party, Eurorack is fentanyl. Yes, that is how synthesizers used to be done, because - and this is the important bit here - there was no other way of doing it. We moved on from that model quite joyfully, much as society moved on from war with muskets. But some people like putting on Union and Confederate uniforms and running around in fields shooting blanks at each other.

As to synthesis power, I think that’s a nothingburger. From the looks of it, you can stack up 16 voices with that Montage device, and yes, that is some serious CPU muscle under the hood, to be sure.

But, supposing you only need two, three, maybe four of those voices, not all sixteen at once?

I think the computing gap, if you do an engine-to-engine, single-voice Pepsi Challenge, is not remotely as stark as you’re making it sound. If we could look at the code we would know, of course, but we have to take their word for it that their “advanced” algorithms are actually some new, deeper technological leap with accompanying new, faster technology, rather than merely another few years of work on the code plus the standard Moore’s Law gain in available compute that you would expect in the years since the last version.

This isn’t even so much me being a Zynthian zealot as me pointing out that these “elite” devices are no longer really that elite, now that everything is a digital model. The engineering behind the analog devices of the 80s was genuinely impressive, and indeed those devices had some real problems when they went bad as well. But engineering the original Prophet 5 was absolutely the work of a lot of strong brains.

The Montage was more a set of practical concerns: decide on a feature set, spec out a CPU and supporting chips that deliver that feature set, do circuit design just like Jofemodo does, and put the software together - most of which is the same software as in the last three to five devices of this class, with a few updates.

Sequential Circuits, Moog, they had engineers. Modern devices only require IT guys and programmers. The difference between Montage and Zynthian is available budget and human hours.

PureData vs Max/MSP - is one better than the other? Not really. If Max had a genuine advantage, I’m positive the Organelle would use Max instead. We are moving away from the age of needing capitalists to get things done, hopefully, cause they really can’t get things done remotely as efficiently as a bit of open hardware and the community of Libre devs (Pianoteq excepted) who provide all the engines in Zynthian.

1 Like

I thoroughly agree with the entire post above, including that little snippet I quoted, but, to add to it, I think ARM and RISC-V will be duking it out over the next decade or century for ‘global CPU supremacy’, particularly in the Single Board Computer space. And I think the number of cores will quickly increase - so when we can afford to devote cores to it, we will have a ‘core per voice’ machine!
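
A toy sketch of that ‘core per voice’ idea (everything here is invented for illustration, not any real engine’s architecture): each voice is an independent DSP job, so a process pool spreads voices across cores and the mix is just a per-sample sum.

```python
# Hypothetical "core per voice" render: one oscillator stands in for a
# full per-voice DSP chain; a process pool parallelises voices across cores.

import math
from multiprocessing import Pool

SR = 48000  # sample rate, arbitrary for this sketch

def render_voice(args):
    freq, n_samples = args
    # one voice = one sine oscillator here; a real engine runs full DSP per voice
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n_samples)]

def render_block(freqs, n_samples=256):
    with Pool() as pool:  # one worker per available core by default
        voices = pool.map(render_voice, [(f, n_samples) for f in freqs])
    # mix: sum the per-voice buffers sample by sample
    return [sum(samples) for samples in zip(*voices)]

if __name__ == "__main__":
    block = render_block([220.0, 277.2, 329.6], n_samples=64)
    print(len(block))  # 64 mixed samples
```

Since the voices never talk to each other until the mix stage, the work parallelises almost perfectly - which is what makes the ‘core per voice’ future plausible.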

1 Like

I definitely concur with this vision @tunagenes, and it is indeed more than likely that ARM platforms, either SBC or mounted in larger motherboard-centered systems, could prospectively dominate the market of processors in the next decade :slightly_smiling_face:

1 Like

I am certainly on the same brainwave with you @jtode about this, and that is also why I deliberately eschewed mentioning discrete-circuitry analog synthesis in my argument, since in that case we would be talking about a completely different kind of electronic engineering, which back in the 1970s and 1980s required a lot of real genius just to be implemented, with the then-available technology, at even vaguely affordable price points.

As far as digital synthesisers are concerned, instead, I am also with you when you point out that - since they are essentially computers running DSP code, nested inside a physical interface arranged as a box or keyboard - the only real difference between Zynthian Labs and mid-tier industrial companies like Waldorf, Novation, Modal or ASM (I exclude Sequential from the list, because they operate in a sort of Middle World between analog and digital) is the available regular staff and the number of financially sustainable paid hours.

And yes, it is also noticeable that most releases of digital synthesisers from the Japanese Big Three follow an obvious scheme: underlying OS upgrades built on the same well-oiled algorithm content, constantly augmented in power by the inevitable growth of computational resources, at the same price point as the previous generation of hardware.

In a less Zynthian-complimentary mood, I would highlight that this project, especially with the prospect of a foreseeable expansion of RPi processing power in the near future, will need, in the short-to-medium run, to convince programmers of synth and FX plugins to develop/port more new software, either free or commercial, for 64-bit ARM Linux.

This is also true for sample libraries, which are notoriously weak territory for the Linux music-making community. As of now, there is lamentably no way for Zynthian users to tap into the completely free treasures offered by the likes of Spitfire Audio or Red Crow Hill, who, arguably discouraged by a relatively limited user base, do not provide even a single Linux version of their proprietary sample players.

I suggest that this is an area of potential improvement, which sooner or later should be tackled somehow by the Zynthian community.

The brief period of four and such voice devices, and the fairly limited palette they produced, exposed those limits pretty quickly, especially when you put genuinely competent musicians on them.

Ironically it was probably the sequencer which discovered a long-term creative niche. As a techie nerd of long standing I can remember trying to get my head round the TTL-based note-stealing mechanism of a Powertran Polysynth, which rendered the device unplayable for anything that involved Hammond-like percussive stabs, and not in a nice way.
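
For anyone curious what note stealing looks like in software rather than TTL, here is a hypothetical ‘oldest note’ allocator sketch (invented names and logic for illustration, not the Powertran’s actual mechanism):

```python
# Toy "oldest note" stealing allocator: when more keys are held than there
# are voices, the voice of the longest-held note gets reassigned.

from collections import OrderedDict

class VoiceAllocator:
    def __init__(self, n_voices=4):
        self.n_voices = n_voices
        self.active = OrderedDict()  # note -> voice index, in press order

    def note_on(self, note):
        if note in self.active:          # retrigger: keep its voice
            self.active.move_to_end(note)
            return self.active[note]
        if len(self.active) < self.n_voices:
            used = set(self.active.values())
            voice = min(set(range(self.n_voices)) - used)  # lowest free voice
        else:
            # steal the voice of the oldest held note - this is what makes
            # fast percussive stabs fall apart on a 4-voice instrument
            _, voice = self.active.popitem(last=False)
        self.active[note] = voice
        return voice

    def note_off(self, note):
        self.active.pop(note, None)

alloc = VoiceAllocator(4)
notes = [60, 64, 67, 71, 74]  # five keys down on a 4-voice machine
voices = [alloc.note_on(n) for n in notes]
print(voices)  # [0, 1, 2, 3, 0] - the fifth note steals voice 0
```

With percussive stabs, notes pile up faster than they release, so the steal fires constantly and cuts tails off audibly - hence "not in a nice way".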

It was fun to build thou’

I never got fully clear on whether we are able to load commercial sample libraries aimed at Kontakt - I seem to recall that there is a way? Anyways, I just needed a good-enough piano for rock band stage use, and Pianoteq, even at the reduced sample rate, sounds good enough that it convinced the other piano player in my band to buy herself a Zynthian. That’s right, my band has two Zynthian users - that’s gotta be a first of some sort, right guys? gaize?

I am not highly swayed by commercial packages not making themselves available here, but I am also not all that concerned with having the absolute best possible samples and whatnot - I just need to know whether I’m audible in the band’s overall mix, and in the right tonal zones for my instrument. I have a tendency to step on the bass player’s frequencies sometimes, when I’m playing B3, for instance.

I think there is a cultural obsession among gearhead musicians with “crafting a tone” which, let’s face it, as far as audiences are concerned, is like polishing your silverware with an electron microscope. If you’re playing the right notes at the right time with even vaguely the right timbre, they do not care whether you’re playing a real sampled pan flute or a two-operator FM patch. Guitar players, of whom I am also one, spend thousands of dollars on boutique tubes and amp mods or vintage reissues or fancy pedals, and as far as an audience is concerned it’s “distortion” or “effects”.

I’m sure that if we put Spitfire Audio’s product and whatever is Zynthian’s closest version of it through a meter and do bit-level measurements, that theirs will be objectively better by any measure one can choose, be it fidelity or whatever. I’m not claiming that ours is better. But, show me a single audience member from one of my shows who can tell whether I’m using their stuff.

There is quite a large gap between what the technolusting mind desires, and what a person who needs to make music requires.

four and such voice devices

I think we might be talking about different things, are you referring to Polyphony? Cause yah, 4-voice polyphony is kinda wank.

I was referring to being able to “stack” different patches on a device, like how we can put a piano and a synth on the same chain, so’s you play a piano song normally and it makes strings for you, that kinda thing.
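
That stacking idea can be sketched in a few lines - this is a made-up toy, not Zynthian’s actual chain code: one incoming note fans out to every engine in the layer.

```python
# Hypothetical patch-layering sketch: all engines in a "chain" listen to the
# same MIDI input, so one keypress sounds on every stacked patch at once.

def piano(note, velocity):
    return f"piano: note {note} vel {velocity}"

def strings(note, velocity):
    # a layer might soften the strings relative to the piano
    return f"strings: note {note} vel {velocity // 2}"

class Layer:
    def __init__(self, engines):
        self.engines = engines  # stacked patches sharing one MIDI input

    def note_on(self, note, velocity):
        # one keypress -> one event per stacked engine
        return [engine(note, velocity) for engine in self.engines]

layer = Layer([piano, strings])
print(layer.note_on(60, 100))
# ['piano: note 60 vel 100', 'strings: note 60 vel 50']
```

The CPU cost scales with the number of stacked engines, which is why a 16-deep stack is where the "serious muscle under the hood" claim actually bites.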

If I took in the video properly, that Yamaha beast can stack 16 voices in a patch, it has 16 simultaneous working patches, if I read it right, and that is some impressive computing. They definitely packed more than just a Pi under the hood of that thing.

But it’s also not a feat of luthiery, or even instrument programming - it’s just an IT job. “Need to play this many patches at once? We need this many cores, this many channels of audio, a matrix mixer with this many ins and outs…” it’s a laundry list, and all such devices are just laundry lists.

Zynthian is also a laundry list, but one being delivered with sooooo much less necessary capital. Imagine if we had sufficient money to even pay for a year or two of a few [more] competent engineers’/coders’ effort on the project. It’s not so much that our device is better as how much farther back on the racetrack it started, and how it absolutely is a peer of any device of similar spec, and light years ahead of most devices at its price point for capability (perhaps not UI).

edit: a few more. A few more.

Yep, sorry.

I am certainly not either, but, as a mostly classical composer in overall writing approach, there are times when I would like to test the Zynthian in a virtual orchestration scenario (why wouldn’t I?), taking advantage of excellent free and non-commercial sampled instruments (Labs and Vaults, to name a few), which are - by received aesthetic criteria - superior in usability and expressiveness to the average soundfont collection, simply due to the inherent technical nature and design philosophy of the chosen archive format. I will not expand here on the burden, and the allocation of otherwise compositional time, required by the task of building a reliable expression map with normalized dynamics from a patchwork of combined soundfont instrumental articulations.

While this is an absolutely legitimate and possibly widespread employment of the Zynthian, I would honestly struggle to call it the reference use-case.

I would like to point out that the most prominent composers and conductors of the last century and a half have spent countless hours “crafting the right tone” of their highly trained professional ensembles. Without this attention to the sonic means engaged in rendering a given musical work through a specific kind of performance, we would have been denied the famous velvety tone of the Berliner Philharmoniker’s string section in the Mahler symphonies, or - say - the otherworldly melange of Voces 8 singing the music of Whitacre under the baton of the composer himself. This is equally true for the unique distorted guitar colour of Terje Rypdal in one of his crossover ECM recordings, or for the signature character of Tangerine Dream’s synthesiser tones (on which Froese, Baumann and Franke invested disproportionate amounts of working hours).

There are unfortunately other musical contexts, where the poor performer is not afforded this kind of freedom.

I would rather reverse this statement in the following order:

“There is quite a large gap between what a person who needs to make music desires, and what a technolusting mind requires”.

This is precisely why, sometimes, people in this forum suggest - usually in a respectful manner - possible improvements or new applications of the workflow for the Z-named open-source musical tool, whose adopters are neither supposed to be IT technicians in order to take advantage of its innumerable virtues, nor expected to take part in an ideological war between contrasting paradigms of technological innovation.

Speaking as a tech-head (significant Zynthian developer) and an artist (amateur musician), I will let you know that @jofemodo and I try our best to make Zynthian a musician’s / artist’s tool rather than a technowizard’s toolkit. On the way to these optimised workflows we have to go through the drudgery of creating the technological foundations, and because we develop in the open (see everything at GitHub) and are keen to share our work early, you poor souls often have the opportunity to test suboptimal or unfinished work.

We want this to be really simple to use but very flexible. Managing the balance between complexity and complication is a challenge we have all the time.

3 Likes

Opportunity eh…?

You do know the peasants are revolting…?

(I hope nobody is reading any rancor into this discussion, by the way - there is none on this end, I’m having fun, let me know if anyone else is not…)

Classical music is indeed a very different world from Rock’n’Roll, and also from the world of working class musicians who do bars, weddings, events, etc. I don’t do indoor gigs anymore cause covid, and also, I do not make my living with music. But most of my friends in this world do, and I have done it in the past.

Now, when you talk about crafting a tone in the classical world, if I read you correctly, you’re referring to the tone of a symphony (swap in the proper word; I have watched all of Dr. B’s Music Theory videos on YouTube but that is the extent of my formal training; what I’m referring to with that word is “many humans playing different instruments together”). You would definitely want a “General MIDI” device that represents all those instruments fairly accurately, cause you are selecting from a huge palette of many tones.

But we are doing an apples/oranges thingy here, assuming I read you correctly. When I talked about musicians from my world who do that, I mean people who obsess over their guitar signal, own ten different distortion pedals and actively change between them at gigs, etc. I also place modular synth kids in this category, but that’s an opinion.

Yes, there are better and worse distortion pedals, and yes, there are tonal differences between them, and if you’re an actual rock star with a guitar tech on staff and so forth, you might want to dedicate an extra life’s worth of effort on that, especially if you’re playing original music and are known for a certain thing. The Edge, for instance, is an artiste of the echo pedal, so he’s probably got very specific requirements for that. Totally legit.

Any competent guitar player can make any properly-intonated guitar sound good, though. Spending $4k instead of $400 isn’t actually gonna do that much for your “tone”. If you’re like me, and you’re playing Stones, Beatles, Tommy James and the Shondells, for people dancing at their kid’s bar mitzvah or whatever, what you need in order to do that job well is a guitar, an amp, a dirty and clean channel, reverb. It’s not in the wires - it’s in your fingers. People don’t care whether your guitar sounds exactly like a Les Paul with the bridge pickup selected through an original Fender Bassman like on the recording - they care whether you do that cool lick during the solo.

At a certain point I consciously let go of my finicky behavior about my guitar tone, but when I first started learning keys, I was pretty obsessed with getting exact sounds when I was learning a tune - my comment about pan flutes, for instance, refers to my attempts to get a pan flute sound for a cover of a song by a Canadian band called Red Rider. I was getting frustrated with it and had a conversation with another keyboard player, and he said basically what I said above, and it was… incredibly liberating. A whole bunch of expensive “needs” just sorta dropped away.

Your desire for accurate reproduction of the tonal characteristics of a whole symphony is indeed a much taller order than a nice-sounding patch for a virtual synth or amp, so sure, you might find one of those workstations like in OP more suited to your purposes. If you need 16+ voices all at once, you are definitely gonna need more power than a Pi is gonna give you.

1 Like

This is noticed. There does not seem to be any buyer’s remorse round these parts. :>

I was at Tangent Animation for its lifetime; we were an animation studio that used Blender instead of very expensive commercial packages. We produced NextGen and Maya and the Three, both on Netflix, on this toolset. Then we died, due to covid and other factors.

But while we were going, the large sums of money that we would otherwise have sent to Autodesk every year were sent to the Blender Foundation instead, to fund the development of features we wanted in the software.

It seems to me that the same economics could apply to Zynthian - it’s not completely gratis, because there’s associated hardware which is not trivial to build or buy, but again, even paying for the Pianoteq license I came in under $1k - now, a 76-note Nord Stage 3 at my local shop is $6600. Even if I had to spend another grand on a keybed/MIDI controls to get a good feel, I’d still have saved myself $4600.

You guys don’t have a Zynthian Foundation, sadly, so there is no avenue for me to hypothetically send you money to add the specific midi filter function we’ve been talking about in another thread, rather than handling it myself, but I’m letting you know that this economic model exists, and the Blender Foundation received at least a couple hundred thousand per year from my employer between 2015-2021 based on exactly the same relationship that I have with you.

Again, not saying I have money to send you to solve my problem right now, not employed. But I think it’s worth exploring, cause again, I saved myself $4600 and got a device that handles all my needs, other than the keybed of course, and whatever it can’t do, I 100% know that I can make it do it, either with love or money (you’re not the only programmers I could hire, this is Linux lol).

2 Likes

Hey @riban :slightly_smiling_face:

I don’t know exactly to whom your remark is addressed, but I don’t think that anyone around here underestimates, or takes for granted, the tremendous work brought forth by you and @jofemodo on Zynthian development, probably far beyond your theoretical duties in terms of working hours.

It seems obvious that this project is, for Zynthian Labs, more a labour of love than a straight job, especially because in your unusual business model you do listen to the customer base and offer real on-demand help for a whole spectrum of usage scenarios.

As for me, if I weren’t sincerely engaged by this device and platform, I would not spend on this forum, every now and then, part of the precious time which I painstakingly manage to reserve for writing music, away from my other “first” job.

I would like to point you to a previous comment of mine in this thread, about the Zynthian project’s philosophy, which may have slipped under your radar:

I believe that, in an open space of discussion about an open project, every constructive suggestion - provided it is advanced within terms of politeness and respect - should be received positively by definition. It is however perfectly understandable that, at times, managing such a multifaceted enterprise with a small team might make you feel that an undue degree of user expectation is targeted at the developers, and not always in a coherent way.

5 Likes

How that is managed is basically the ethos of the zynth…

We all have individual objectives and desires and I know as far as myself and @riban are concerned we had similar ideas and were, to our different abilities, constructing similar devices. We simply fell into step.

Open Source is obviously fundamental to this process, and is such a success, and so obvious to all, that it never really receives the praise it deserves.

I am a firm believer in the whole being greater than the sum of its parts, and welcome any thoughtful involvement from anyone. . .

And I like being part of a bunch.

You fight entropy so much more effectively as a bunch, and that’s where the English concept of ‘you’ fails magnificently. It simply defines the recognition of a distant sentience!

I will now return to honing ineffectual device drivers!

2 Likes

Indeed, it seems to me that this managed to get its start because we already had stuff like ZynAddSubFX and Dexed and the many other choices already out there - it all needed organizing, mainly. People realizing that a fake sine wave that someone created for Free and one that someone got paid to create are both fake sine waves is clearly gonna take a wee bit longer, but I don’t think too much longer, really.

I think I found the first external post…

1 Like

Hammond clones have a “virtual multi contact keybed” https://hammondorganco.com/msolo