Accessibility for sight-impaired users

12V DC at 2 amps. You can power it with USB only, but you lose the flying faders (understandably). The USB doesn't play nicely with Zynths, so loop it in via 5-pin DINs.

I wouldn’t mind researching that controller keyboard.

lew :slight_smile:

FYI, they sell sets of 5 Behringer MF60T motorized faders:

Sweetwater: $99 . . . . . . eBay: $45

You might also like to check out the very cool Behringer X-Touch:

This has 9 touch sensitive motorized faders. It is intended for one fader to be a master control, and the other 8 faders to control individual tracks. So in addition to the motorized fader, each of the 8 tracks has a rotary encoder with LED feedback, four illuminated buttons (labelled REC, SOLO, MUTE, and SELECT), an LED level meter, AND a programmable LCD display (which they call an “LCD scribble strip”). They use the touch sense on the fader to switch active tracks in DAW environments, but of course it could be used for anything else. All up there are a total of 92 illuminated buttons.

This would make an awesome control surface for Zynthian. The trick would be to generalise the Zynthian UI so that it can support a variable number of parameters per screen. For the X-Touch this “screen size” would be 8 (or 9 if you don’t care about the missing LCD on the master fader). As you switch parameter screens it would update the control surface, setting the sliders to match the parameters, and setting the names of the parameters on the LCD track displays. For vision impaired users touching the sliders could announce the parameter name. There are additional buttons for directional navigation and a jog wheel, and these could be used to scroll through pages of parameters in the Zynthian UI. And there are still heaps of spare buttons which could be used to switch presets, or for “thumb pistons” as Lew described.
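To make that concrete, here's a very rough Python sketch (using the mido library) of what pushing one Zynthian "parameter screen" out to an X-Touch in Mackie Control mode might look like. The port name, the parameter list and the helper names are made up, and the pitch-bend-per-channel fader mapping plus the scribble-strip SysEx header are the commonly documented MCU conventions, so treat the exact bytes as assumptions to verify against Behringer's documentation rather than a finished implementation:

```python
# Sketch: push one "parameter screen" to an X-Touch in Mackie Control (MCU) mode.
# Port name, parameter list and scaling are hypothetical; the SysEx header and
# pitch-bend fader mapping follow the commonly documented MCU conventions.
import mido

MCU_SYSEX_HEADER = [0x00, 0x00, 0x66, 0x14]     # assumed MCU device header

def fader_message(strip, value_01):
    """Move one motorised fader: MCU uses pitch bend, one channel per strip."""
    bend = int(value_01 * 16380) - 8192          # map 0..1 onto the 14-bit range
    return mido.Message('pitchwheel', channel=strip, pitch=bend)

def scribble_message(strip, text):
    """Write up to 7 characters to one scribble strip (top row)."""
    offset = strip * 7                           # 7 characters per strip
    payload = [0x12, offset] + [ord(c) for c in text[:7].ljust(7)]
    return mido.Message('sysex', data=MCU_SYSEX_HEADER + payload)

def show_screen(port, params):
    """params: list of (name, normalised_value) pairs, one per strip."""
    for strip, (name, value) in enumerate(params[:8]):
        port.send(fader_message(strip, value))
        port.send(scribble_message(strip, name))

with mido.open_output('X-Touch') as port:        # hypothetical port name
    show_screen(port, [('Cutoff', 0.4), ('Reso', 0.7), ('Attack', 0.1),
                       ('Decay', 0.5), ('Sustain', 0.8), ('Release', 0.3),
                       ('Drive', 0.2), ('Mix', 1.0)])
```

Switching to another parameter page would just mean calling `show_screen` again with the new names and values, and the touch-sense messages coming back from the faders could trigger the spoken announcements for vision-impaired users.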

I have an Arturia keyboard for which the same generalised UI approach would work. On the KeyLab there are 9 rotary encoders and 9 linear faders, which would give a "Zynthian screen size" of 18. They are not motorised, so the faders would initially be out of sync with the UI, but that's not necessarily a big problem. There is also an LCD display, some navigation buttons and a jog wheel. Normally, the LCD displays the name of whichever parameter you adjust, or the name of the patch when navigating through patch banks. Arturia are in the business of selling software synthesizers, so these keyboards are quite well integrated into their UI in products like Analog Lab. When you switch patches in the software, it displays a picture of the control surface with each knob labelled with the parameter name. You can select which parameter is controlled by a knob or fader, but they never directly expose more parameters than exist on the control surface.
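For the out-of-sync faders, the usual trick is "soft takeover" (pickup mode): ignore the hardware value until the physical fader reaches the value the UI is currently holding, so nothing jumps when you switch screens. A minimal sketch of the idea, with hypothetical names and a 0-127 value range:

```python
# Soft-takeover sketch for non-motorised faders: ignore hardware values until
# the fader passes close to the stored UI value, so parameters never jump.
class SoftTakeover:
    def __init__(self, threshold=2):
        self.threshold = threshold     # how close counts as "picked up"
        self.picked_up = False
        self.value = 0                 # value currently shown in the UI

    def reset(self, ui_value):
        """Call when a new parameter screen is loaded."""
        self.value = ui_value
        self.picked_up = False

    def update(self, hw_value):
        """Feed raw 0-127 fader values; returns the value to apply, or None."""
        if not self.picked_up and abs(hw_value - self.value) > self.threshold:
            return None                # fader hasn't reached the UI value yet
        self.picked_up = True
        self.value = hw_value
        return hw_value
```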

This looks like a good choice. They are sold as replacement parts for Behringer's MOTOR keyboards and X-TOUCH consoles, so are quite readily available. They claim a lifespan of 300,000 cycles.

Motorised faders work out at about £20 each, which would amount to £1000 for 50, without the extra hardware and development cost. That is pretty steep!

An alternative might be to use push / pull solenoids like these from eBay at £2.50 each, plus micro-switches and drawbars. 50 of these would cost £125 plus associated electronics and development. The drawbar could be designed with sufficient friction to hold in place but free enough to allow the solenoid to move it. A micro-switch would detect position, and a microcontroller would provide a USB or network interface to the host.

[Edit] I know they may not be called drawbars on a church organ but it was the word that came to mind as I typed this!
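For what it's worth, here's a minimal sketch of the microcontroller side of that idea, in CircuitPython flavour: one solenoid channel pulsed through a driver transistor, one micro-switch reporting the drawbar position, and the state sent to the host over USB MIDI. The pin numbers, note number and pulse length are all made up, and the real thing would of course need its own solenoid supply, driver electronics and flyback protection:

```python
# Sketch of one solenoid-pushed drawbar channel (CircuitPython flavour).
# Pins, note number and pulse timing are hypothetical; the solenoid needs
# its own supply, a driver transistor and a flyback diode.
import time
import board, digitalio, usb_midi

solenoid = digitalio.DigitalInOut(board.GP0)     # gate of the driver transistor
solenoid.direction = digitalio.Direction.OUTPUT

switch = digitalio.DigitalInOut(board.GP1)       # micro-switch senses drawbar position
switch.direction = digitalio.Direction.INPUT
switch.pull = digitalio.Pull.UP

midi_in, midi_out = usb_midi.ports[0], usb_midi.ports[1]
NOTE = 36                                        # hypothetical note for this drawbar
last_state = None

def pulse_solenoid(duration=0.05):
    # Short push; drawbar friction holds it in place afterwards.
    solenoid.value = True
    time.sleep(duration)
    solenoid.value = False

while True:
    # Host asks us to move the drawbar (very naive 3-byte MIDI parsing).
    data = midi_in.read(3)
    if data and len(data) == 3 and (data[0] & 0xF0) == 0x90 and data[1] == NOTE:
        pulse_solenoid()

    # Report the position the micro-switch actually sees back to the host.
    engaged = not switch.value                   # switch closes when drawbar is out
    if engaged != last_state:
        midi_out.write(bytes([0x90 if engaged else 0x80, NOTE, 127 if engaged else 0]))
        last_state = engaged
    time.sleep(0.01)
```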

NICE ONE!!! That’s perfect.

Are these wired, or solder-terminal to PCB?

Price-wise, that's actually not bad.

I had wondered if a 40mm motorised fader existed or not lol.

You’re a genius!

lew :slight_smile:

Max,

Sorry for how I've been with you. To be honest, you come up with unusual, unique solutions and I'm really pleased; I know you put so much effort in.

I came searching for a particular solution. Being a developer and technician, I have a process and a rather mad-at-times way of doing things; other times I'm as precise as you could get about something. It depends on whether I'm fully on the ball physically or not.

I have a challenge for you. I checked out the details of the faders in question; Mouser has something similar through another brand, which works out cheaper and has a 100,000-cycle life. Yes, these would be good, but as nice as they would be, there is a lot to do in prep to install several banks of them. My console design is, after all, a prototype before I decide whether to go into producing unique accessible consoles; this is purely for my own work. There'd have to be quite a significant power rail, looms to send to control boards, etc. Yep, doable to a fashion, but panel work then takes up so much time and also space. I'd be looking at about 96 in all.

So here's my challenge for you. Could you help me find some kind of rocker or on/off switch which has either a solenoid or some kind of motor function, so it can be turned on/off either directly or from a controller's instruction, like a resettable on/off switch or something like that? If I could use something like that, great. OK, I'm no electronics engineer, so some of this does go over my head, but the rest makes sense lol.

If you think you know of anything or could find something viable, that’d be amazing.

This control system design for the organ has been driving me mad.

I am in talks over touch-screen-based accessibility, and it is a viable concept and worth considering, but it would be nice to have a headless spec and something a bit more "furniture-like".

lew :slight_smile:

The future of screen interaction has some real possibilities.

For a system with a fixed interface, there are those gamer buttons that stick onto the screen, giving tangible control, and overlaying a divider "fence" could delineate the areas of a grid layout.

I had heard of vibrating the glass of a display horizontally to provide tactile feedback; perhaps basement-engineered gross modulation of an entire screen assembly could provide some touch feedback.

The French company Hap2U has developed some fascinating piezoelectric-film haptic technology that is generations ahead. In one demonstration the presenter describes being able to 'feel' the Android user interface and moving targets in a Fruit Ninja phone app.

Their second-generation technology uses a 2 µm piezo thin film, providing 1 micrometre of motion which is sensed as a change in friction (not a numbing vibration like a cell phone produces).

They describe the concentration of touch sensing in the hand (making braille possible).
One side note: "The feel of touch is about five times faster than vision."

They close the description of adding haptic touch to the surface of various materials with this statement:
“In the future, haptics will be integrated on the plastic armrests of your car, fitted into the design foot of a wooden lamp, bring accessibility to the blind and visually impaired. … the applications are endless.”

Here's an exploded view of their Hap2U demo kit, the Xplore Touch with a 7" LCD (some related links on their web site are now dead; it may no longer be available). The image expands out as one scrolls their web page.

They have been at it for a while; some YouTube demonstrations go back to 2017.

Related links:

Their slow loading press release PDF

A comprehensive 16-page examination of the technology by engineers, published in IEEE Transactions on Haptics, 2020:
A Review of Surface Haptics: Enabling Tactile Effects on Touch Surfaces
One reference to the effect of moving the surface under your finger:
"The vibration reduces the friction coefficient, a phenomenon called active lubrication"

Tradeshow style Demo video from 2020

Haptics is coming #5 video - What is haptics by Hap2U? 2019

Perhaps you should look into 'partnering' with a researcher on a project to address your unique needs; this might provide the leverage to get some equipment donated by Hap2U.

Hmm, sounds interesting.

Do you have access to an Apple iPhone or iPad? I'm asking this so I can demonstrate something to you. If you do, go to "Settings", then, depending on the version of iOS installed, go to Accessibility and turn on VoiceOver. It will tell you how to start using VoiceOver in a touch environment, but there are a slew of guides to this.

VoiceOver doesn't use vibrate alerts as part of its communication, but the general system software does if set to do so; that's not an accessibility feature but a general mode function for alerts, sounds, etc., as well as ringing.

Yes, vibrate alerts are a good thing, but here's where it's a bad idea on an organ spec, as an example, which is where I'm coming from in a build. If a touch screen has vibration, it doesn't confirm that you've actually engaged with a PARTICULAR function, only that you have touched the screen in a position. Say you are using Aeolus as an example; it's set up in a manner where you could run 2 touch screens, and these touch screens show 2 different panels: either the first panel shows an entire stop list for a 3-manual + pedal console and the other display shows a custom programming / voicing system, or the two displays share the stop list and functions. How would you be able to differentiate what you've engaged / disengaged as a stop, what the stop is, whether you've added a coupler to the manual or manuals, or whether you've used the sequencer to move between divisional and general pistons? Vibration response would have to be programmed to give so many levels of pulse behaviour to indicate various things that a user would get heavily confused.

OK, one way of confirming feedback is a "simple beep" from the display. Various instruments from the likes of Korg, Roland, etc. use systems like TouchView displays: when the display is touched, the system board gives a simple beep. That's confirmation you have successfully touched the display, but it doesn't confirm you've engaged / disengaged a function or changed a parameter.

If you were blindfolded, as an example (no, not offering lol), and you were to engage with a system to find out what's there, would you rather an interface that vibrates like heck all the time, or a system speaking out the elements you need to know about? I think I'd know your answer to that one… don't you? That's the whole point about speech-assisted technology (TTS), not STT (speech to text). I'd encourage you to work with a screen reader such as VoiceOver on the Mac, NVDA / JAWS / Window-Eyes on Windows, or whatever there is for Linux, for a day, blindfolded, so you understand the accessibility issues a blind or severely visually impaired person goes through. I think you'd be stunned by this, and I wish, I genuinely wish, I'd advised this little piece some time back rather than getting somewhat steamed up lol. This would help you understand the mechanics and adaptive needs. Tactile control differs so much, but the problem isn't the technology, it's how it's applied, how it translates, etc.
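To put that concretely, the kind of feedback that actually helps is speaking the element you've engaged with, not just buzzing. A rough Python sketch of the idea, where the stop names, the toggle hook and the espeak-ng command-line TTS are just assumptions for illustration:

```python
# Sketch: speak the name and new state of a stop when it is toggled, instead
# of a vibration that only confirms "you touched the screen somewhere".
# Uses the espeak-ng command-line TTS (assumed installed); stop names and the
# toggle hook are hypothetical.
import subprocess

STOP_NAMES = {0: 'Principal 8', 1: 'Octave 4', 2: 'Trumpet 8'}   # example stops
stop_state = {i: False for i in STOP_NAMES}

def speak(text):
    """Fire-and-forget speech so the UI never blocks on the announcement."""
    subprocess.Popen(['espeak-ng', text])

def toggle_stop(index):
    stop_state[index] = not stop_state[index]
    state = 'on' if stop_state[index] else 'off'
    speak(f'{STOP_NAMES[index]} {state}')        # e.g. "Principal 8 on"

toggle_stop(0)   # would announce "Principal 8 on"
```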

A braille display, as an example, like my HumanWare Brailliant BI40, provides a non-spoken, pure braille solution for proofing, such as working through a music score, viewing programming language, reading confidential data, etc. It is an output device but also has some input features, such as a braille-mode keyboard and some basic navigation tools. Yes, it provides good access to the system, but the speed at which you do this is based on two factors: what the braille display translates and outputs in the middle of a document read, such as navigating menus, reading system messages or moving between apps to grab data from another file, etc., and then the user's own patience.

Say you're in the midst of a developer job. If you've turned off the speech engine but left braille running, and you're editing / assessing / fixing particular lines of code in Xcode, then all of a sudden the display alerts you to an email or an iMessage received from either a client or team member, your focus with the display has changed and you've initially lost your place in the braille row. So you attend to the matter and come back to the developer suite, but the cursor position has been saved, so with the last two lower dots (7 and 8) used as the cursor mark you can quickly track the last word or script line. Speech output allows you to focus on a few other things, such as system announcements, which can then be ignored by the braille display to remove process distraction. Interaction with this setup means, for me, the laptop's keyboard and trackpad, then use of the braille display, so I have different input and output requirements. The same could be said when connecting my iPhone to the braille display; the difference is having the touch screen, and the touch screen doesn't vibrate with speech off if the braille display is in use.

It's a good concept you've found, but I really would suggest getting to know a screen reader suite fully, or for the best part, before thinking that a vibratory display is going to help in this situation. You'll understand why if you spend time with a screen reader for a day or, better, a week. You'll learn so much about it, from its settings to what functions it offers depending on the OS and screen reader, to its weaknesses, and believe me, each has weaknesses, some more than others.

lew :slight_smile:

Fascinating stuff. I imagine an attempt to emulate physical buttons with varying friction across the surface, maybe very low friction when moving off a button and very high as you move onto one. Difficult to imagine how effective that would be. Looking forward to experiencing it for real.

I may well get in touch with the dev over this, run some ideas across, and see if a possible unit could be sent to test out; if so, I can let you guys know.

lew

Here is an interesting video in which vision-impaired musician Jason Dasent discusses the accessibility features in Arturia’s Analog Lab software. This includes the use of text-to-speech to navigate patches, and to provide feedback of parameters adjusted through a control surface. He is using a KeyLab keyboard which has specialised controls for navigation, and for layering and splitting parts.

Cheers for that, I already know, as I'm one of the accessibility consultants for Arturia :slight_smile:

Good video to listen to though lol :slight_smile:

lew

Having listened to that video again, in full, I feel embarrassed about its content. I have to explain how this works. What's been done is nearly exactly the same as the approach of Native Instruments Komplete Kontrol: Arturia Analog Lab has had a "system speech UI" element added which responds to the hardware functions of a KeyLab-series controller for physical navigation. What it doesn't do is provide a user with the direct navigation of a screen reader. The video in question misrepresents the true principle of software accessibility, where instead of a bolt-on element, the entire UI needs to be created to allow a screen reader like VoiceOver or JAWS for Windows to navigate around the plugin or app using the keyboard and mouse / trackpad.

I genuinely cannot believe the comments on YouTube praising this, thinking it solves everything. Folk have a huge disappointment ahead.

So it's understood in the best way: Arturia products have NKS functionality embedded, so they work with Native Instruments Komplete Kontrol by default, and one of the Native Instruments Komplete Kontrol series or Maschine series can navigate predetermined / compiled functions and have the same data spoken, using the same API. What's being done here is a move by Arturia to encourage consumers to buy the hardware controllers, which is under scrutiny due to the physical quality-control issues in the build of these controllers. Having had first-hand experience, I'd rather have a Native Instruments Komplete Kontrol S61 MKII or S88 MKII, which I've owned before.

It just feels so wrong to hear this. I even thought I'd check the latest updates to see if anything had changed: I loaded a number of my instruments as standalones running VoiceOver and nothing, not a spoken element to navigate using the keyboard or trackpad, just the same UI in its native design. No actual accessibility function add-ons, just the system speech UI ported for hardware only.

I’ve used resources like VOCR to set this software up for standalone usage because part of the UI is the setup window, a semi-transparent floating window with no navigation through VoiceOver, so I’ve had to get it to the standard where it’s scanned with VOCR and labelled to interact with.

Just thought I'd share this because folk need to be aware of it; this is not much more than an Arturia marketing stunt, similar to what was pulled by Native Instruments, where I was hired as part of the access dev team for a couple of years and realised what was going wrong.

lew

It is really good to see a manufacturer working towards accessible music making. It would be a momentous task for the relatively small Arturia to create a device that provides full accessibility for any application / module. What they have done to integrate the KeyLab with Analog Lab is very good. Of course there is vendor lock-in, but that is kind of expected. It is rare for this not to happen in industry, and with little or no standard for accessibility interoperability there is little incentive for manufacturers to work on that. It would be great if Apple, Microsoft, Google et al. designed and implemented such an API on their operating systems, or better still a standard that is implemented on all operating systems, but I don't think we are there yet.

I took some inspiration from the video and was pleased that at least one person has benefited from this. I understand this in the same way I understand those millions of people hooked into the Apple ecosystem in that they have integrated products that just work together.

Thanks @stewart for sharing. I hope Zynthian accessibility exceeds this :grinning_face_with_smiling_eyes:.

Hi Brian, there are already APIs available for accessibility. Apple, as a good example, have accessibility APIs within Xcode (Swift, the Aqua UI and other code resources, repositories, etc.), and as part of registering with Apple to use their developer software and developer service account, you agree to accessibility development testing and integration as part of the project. When developing through Xcode, you're advised, when compiling a project build, to test its accessibility structure for Universal Access, including VoiceOver. Yes, you can bypass it, but doing so just shows you're not interested in equal access to applications being sold for the Mac platform.

Native Instruments and Arturia are two such developers producing combined hardware / software systems with a degree of accessibility. The problem is this, as I've seen first hand: you start an accessibility interface structure within the system / packages, then when you examine the package builds you realise that the UI design and coding differ, and the challenge is adding the layers needed for interaction with a screen reader such as VoiceOver or JAWS for Windows. Some devs are cross-platforming the software, so yes, the installers differ slightly, but the same programming language and interface builder are used so that it's unified in the developer's construction of the software, but not unified in the way that an OS such as macOS can take advantage of the design, coding and UI builder tools. So Xcode couldn't engage with that environment, due to another system doing the creating, compiling and testing.

Java is an example of the worst possible accessibility in development: no screen reader on this earth can properly engage with the language, nor the UI. Hauptwerk is a wonderful example of combining C++ with Java and then making the system completely inaccessible to a screen reader UI / encoding environment.

Arturia and I discussed accessibility structures, coding and support back in 2016, with further consultations in 2017. They stated that accessibility was a huge problem for them, as they were a small company without much experience in the field of accessibility implementation. It was very difficult to get a moving process on the go, but we started somewhere. The nearest we got was merging resources with Native Instruments Komplete Kontrol, which in itself provides a good, stable interface, but it's dependent on hardware. So what's now happened is that the same method is utilised, using the Arturia Analog Lab software in conjunction with their own controller setup. There's no real critical difference here. The huge issue is the fact that neither of the apps as standalone environments can be navigated using a screen reader: you can't go into a window element as there's nothing there at all, so you can't find out what is assigned to controls, nor can you program from the screen reader itself. The way this works, just like the Native Instruments Komplete Kontrol software, is that the System Speech UI and engine take call-outs from the hardware to announce MIDI CC / SysEx instructions, as well as referring to inbuilt scripts for the Lab software, to speak out. That's what it achieves, nothing more.

Hope you’re alright fella, haven’t spoken in a while.

lew

A Sight-Impaired Phenom Worth Hearing… Rachel Flowers

I only just heard of her playing in the Keith Emerson Tribute Concert in 2017
(He died March 11, 2016, at 72)

She is playing one of his pieces, using his Modular Moog in this video (7:45)


There is a documentary on her, "Hearing Is Believing"; here's the short trailer.

You can watch the entire thing on Amazon Prime or watch for free on IMDb TV after a painless registration. (7.9 rating)
She briefly mentions using the JAWS screen reader and Sonar to make keyboard settings. Her main adaptive aid seems to be her mom. (See updated info below.)
She mentions how some food flavors evoked musical tones. The most surprising part was her sitting in with the Zappa Plays Zappa show in Vegas, playing keyboard (including a xylophone part), singing and playing guitar.

There is a live-stream-style 2-hour video showing her home production technique and adaptive technology, used in making a cover of Frank Zappa's "Zomby Woof" in her bedroom (she's able to chat with mom while laying down tracks).
The 4:15-long finished product suggests there are 8+ tracks.
(Frank's original rendition) 5:11

The Keith Emerson Official Tribute, Aug 13, 2017 – Part 1 (14:09) . . . . Part 2 (8:24)


Great video of Rachel playing Hoedown. The modular Moog doesn’t look very accessible!


Helps to have some helping hands around…

She's missing out on the drawbar indicator lights on this pipe organ.
(It's a case where nice hot-to-the-touch light bulbs would have it over LEDs.)
