Hmm, sounds interesting.
Do you have access to an Apple iPhone or iPad? I’m asking so I can demonstrate something to you. If you do, go to Settings, then (depending on the version of iOS installed) to Accessibility, and turn on VoiceOver. It will tell you how to start using VoiceOver in a touch environment, and there are several guides to this.
VoiceOver doesn’t use vibrate alerts as part of its communication. The general system software does, if set to do so, but that’s not an accessibility feature; it’s a general mode function for alerts, sounds and ringing.
Yes, vibrate alerts are a good thing, but here’s where it’s a bad idea on an organ spec, which is where I’m coming from in a build. If a touch screen vibrates, it doesn’t confirm that you’ve engaged a PARTICULAR function, only that you’ve touched the screen in some position. Take Aeolus as an example. It can be set up to run two touch screens showing two different panels: either the first panel shows the entire stop list for a three-manual-plus-pedal console while the other shows a custom programming/voicing system, or the two displays share the stop list and functions between them. How would you differentiate which stop you’ve engaged or disengaged, what the stop is, whether you’ve added a coupler to a manual or manuals, or whether you’ve used the sequencer to move between divisional and general pistons? Vibration feedback would have to be programmed with so many levels of pulse behaviour to indicate all of these things that a user would get heavily confused.
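To make that concrete, here’s a rough Swift sketch. The event names and pulse values are entirely hypothetical (not from Aeolus or any real console firmware); the point is how many arbitrary pulse signatures a player would have to memorise by feel, none of which identifies WHICH stop or coupler was touched.

```swift
// Hypothetical vibration-only feedback scheme for a touch-screen organ
// console. Every distinguishable class of event needs its own pulse
// signature, learned by feel alone.

enum ConsoleEvent: CaseIterable {
    case stopEngaged, stopDisengaged
    case couplerEngaged, couplerDisengaged
    case divisionalPistonFired, generalPistonFired
    case sequencerForward, sequencerBackward
}

struct PulsePattern {
    let pulses: Int   // number of buzzes
    let pulseMs: Int  // length of each buzz in milliseconds
    let gapMs: Int    // silence between buzzes in milliseconds
}

// One arbitrary signature per event class. None of these says which of
// the dozens of stops or couplers on the glass you actually hit.
let hapticTable: [ConsoleEvent: PulsePattern] = [
    .stopEngaged:           PulsePattern(pulses: 1, pulseMs: 60,  gapMs: 0),
    .stopDisengaged:        PulsePattern(pulses: 2, pulseMs: 60,  gapMs: 80),
    .couplerEngaged:        PulsePattern(pulses: 1, pulseMs: 150, gapMs: 0),
    .couplerDisengaged:     PulsePattern(pulses: 2, pulseMs: 150, gapMs: 80),
    .divisionalPistonFired: PulsePattern(pulses: 3, pulseMs: 60,  gapMs: 60),
    .generalPistonFired:    PulsePattern(pulses: 3, pulseMs: 150, gapMs: 60),
    .sequencerForward:      PulsePattern(pulses: 4, pulseMs: 60,  gapMs: 40),
    .sequencerBackward:     PulsePattern(pulses: 4, pulseMs: 150, gapMs: 40),
]

print("Distinct pulse signatures to memorise: \(hapticTable.count)")
// ...and that still doesn't tell you which stop you just toggled.
```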
OK, one way of confirming feedback is a simple beep from the display. Various instruments from the likes of Korg and Roland use systems like TouchView displays: when the display is touched, the system board gives a simple beep. That’s confirmation you have successfully touched the display, but it doesn’t confirm you’ve engaged or disengaged a function or changed a parameter.
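Here’s a minimal Swift sketch of that distinction (the handler and the locked-stop detail are illustrative assumptions, not how any TouchView instrument actually works). The beep fires on every recognised touch and sounds identical whether or not the underlying state actually changed.

```swift
import AudioToolbox

// The beep only confirms "you hit the glass"; it carries no state.
func handleTouch(stopIsLocked: Bool, stopIsOn: inout Bool) {
    // 1104 is the standard iOS keyboard-click system sound.
    AudioServicesPlaySystemSound(1104)

    // The state change may silently fail (e.g. a locked stop), and the
    // beep is the same either way.
    if !stopIsLocked {
        stopIsOn.toggle()
    }
}

var principal8On = false
handleTouch(stopIsLocked: true, stopIsOn: &principal8On)
// It beeped, but principal8On is still false -- the player can't tell.
```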
If you were blindfolded, as an example (no, not offering, lol), and you had to engage with a system to find out what’s there, would you rather an interface that vibrates like heck all the time, or a system speaking out the elements you need to know about? I think I’d know your answer to that one, don’t you? That’s the whole point of speech-assisted technology, TTS (text to speech), as opposed to STT (speech to text). I’d encourage you to work blindfolded with a screen reader for a day, whether VoiceOver on the Mac, NVDA, JAWS or Window-Eyes on Windows, or Orca on Linux, so you understand the accessibility issues a blind or severely visually impaired person goes through. I think you’d be stunned by it, and I genuinely wish I’d advised this little piece some time back rather than getting somewhat steamed up, lol. It would help you understand the mechanics and the adaptive needs. Tactile control differs so much, but the problem isn’t the technology; it’s how it’s applied, how it translates, and so on.
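For contrast with the vibration idea, here’s roughly how a touch control hands VoiceOver the information a buzz can’t carry. This is a minimal UIKit sketch (the StopButton class and the stop name are illustrative, not from Aeolus): VoiceOver speaks the label plus the current value, so the user hears exactly which stop is under their finger and whether it’s engaged.

```swift
import UIKit

// A touch control that tells VoiceOver its name and its state.
final class StopButton: UIButton {
    let stopName = "Principal 8'"  // hypothetical stop name

    var engaged = false {
        didSet {
            // VoiceOver reads label + value, e.g. "Principal 8 foot, on".
            accessibilityValue = engaged ? "on" : "off"
        }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        isAccessibilityElement = true
        accessibilityLabel = stopName
        accessibilityValue = "off"
        accessibilityTraits = .button
    }

    required init?(coder: NSCoder) { fatalError("not used in this sketch") }
}
```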
Take a braille display as an example, like my HumanWare Brailliant BI 40. It provides a non-spoken, pure braille solution for proofing: working through a music score, viewing a programming language, reading confidential data, and so on. It’s an output device, but it also has some input features, such as a braille-mode keyboard and some basic navigation tools. Yes, it provides good access to the system, but the speed at which you work depends on two factors: what the braille display translates and outputs in the middle of a document read (navigating menus, reading system messages, moving between apps to grab data from another file, and so on), and the user’s own patience.

Say you’re in the midst of a developer job. You’ve turned off the speech engine, leaving braille only, and you’re editing, assessing and fixing particular lines of code in Xcode, when all of a sudden the display alerts you to an email or an iMessage from a client or team member. Your focus on the display has changed and you’ve lost your place in the braille row, so you attend to the matter and come back to the developer suite. But the cursor position has been preserved, and because the two lower dots (7 and 8) are used as the cursor mark, you can quickly track back to the last word or script line. Speech output lets you hand off things like system announcements, which the braille display can then ignore, removing that distraction from the process. Interaction with this setup means, for me, the laptop’s keyboard and trackpad plus the braille display, so I have different input and output requirements. The same could be said when connecting my iPhone to the braille display; the difference is the touch screen, and the touch screen doesn’t vibrate with speech off while the braille display is in use.
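If you’re curious, that dot 7/8 cursor convention is easy to sketch. In the Unicode braille block (U+2800 onwards), dots 1 to 8 map to bits 0 to 7 of the code point, so raising dots 7 and 8 under an existing cell is just OR-ing in 0xC0. A minimal sketch, assuming `cells` already holds translated braille:

```swift
// Raise dots 7 and 8 (the two bottom dots of an 8-dot cell) under the
// cell that holds the cursor, so the reader can find their place by touch.
func markCursor(in cells: [Character], at index: Int) -> [Character] {
    var out = cells
    guard index >= 0, index < out.count,
          let scalar = out[index].unicodeScalars.first,
          let marked = Unicode.Scalar(scalar.value | 0xC0)  // dot 7 = 0x40, dot 8 = 0x80
    else { return out }
    out[index] = Character(marked)
    return out
}

let line: [Character] = Array("⠓⠑⠇⠇⠕")  // "hello" in uncontracted braille
let marked = markCursor(in: line, at: 2)
print(String(marked))  // third cell now shows with dots 7 and 8 raised
```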
It’s a good concept you’ve found, but I really would suggest getting to know a screen reader suite fully, or for the best part, before thinking that a vibratory display is going to help in this situation. You’ll understand why if you spend time with a screen reader for a day or, better, a week. You’ll learn so much about it, from its settings to what functions it offers depending on the OS and screen reader, to its weaknesses, and believe me, each has weaknesses, some more than others.
lew