Accessibility for sight impaired users

Hi, I hope someone can help, and I’d really appreciate it if someone could get involved with this. The latest build of Aeolus under Organnery has just been released to run on the Raspberry Pi 3B. This is a good thing; it is also being run on mini PCs tailored as custom installations.

I am a blind church organist here in the UK. Aeolus is absolutely amazing from a sound and programming standpoint. I decided to search to see whether someone had produced a port of Aeolus for macOS and, surprise surprise, a version surfaced, though not the latest build.

I would like to be able to run Aeolus on a Mac so that my screen reader (VoiceOver, which is built into macOS) can speak information such as stops, allow me to navigate the interface for programming, etc. I understand that it needs JACK as part of its structure to run.

Could someone either advise me on how to go about getting the latest build of Aeolus onto the Mac, or look into a compiled installer which installs both Aeolus and JACK (or other dependencies) as a single application installer? If this could be achieved, I’d like to support it as an accessibility specialist. As I say, I’m fully blind myself, so I know the issues surrounding life as an organist, and the issues with developers like Milan Digital Audio who refuse to support blind and sight impaired organists. Hauptwerk is completely unusable for the blind. Aeolus under Linux can be made to speak with some adjustments.

Could someone put me in touch with the developer behind Aeolus, if at all possible?

Blessings from the UK.



Hi Lew

Welcome to this community. I am not convinced this is the most appropriate forum for your request but let’s see what we can do to help.

Aeolus is open source software that was last updated to version 0.9.0 in December 2014. Organnery is a company that uses Aeolus as its organ simulator and distributes it for use on a Raspberry Pi device. Zynthian is also a Raspberry Pi based device that includes many engines, Aeolus among them. Neither of these platforms currently provides much accessible technology for sight impaired users, and neither provides any support for macOS.

In theory it should be possible to compile Aeolus to run on a Mac, but I have not been able to find anyone who provides a precompiled Mac version. It is not clear who the original author is, but there is a GitHub repository at fugalh/aeolus (“Aeolus is a high quality pipe organ emulator using additive synthesis”).

It would be fantastic to be able to add assistive technology to the Zynthian, but that is currently not on the roadmap, and of course it can be a challenge if the software was not designed to support it. I wonder if a standard Linux desktop with assistive technology might be the simplest approach, although of course you are a Mac user, so may prefer to use the system you are most familiar with.

Maybe someone here is a Mac user with the skills and time to help. Unfortunately I do not have a Mac, so I am not able to assist beyond these observations. Good luck, friend.

I would be very interested to hear about the interaction between Zynthian and accessible tools, if you had a little time to help us.
I believe this is a massively fruitful area of research, allowing very careful description of components in user interfaces.

For instance, I would like you to describe how you access this forum and the tools you use. At the end of the day we are subjecting a program loop to a succession of events derived from various sources. At the moment the Zynthian is pretty agnostic in its response, being able to respond to MIDI, keyboard and remote stimulation, but it lacks any indication outside the GUI screen for feedback.

How might you best interpret a graphical interface for your use? Would you use tones, and if so, how might you allocate concepts like parameter up or down and select up and down?
Would you find encoders suitable as an interface in themselves?

I hope none of this seems too personal, but it’s interesting stuff, and I’ve thought a fair bit about it.

The Zynthian does have a lot of these elements nailed down, and being able to interpret the Aeolus parameters in the Zynthian GUI and navigate intuitively round that interface would be interesting, but I’d steer totally away from the sequencer, mostly cos the keyboard in it points the wrong way :slight_smile: (Don’t ask)


The latest source distribution I found is Aeolus 0.10.0, dated 2018-07-19 ( The Link )
“Permit to do the control of the stops via notes of a MIDI (control) channel”
A standard 8x8, 64-button MIDI control matrix box would cover the stops panel (perhaps a shared sub-audible tactile feedback pad could indicate the state of the button just pressed).
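To make the idea concrete, here is a minimal Python sketch of how incoming note-on messages from such a pad grid could be turned into spoken stop announcements. The note numbering, grid layout and stop table below are illustrative assumptions, not Aeolus’s actual MIDI stop-control protocol.

```python
# Sketch: mapping an 8x8 MIDI pad grid to organ stop announcements.
# NOTE_BASE and the STOPS table are hypothetical examples.

NOTE_BASE = 36  # assumption: grid sends row-major note-ons starting at 36

# Hypothetical partial stop table: pad index -> (division, stop name)
STOPS = {
    0: ("Man I", "Bourdon 16"),
    1: ("Man I", "Montre 8"),
    2: ("Man I", "Gedekt 8"),
    8: ("Man II", "Bourdon 8"),
    9: ("Man II", "Dulciana 8"),
}

def pad_for_note(note):
    """Convert an incoming MIDI note number to (row, col) on the 8x8 grid."""
    index = note - NOTE_BASE
    if not 0 <= index < 64:
        return None  # note is outside the pad grid
    return divmod(index, 8)  # (row, col), row-major

def announce(note, velocity):
    """Build the text a speech engine would speak for a pad press."""
    pad = pad_for_note(note)
    if pad is None:
        return None
    row, col = pad
    division, name = STOPS.get(row * 8 + col, (None, "unassigned"))
    state = "on" if velocity > 0 else "off"
    if division is None:
        return f"Pad {row + 1},{col + 1} {state}, unassigned"
    return f"{division}, {name}, {state}"
```

Feeding the returned string to a text-to-speech engine on a separate audio output would give the “tactile feedback pad” idea a spoken equivalent.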
OS X install instructions for this distribution ( The Link )

The Zynthian LCD user interface can be remote controlled using computer keyboard keys ( Link ) or with MIDI keys. It may be practical to navigate a set of presets with a sequence of keystrokes.

The remote access utility VNC works with Raspberry Pi, you can have a copy of an organ stops panel on your Mac screen, if there were a visual matrix reader available…

A visually impaired user asked about screen readers for VNC output; this was the answer, in 2005:

VNC is designed to be as “thin” as possible, and so the server simply sends
the viewer changes to the graphics on-screen, without providing any semantic
content such as text, captions, etc. Because screen-reader applications
need to be able to access the text that they are to read, they won’t work
remotely via VNC (nor via any other remote access tool that I’m aware of).

It’s worth noting that VNC screen transmissions describe the rectangular area being redrawn. It’s possible that a transmission monitoring utility could identify which button in a simple stops matrix image has just changed.
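The rectangle-to-button idea could be sketched as below: given a known on-screen layout of the stops panel, the centre of a redrawn rectangle maps back to a grid cell. All coordinates and sizes here are hypothetical placeholders for whatever the real panel layout turns out to be.

```python
# Sketch: locating which stop button a VNC update rectangle touched.
# Assumes the stops panel is drawn as a fixed, known grid on screen;
# all geometry constants below are hypothetical.

GRID_X, GRID_Y = 100, 50   # hypothetical top-left of the stops panel
CELL_W, CELL_H = 60, 40    # hypothetical size of one stop button
COLS, ROWS = 8, 8

def button_under_rect(x, y, w, h):
    """Map the centre of a redrawn rectangle to a (row, col) button, or None."""
    cx, cy = x + w // 2, y + h // 2
    col = (cx - GRID_X) // CELL_W
    row = (cy - GRID_Y) // CELL_H
    if 0 <= col < COLS and 0 <= row < ROWS:
        return (row, col)
    return None  # the redraw was outside the stops panel
```

A monitoring utility sitting on the VNC stream could call this for each framebuffer update and announce the corresponding stop, sidestepping the lack of semantic content in the protocol.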


Organnery ( Link ) is a standalone Raspberry Pi installation package for Aeolus.
They produced an audition sample of all stops:
Organnery 0.7.4 individual stops walk through 28 minutes long (Youtube Video Link)

Complete stop list (in order played):

Man I

Bourdon 16
Montre 8
Gedekt 8
Gamba 8
Flute harmonique 8
Prestant 4
Flute 4
Quinte 2 2/3
Doublette 2
Fourniture IV
Cornet II-V
Bombarde 16
Trompette 8
Clairon 4

Man II

Bourdon 8
Dulciana 8
Rohrflute 4
Principal 4
Nazard 2 2/3
Flute 2
Tierce 1 3/5
Larigot 1 1/3
Cymbale III
Cromorne 8
Trompette 8

Man III

Montre 8
Flute ouverte 8
Viole 8
Voix celeste 8
Flute 4
Principal 4
Plein jeux III
Basson 16
Trompette 8
Clairon 4
Hautbois 8

Pedal

Bourdon 32
Soubasse 16
Bourdon 16
Principal 16
Violoncelle 8
Flute 8
Flute 4
Principal 4
Mixture III
Bombarde 32
Posaune 16
Trompette 8
Clairon 4


Hi and thank you for your amazing response.

OK, let’s cover the ground work lol :slight_smile:

I rely on the Mac, and also iOS; both use VoiceOver, a screen reader that gives a blind user both keyboard and mouse / trackpad based navigation of macOS and of any application that has the accessibility prerequisites added before release. Navigation with a screen reader like VoiceOver on the Mac relies on a series of shortcuts, though the mouse or trackpad can also provide feedback. I use both keyboard and trackpad for a number of VoiceOver functions.

Navigation of an application depends on how the application is structured. Say you have an application in a standard structure: there is the system menu bar, which macOS relies upon regardless (unlike Linux, where each menu bar is part of the actual application and not a system-tied menu resource). An app in macOS would have a window or windows, with layers where, for example, you may have navigation panes, edit fields, toolbars, etc. Navigating panes, objects and so on is a matter of keyboard cursor interactions: you navigate to and interact with text, a button, or a control element such as a slider or input value source; moving on from that, you stop interacting with the element (backing out) and then navigate to the other functions you want to interact with. Some principles rely on group-mode navigation, quick-nav tools, etc.; it all depends on how an app behaves.

If something like Aeolus were to be taken down an accessible pathway, as an example I would recommend a UI based purely on buttons with text label identifiers, so each button would be seen by a screen reader and the text identifier would state what the button was. It would be important that a screen reader can identify whether the button is enabled, as in a latch state, or is just a functional interaction like an “OK” or “Cancel” button; the difference is in the definition of function. It would be easy to turn Aeolus into a spoken system.

Interaction with Aeolus can be achieved, from my understanding, through two clear hardware principles:
1: Console based stop / piston interaction, or a custom hardware surface such as would be developed by companies like Yaeltex.
2: A touch screen environment.

One of the downsides of today’s digital consoles is the use of lit tabs, where there is a touch action but no positive physical difference between engaged and disengaged; the same goes for some drawstop systems. As a blind organist working with a pure console which has been electrified, the stops, whether tabs or draw stops, are physically noticed by running the hand or a finger over the area to find a raised stop indicating an active voice, couplers, etc. Digital organs, except for some custom builds, don’t offer this. Brands like Allen, Rodgers, etc. suffer this design issue on the basis of cost: an actual electro-mechanical or electro-magnetic (piezo) action for tab stops or draw stops means serious cost.

The advantage of Aeolus as a software based environment is that a control surface responding to basic MIDI / SysEx or other command principles could take custom surfaces. For a blind person it wouldn’t matter so much in a tactile sense if, as a good example, a stop switch had a bulb which would generate a temperature change (not hot, but a micro-sensitive reaction of the skin). There is also a pure focus on the use of the screen reader or, failing that, a text-to-speech engine as a program-wide resource, where you send the audio of the speech engine to a different output (say the computer’s own speakers / line out) which the user could then use to monitor functions pressed, etc. This wouldn’t be broadcast to the outputs of Aeolus: if this were installed in a church, you don’t want to hear “Piston N active” from the main speaker suite; you’d give the congregation a heart attack.

The Organnery system seems to be using a Debian distro; now, I’m not a Linux geek, though perhaps I should become one lol. In order for Aeolus to provide an accessible experience for configuration, voicing, etc., accessibility is indeed vital. I’ve already emailed Raphael at Organnery to raise this and offer my support, so I can’t say more than that.

For the Mac port, however, it would be a real blessing and advantage if a version of this could be built as a package for the Mac, using the standard developer resources and not Java / X11, which would cause issues with window object accessibility. In fact, Java is the worst developer tool for the blind, and is part of the dev source for Hauptwerk.

I hope some of this information helps.



That isn’t the complete stop list, and I can tell you why.

This software will run 6 manuals plus pedal, and you can go to over 30 stops per manual. It has already been tested on some rather interesting large-format consoles with well over 80 stops here in the UK.

That was only a demo. Aeolus is expansive and vast in what it can do; that is only the start.

Do the stops have variable values or are they binary, on / off?

Would your ideal / perfect-world implementation be a physical representation of all the stops? Would it have other information, e.g. preset selection? An 80-stop physical interface may be rather large. Would a banked solution work?


Yes. In the Zynthian world we are still considering how to provide the visual feedback in the first place, so we can consider any level of machine-to-brain enlightenment (apologies).

I like the idea of a stirring version of Toccata & Fugue interspersed with the occasional announcement of the arrival of an email, or worse…
I have a Logitech iFeel mouse, and I have to say it provided a very interesting feedback sensation, in that it bumped over window edges and buzzed in the hand at different rates over different buttons. Certainly a separate audio feedback channel, effectively talkback if you will, might seem suitable for the acknowledgements.

Interlude, I was just thinking of how best to communicate a physical manifestation to the body and was trying to think what a musician could wear that would not be intrusive and thought of rings on the finger…So I looked up vibrating rings . . .

I will not be doing that again. :frowning:

Probably as much as I found . . .

Vibrating Sensory Toys for Children with Disabilities and these don’t seem controllable to any large degree.

I wonder what we could contrive with a couple of piezos and a Zynthian…? :crazy_face: A switch that would vibrate as well, presumably with an LED above?

I also found this . . . which is just interesting . . .


VNCs of today’s spec are not designed for screen readers; rather embarrassing examples include Apple’s Remote Desktop client and server. Here’s why:

In order for a screen reader to engage with a VNC, it has to work on the premise of the node system’s shortcut definitions as part of it; it is only taking a GUI snapshot. The only way for a VNC to work with a screen reader, as I demonstrated in 2008, was where both systems, being Macs, were running VoiceOver at the same point: audio from the client was fed to the host controller, shortcuts were given control permissions from the host, and so on. Even then it was slow and buggy.

Today this is still a battle, so the only way is really understanding terminal functions, X11 functions or driving the client system directly.

One of the issues of custom control systems for a console is cost. A good example is this:

My dream spec is a 20-stop-per-manual console, so 4 manuals + pedal = 100 stops; then add couplers, divisional and general pistons, sequence functions, etc., and you’re talking a lot. Having what’s known as “positive feedback” buttons means recallable buttons, which require either piezo or electro-mechanical action to physically latch / unlatch stop buttons. Pistons would be the same in context, but would require a control method to auto-cancel / lock out the other buttons in a piston row, as only one piston per manual would be engaged at a time, so an auto-cancel would have to be applied. This is why most if not all consoles just use momentary switches for piston / sequencer functions; whether lit or not is the developer’s option.
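The auto-cancel behaviour described above is essentially radio-button logic, and could be sketched like this in Python. The class structure is a hypothetical illustration, not any console firmware’s actual code.

```python
# Sketch: auto-cancel logic for a row of divisional pistons, where
# engaging one piston releases whichever piston in the same row was
# previously engaged (radio-button behaviour).

class PistonRow:
    def __init__(self, count):
        self.count = count
        self.engaged = None  # at most one piston engaged per row

    def press(self, piston):
        """Engage `piston` (0-based); return the piston that was released, if any."""
        if not 0 <= piston < self.count:
            raise ValueError("piston out of range")
        released = self.engaged
        if released == piston:
            return None  # already engaged; a latching console would ignore this
        self.engaged = piston
        return released
```

The value returned by `press` tells the driver layer which button’s lamp or latch to drop, which is exactly the “auto-cancel” a motorised or lit piston rail would need.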

There’s a developer out there called Yaeltex, a very impressive company to get to know; they can produce unique control panels for any musical situation, whether as direct instruments or control surfaces. I’ve worked on some designs for organ controls for Aeolus / Organteq with some interesting results. To drive Aeolus as a “20 stops per manual” console build, as an example, requires 5 rows of 2 x 12 lit pads; these provide 20 stops per manual plus trem and couplers. Bear in mind Aeolus is modular, so trems could be installed anywhere within the manuals: you could have a trem just on the swell, on swell and choir, or load it up. Each trem is adjustable, as I’m told, over MIDI CC, so assignable pots could be included. My design concept works on 2 custom panels: the first is the stop suite; the second is configured as a master edit pack for divisional and general piston recalls, trem programming, general commands, etc.

I also worked on a theoretical programming surface, but I would need to understand the entire programming (voicing) toolkit as to what can be assigned over MIDI CC / SysEx, etc. Put it this way: a tactile console is fully viable. The difference between tactile and accessible to sight impaired and blind users is what information is made available, and how. In this case, engaging the speech system as an output not only provides the state of stops and pistons; if you were going into the guts of Aeolus to voice and customise the system, that is where the screen reader UI really comes in.

Thank you so much for your well considered input; it shows great thought and a creative mind. Just what’s needed.



Any stop or piston, basically any controller working as a switch is purely binary.

The difference between a momentary switch and a powered switch or control, such as a draw stop or tab stop (classed as electro-magnetic or electro-mechanical depending on the driver), is that a driver of some form is used, which requires a separate message handled by a driver board.

That’s not a concern, so it doesn’t need to be examined. The only time it has to be examined is when a console has a motorised stop system and a sequencer system has to instruct stops / pistons to change.

I like your idea and investigation of how data can be made available.

If you were working with a console in a headless state, it would fall to the console, with the host computer embedded within the casement, to provide a stop-by-stop announcement; or, if a divisional or general piston were pressed, that particular assigned button would be announced. Now, with divisional and general pistons, a physical console numbers them only. What if you could actually name each divisional or general piston, so that the stops included as active are spoken as a description? What if you used the sequencer functions to navigate between banks and you had created a set of banks with different stops: you could name the banks individually instead of 1 - 1; it could be “Church Bank - Processional”, as an example. That could apply not only to console based buttons but to toe studs also.
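The named-piston idea above could be sketched as a simple lookup from piston number to a spoken description. The piston names and stop lists here are hypothetical examples of the kind of data a user would configure.

```python
# Sketch: spoken descriptions for named pistons, as suggested above.
# The names and stop lists are hypothetical user-configured examples.

PISTONS = {
    1: ("Church Bank - Processional",
        ["Montre 8", "Prestant 4", "Trompette 8"]),
    2: ("Soft Accompaniment",
        ["Bourdon 8", "Dulciana 8"]),
}

def piston_announcement(number):
    """Build the phrase a speech engine would speak for a general piston."""
    entry = PISTONS.get(number)
    if entry is None:
        return f"General piston {number}, empty"
    name, stops = entry
    return f"General piston {number}, {name}: " + ", ".join(stops)
```

The same table-driven approach would cover divisional pistons, toe studs and sequencer banks: anything with a number can carry a name and a stop list to be spoken on demand.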

To examine a different direction in which I have a further interest: touch screen based controls. This would give a degree of equality to the system.

There’s this attitude that touch screens are not ideal for the blind. Rubbish. The iPad, iPod, iPhone, etc. are perfect examples of how a touch screen provides accessibility in every direction of the OS to a blind or sight impaired user.

What if, as a console design, you examined a design from a Hauptwerk based system, where 2 touch screens are installed on the jambs of a console, just as a principle example? You could learn the layout of the console’s touch screen through Aeolus / Organnery, etc. and be confident in what you are doing. You would know that a stop was active / inactive without lag, because you’re not waiting on a converter board’s data; you’re relying purely on the touch screen.

To me, that’s a system I’ve been dreaming of, and it’s perfectly doable. You could tailor the layout of stops for both displays and include divisional pistons, general pistons, sequencer modes, etc. Better still, you could voice an instrument from the display with the screen reader running.

I have examined a programming surface and am about to discuss it with the devs behind Organnery, because this system is being successfully used in professional situations such as churches here in the UK.

I have always wanted to work with Hauptwerk; in fact, I’ve lost count of the times other organists have promoted that software to me and demanded I use it. I wish I could, I genuinely wish I could, and I wish other blind and severely sight impaired organists could, but Milan Digital Audio refuse to do this. They just don’t care. For years, others and I have tried our hardest, presented cases, evidence, etc. They just don’t care. So they’ve lost money from the disabled music market, and any decent representatives, such as myself, willing to advocate in this industry. So, presently, I’m using Modartt’s Organteq as a stop gap; there is no console here at present, as I’m in the process of having one built with some help.


This is an element not worth pursuing. Vibration control isn’t a viable solution for a console environment with multiple functions assigned, and the cost of the particular hardware, as an example, is not viable. We need to concentrate purely on providing a spoken environment; the tactile environment is the responsibility of the console itself, or of a developer building a MIDI console, etc.

Thank you for making me laugh so much my sides split. I never thought someone could do that level of research and follow that rather dirty rabbit hole. (hole… oops)

Lew :slight_smile:

If I have but one purpose :slight_smile:


Someone had to lower the tone lol :wink:


Ok Two purposes… :smiley:

Well, get ready for blonde moment 101…

I wondered what Zynthian was at the start of all this, and decided on a search. HOLY HEAVEN ON EARTH!!! THAT’LL DO! That’ll do nicely indeed.

Now this is the way forward, and a project worthy of a screen reader system running. Do you know how annoying it is to work with a workstation like a Korg Kronos, Yamaha Montage, etc. with a touch screen and no screen reader support, constantly asking someone to read / describe the screen and work out a ridiculous layout? That is hell for me. This would be the cure. Who can I contact by email as the developers behind this, to put a case forward?

You are talking directly into their ears now.

The chief honcho is @jofemodo. The rest of us are his minions. FYI, @wyleu and I are also in the UK. Jofe is Spanish.

This could be approached in the traditional, possibly suboptimal way of implementing a screen reader for the main UI, feeding the Zynthian’s headphone output; but may I suggest a more radical approach which uses the API to provide more bespoke / universal access, with keyboard driven command and voice feedback? This may be an undesirable approach, as it requires dedicated development effort, but my experience in assistive technology tells me that bolt-on solutions are often less effective than a workflow based approach.


Thank you :slight_smile:
I like your thought process. However, the use of keyboards for data entry to a UI like this isn’t going to be a good magnet for attention. The touch UI alone is viable using a built-in screen reader to handle the touch layer; manipulation of data from the controllers can also be spoken. So leaving the keyboard element out and manipulating this from the UI itself would be the best way forward.

I’ve been there as a synth workstation owner over the years; only 2 machines ever gave me a degree of proper support, both Kurzweils, but editing them meant using a third party solution on a Mac / iPad. Yes, good, but the downside was an extra tool to carry.

Something like the Zynthian could be a positive step forward, if a screen reader layer were added.


I have done a quick and dirty hack to test a proof of concept and have got my Zynthian announcing the screen title as each is opened. The announcement is issued through the headphone outlet whilst synth sounds issue through the main audio outputs. This is using the application espeak piped through aplay. For more complete integration there would need to be some more code wrapped around the various workflows, e.g. to read the selected item in lists, the available or adjusted control and value, etc.
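For anyone wanting to try the same trick, the espeak-through-aplay pipeline might look something like the sketch below. The ALSA device name `plughw:Headphones` is a hypothetical example; the actual name depends on the Pi’s configuration, and this is my reconstruction of the approach rather than the exact hack described above.

```python
# Sketch: speak an announcement through a specific ALSA device (e.g. the
# Pi's headphone output) so it never reaches the main synth outputs.
# Device name "plughw:Headphones" is an assumed example.
import subprocess

def announce_cmd(text, device="plughw:Headphones"):
    """Build the two halves of the espeak | aplay pipeline without running it."""
    return (["espeak", "--stdout", text],        # render speech as WAV on stdout
            ["aplay", "-q", "-D", device])       # play it quietly on the device

def announce(text, device="plughw:Headphones"):
    """Speak `text` on the given device (requires espeak and aplay installed)."""
    speak, play = announce_cmd(text, device)
    espeak = subprocess.Popen(speak, stdout=subprocess.PIPE)
    subprocess.run(play, stdin=espeak.stdout)
    espeak.stdout.close()
    espeak.wait()
```

Calling `announce("Mixer")` from the screen-change hook would give the behaviour described: screen titles spoken on the headphone outlet while the engines keep the main outputs.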

I don’t think this is quite what you had in mind. Zynthian does not by default display the engines’ own user interfaces; instead it displays a menu system giving access to various system parameters, including the sound engine parameters, in pages of 4 at a time. This approach would provide spoken feedback of navigation through Zynthian using its four rotary encoders.

Even if this does not meet your requirements, I think it may be a useful addition to Zynthian, making the standard UI more accessible to visually impaired users. It might also be useful for remote control where the user interface is not visible, or is too small or far away. This probably warrants a feature request; I may add it to the issue tracker (but after my wedding anniversary meal, which is imminent).
