UI Sounds and focus - consider earcons for Voiced experience #168
@terracoda I'm not aware of this coming up before.
Since most sims are intentionally designed to be "shallow" with respect to depth - we want people to feel comfortable pushing buttons and seeing what happens, not worrying about 'messing up' anything - my initial thought is that distinguishing interactive vs. non-interactive sounds would be helpful, but distinguishing much further than that would have diminishing returns for users in a situation where the focus is best kept on "play with it all" rather than spending energy and attention figuring out what everything is semantically. Maybe interactive, non-interactive, and then a variant of the interactive sound that plays when focus lands on a custom object, since those objects are quite unique (three sounds total).
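A minimal sketch of that three-tier idea, using hypothetical names and flags (nothing here is from the PhET codebase):

```ts
// Hypothetical mapping from a focused object's traits to one of the three proposed earcons.
type FocusEarcon = 'interactive' | 'interactiveCustom' | 'nonInteractive';

function earconForFocus( target: { isInteractive: boolean; isCustomObject: boolean } ): FocusEarcon {
  if ( !target.isInteractive ) {
    return 'nonInteractive';
  }
  return target.isCustomObject ? 'interactiveCustom' : 'interactive';
}
```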
The general idea of focus-based earcons seems quite interesting to me, and I'd be happy to build prototypes and try things out. I'm not sure where this would land in terms of my priorities though, and at the moment I am quite limited in the amount of time I have for a11y in general (20%). Assigning to @terracoda to answer @emily-phet's questions above, and to @emily-phet to set the priority of any sound design and/or development effort to investigate this idea.
This would be great, but there's no time for it now. Marking as deferred.
The screen reader experience contains spoken document and interaction semantics, and some screen readers even have non-speech sounds that indicate certain things, like that an element is interactive. These non-speech sounds may be customizable in some way; I am not up to speed on how they work.
Our Voicing feature does not directly target blind learners, but we do not necessarily exclude blind learners from a reasonably good experience - especially in collaborative contexts where learners with and without vision are learning together.
I am wondering if we have considered the use of UI sounds that could fire on focus events and thus communicate something about the object - something as general as "it is interactive", or something more specific, like it's a button, checkbox, slider, or custom draggable object.
We have a library of UI sounds for activation events - pressing a button, checking a checkbox, picking up a draggable object. I am wondering if we have considered using any non-speech sounds (i.e., earcons) that could fire on focus events.
The reason I am asking is that as the Voicing feature becomes more popular, it might be used by more learners who are used to hearing spoken semantics. Focus-based earcons might be a helpful addition to the Voicing experience.
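As a rough illustration of what a focus-triggered earcon could look like, here is a plain DOM / Web Audio sketch. This is not PhET's tambo/scenery sound API; the selectors, pitches, and the `data-custom-draggable` marker are invented for the example.

```ts
// Sketch: play a short tone whenever an element receives keyboard focus,
// with a different pitch per category of focused element.
const audioContext = new AudioContext();  // note: may start suspended until a user gesture

function playEarcon( frequency: number, durationSeconds = 0.08 ): void {
  const oscillator = audioContext.createOscillator();
  const gain = audioContext.createGain();
  oscillator.frequency.value = frequency;
  gain.gain.setValueAtTime( 0.2, audioContext.currentTime );
  gain.gain.exponentialRampToValueAtTime( 0.001, audioContext.currentTime + durationSeconds );
  oscillator.connect( gain ).connect( audioContext.destination );
  oscillator.start();
  oscillator.stop( audioContext.currentTime + durationSeconds );
}

// focusin bubbles, so one listener on the document covers all focusable elements.
document.addEventListener( 'focusin', event => {
  const target = event.target as HTMLElement;
  if ( target.matches( '[data-custom-draggable]' ) ) {
    playEarcon( 660 );  // hypothetical marker for custom draggable objects
  }
  else if ( target.matches( 'button, input[type="checkbox"], input[type="range"]' ) ) {
    playEarcon( 880 );  // standard interactive UI component
  }
  else {
    playEarcon( 440 );  // anything else that can take focus
  }
} );
```

In a real sim this would presumably route through the existing sound library and its output level controls rather than raw oscillators, but the wiring point (a focus listener choosing among a small set of earcons) would be the same.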