Friday, April 9, 2010

Self-Voicing Apps and Screen Readers Could Live Together

Screen readers are applications that interrogate the desktop, applications, and events, in order to provide users with an aural rendering.

Self-voicing applications already render aurally, and as users interact with the application, they control what is spoken.

These worlds can collide, and traditionally folks have been leery of self-voicing applications. I think of this the same way I think of an application that doesn't follow the visual conventions of a given platform or desktop.


What if there were a standard way for screen readers to hook into the TTS system and find out what text is queued for speech? Then, if the text came from a self-voicing application, the screen reader could pull it from the queue and incorporate it as it sees fit.

Just as an application should pay attention to "desktop integration", so too an aural application could pay attention to "screen reader integration", perhaps by allowing screen readers to intercept its speech.
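The interception idea can be sketched in code. This is a minimal, purely hypothetical model (the `SpeechQueue`, `Utterance`, and `register_interceptor` names are invented for illustration and are not part of any real TTS stack): self-voicing apps enqueue utterances tagged with their source, and a registered screen reader may claim an utterance from the queue before the TTS engine speaks it.

```python
# Hypothetical sketch of screen-reader interception of a shared TTS queue.
# None of these names correspond to a real speech API.

from collections import deque
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Utterance:
    source: str  # the application that queued the text, e.g. "rhythmbox"
    text: str


class SpeechQueue:
    def __init__(self) -> None:
        self._queue = deque()
        self._interceptor: Optional[Callable[[Utterance], bool]] = None

    def register_interceptor(self, fn: Callable[[Utterance], bool]) -> None:
        """A screen reader registers here; its callback returns True to
        claim an utterance and incorporate it into its own output."""
        self._interceptor = fn

    def enqueue(self, source: str, text: str) -> None:
        """A self-voicing application queues text for speech."""
        self._queue.append(Utterance(source, text))

    def drain(self) -> List[str]:
        """Return the text the TTS engine would actually speak aloud."""
        spoken = []
        while self._queue:
            utt = self._queue.popleft()
            if self._interceptor and self._interceptor(utt):
                continue  # the screen reader took it; the engine stays silent
            spoken.append(utt.text)
        return spoken
```

In this model the screen reader decides per-utterance: it might claim everything from a chatty self-voicing app and re-present it with its own verbosity settings, while letting other utterances through untouched.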


Thanks for reading.


Richard said...

As an author of support for self-voicing in existing applications (none really committed upstream, but all available as listed below), I think I would greatly enjoy having some way to go "I'll handle the speech of this, thank you." However, I don't necessarily take into consideration all the needs of someone who does require a speech enabled desktop, assuming that a screen reader might provide a more consistent feel. That said, I'm sure more careful application-specific support can do a lot to enhance any user's experience :)

--shamelessness below :D--

I'll note I have a patch for Evolution to optionally read aloud the sender and/or the subject of incoming mail using Speech Dispatcher. Bug 607610.

I've also written a plugin for Rhythmbox, called DJAqua, to announce the previous and next track's artist and title; it has its own git repository.

I've also written a plugin for Pidgin, called IMAqua, to announce aloud a message's sender and/or content (preferably only when you're set to away); it too has its own git repository.

There's also a small, simple shell script which reads aloud the hour on the hour called BellAqua.
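The real BellAqua script isn't reproduced here, but a script in that spirit could be as small as the following sketch, using `spd-say`, the command-line client that ships with Speech Dispatcher (guarded so it exits cleanly on systems without it), scheduled from cron:

```shell
#!/bin/sh
# Hypothetical sketch in the spirit of BellAqua; the real script may differ.
# spd-say is the command-line client shipped with Speech Dispatcher.
# Schedule it on the hour from cron:  0 * * * * /path/to/bellaqua.sh

hour=$(date +%H)   # 24-hour clock, zero-padded
if command -v spd-say >/dev/null 2>&1; then
    spd-say "It is now $hour hundred hours"
fi
```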

Why? Mostly because I like working away from my computer, while knowing what's going on without having to get up and check it. Instead of just getting a new mail noise, I can actually know whether it's important without getting up. Same with the IMs. The songs so I don't have to get up and check which song it is that I just enjoyed so much. I'm aware that they might not all be optimal for public situations, so that's why I can, say, just have the sender of messages spoken aloud instead of the content too, if the sender is less sensitive.

They all use Speech Dispatcher because I hear that's going to replace GNOME Speech eventually. They all have Aqua in their name because that's just me. :)

David Bolter said...

@Richard: that's very cool.
