Friday, April 9, 2010

Self-Voicing Apps and Screen Readers could live together

Screen readers are applications that interrogate the desktop, other applications, and system events in order to provide users with an aural rendering.

Self-voicing applications already render aurally, and what is spoken is driven by the user's interaction with the application.

These worlds can collide, and traditionally folks have been leery of self-voicing applications. I think of this the same way I think of an application that doesn't follow the visual conventions of a given platform or desktop.

But.

What if there were a standard way for screen readers to hook into the TTS system and find out what text is queued for speech? Then, if the text came from a self-voicing application, the screen reader could pull it from the queue and incorporate it as it sees fit.

So just as a visual application should pay attention to "desktop integration", an aural application could pay attention to "screen reader integration", in this case by allowing screen readers to intercept its speech.
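
To make the shape of that hook concrete, here's a minimal sketch in Python. Every name in it (SpeechQueue, Utterance, register_interceptor) is invented for illustration; no real TTS stack exposes exactly this API. It just shows a speech queue that lets a screen reader claim utterances before the engine speaks them.

    # Hypothetical sketch only: these names are invented, not any real TTS or AT API.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Utterance:
        text: str
        source_app: str      # which application queued this text
        self_voicing: bool   # did it come from a self-voicing application?

    # An interceptor sees each utterance before the TTS engine does.
    # Returning None claims the utterance; returning it (possibly modified)
    # lets it pass through to the engine.
    Interceptor = Callable[[Utterance], Optional[Utterance]]

    class SpeechQueue:
        """A toy stand-in for the system-wide TTS queue."""

        def __init__(self) -> None:
            self._interceptors: list[Interceptor] = []

        def register_interceptor(self, fn: Interceptor) -> None:
            self._interceptors.append(fn)

        def enqueue(self, utt: Utterance) -> None:
            for fn in self._interceptors:
                result = fn(utt)
                if result is None:
                    return       # an interceptor (e.g. a screen reader) claimed it
                utt = result
            self._speak(utt)

        def _speak(self, utt: Utterance) -> None:
            print(f"[TTS engine] speaking: {utt.text!r}")

    # The screen reader registers itself and pulls self-voicing text out of
    # the queue, incorporating it into its own output as it sees fit.
    def screen_reader_interceptor(utt: Utterance) -> Optional[Utterance]:
        if utt.self_voicing:
            print(f"[screen reader] incorporating from {utt.source_app}: {utt.text!r}")
            return None          # claimed; the engine never sees it
        return utt               # ordinary speech passes through untouched

    queue = SpeechQueue()
    queue.register_interceptor(screen_reader_interceptor)

    queue.enqueue(Utterance("You have new mail.", "self-voicing-mail-app", True))
    queue.enqueue(Utterance("Window activated.", "desktop-shell", False))

The point of the design is that interception happens at the speech queue, not inside either application: the self-voicing app keeps speaking the way it always has, and the screen reader opts in to claim that speech.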

Thoughts?

Thanks for reading.