From: Jason White
Subject: Re: [gnuspeech-contact] Re: Thoughts on GNUSpeech and possible accessibility applications
Date: Wed, 8 Apr 2009 14:06:09 +1000
User-agent: Mutt/1.5.18 (2008-05-17)

David Hill <address@hidden> wrote:

> You may be interested to check out the paper that describes the "Touch 'n 
> Talk" system that Dalmazio mentioned in an earlier email.  The direct 
> link is:
>
> http://pages.cpsc.ucalgary.ca/~hill/papers/ieee-touch-n-talk-1988.pdf
>
> It is an item on my university web site to which you can also navigate.

Thank you, David, for the reference. This is an interesting paper. It also
reminds me of a related solution, developed at approximately the same time
by Jim Thatcher for IBM Screen Reader, in which a separate keypad was used
for reading and navigation functions. Although I never had an opportunity
to use it, I recall that one of the principal difficulties in the early
versions was said to be that the system wouldn't automatically read new
text presented on screen, or read text in response to cursor movement:
users had to switch frequently between the QWERTY keyboard and the screen
reader's keypad while interacting with the application software.

The research in which I personally find the most insight is that of T.V.
Raman, first in his AsTeR software (Audio System For Technical Readings:
http://www.cs.cornell.edu/home/raman/) and then in Emacspeak
(http://emacspeak.sourceforge.net/ and, for the latest source code,
http://emacspeak.googlecode.com/).

Emacspeak works best with synthesizers that allow voice characteristics to
be changed dynamically, for example the DECtalk, and it would be
interesting to know whether GNUSpeech might eventually support such audio
formatting techniques.
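
To make the idea concrete, here is a rough Python sketch of what audio
formatting entails: properties of the text (emphasis, quotation, and so
on) are mapped onto dynamic changes in the speaking voice, much as visual
formatting maps them onto typography. The DECtalk-style inline commands
and the numeric values below are illustrative assumptions on my part, not
code taken from Emacspeak or any synthesizer manual:

    # A minimal sketch of audio formatting, assuming DECtalk-style
    # inline escape codes. The specific commands and numbers are
    # illustrative assumptions; a real implementation would follow
    # the synthesizer's documented command set.

    DEFAULT_PITCH = 122   # assumed default average pitch (Hz)
    DEFAULT_RATE = 180    # assumed default rate (words per minute)

    def audio_format(text, emphasis=False, quotation=False):
        """Wrap text in voice-change commands so that document
        structure is rendered audibly, the aural analogue of bold
        or italics on paper."""
        prefix, suffix = "", ""
        if emphasis:
            # Raise the average pitch for emphasized text, then
            # restore the default afterwards.
            prefix += "[:dv ap 150]"
            suffix = "[:dv ap %d]" % DEFAULT_PITCH + suffix
        if quotation:
            # Slow the speaking rate slightly for quoted material.
            prefix += "[:rate 150]"
            suffix = "[:rate %d]" % DEFAULT_RATE + suffix
        return prefix + text + suffix

    # For example, embedding audio_format("must", emphasis=True) in a
    # sentence sent to the synthesizer makes the word audibly distinct.

The reason dynamic control matters is that a synthesizer which cannot
accept such commands mid-utterance forces the screen reader to flush and
restart speech at every change of voice, which quickly becomes unusable.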

Further, in his latest work at Google on the accessibility of mobile
telephones, Raman has devised a means of making touch screen input achievable
in an "eyes-free" context.

I trust that this digression into speech interface research is not
unwelcome on the list; to ensure that it remains on topic, I have sought
to connect it to the functional requirements of a text-to-speech system.

I think there is a need for a free (as in freedom) TTS system capable of
supporting the products of past and current speech interface research
while, just as importantly, providing opportunities for future research
and free software development efforts.

I also agree with David's observation that many of the most important
requirements are already treated in his 1988 paper, although advances such
as Raman's "audio formatting" techniques add further desirable features.




