speechd-discuss

Speech Dispatcher backend


From: Hynek Hanke
Subject: Speech Dispatcher backend
Date: Mon Sep 4 09:59:52 2006

> There is an inherent difficulty in defining a finite set of "things"
> that might be represented by sounds. Are you looking just to define
> sounds for events? For objects? Both? What are the bounds for the
> set? 

I don't know yet; determining that is part of the task. We want to
provide a list of sound icon (or perhaps event) names that screen
readers and other speech-enabled applications will be able to use
and share. Of course, applications must always be able to define new
sound events themselves too.
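
To make this concrete, here is a rough, untested sketch of how an
application might request such a shared icon over SSIP. The icon name
"message-new" is only a made-up example, and the TCP port 6560 is the
assumed default; the exact replies depend on the local configuration:

    # Sketch: request a shared sound icon by name over SSIP.
    # "message-new" is a hypothetical icon name; port 6560 is the
    # assumed default Speech Dispatcher TCP port.
    import socket

    def send(sock, command):
        """Send one SSIP command and return the server's reply."""
        sock.sendall(command.encode('utf-8') + b'\r\n')
        return sock.recv(4096).decode('utf-8')

    s = socket.create_connection(('localhost', 6560))
    send(s, 'SET SELF CLIENT_NAME user:myapp:main')
    send(s, 'SOUND_ICON message-new')   # shared, well-known icon name
    send(s, 'QUIT')
    s.close()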

> Have you thought about supporting more than just simple audio icons?
> We're looking to use continuous looping sounds to represent persistent
> states in LSR. There's also some benefit to having the ability to
> synthesize sounds for particular concepts too. For instance, how about
> a sound to represent the progress of some task (e.g. an audible
> progress bar). It'd be great to be able to use MIDI or FM synthesis
> to, for example, increase the pitch of a sound toward some reference
> value as progress is made toward the terminus. 

This had never occurred to me. I think it is a great idea!
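
Just to sketch the idea (this is only an illustration, nothing that
exists in Speech Dispatcher): progress could be mapped to a pitch that
rises toward a reference frequency, and each tick of the task would
play a short tone at that pitch. All names and the frequency range
below are arbitrary:

    # Standalone sketch of an "audible progress bar": map progress in
    # [0, 1] to a pitch rising toward a reference frequency and write
    # a short sine tone for each tick. Names and ranges are illustrative.
    import math
    import struct
    import wave

    RATE = 44100          # samples per second
    BASE_HZ = 220.0       # pitch at 0 % progress
    TARGET_HZ = 880.0     # reference pitch reached at 100 % progress

    def progress_to_frequency(progress):
        """Interpolate between BASE_HZ and TARGET_HZ for progress in [0, 1]."""
        return BASE_HZ + (TARGET_HZ - BASE_HZ) * max(0.0, min(1.0, progress))

    def write_tone(path, progress, duration=0.15):
        """Write a short mono 16-bit WAV tone whose pitch encodes the progress."""
        freq = progress_to_frequency(progress)
        n = int(RATE * duration)
        samples = (int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / RATE))
                   for i in range(n))
        with wave.open(path, 'wb') as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(RATE)
            w.writeframes(b''.join(struct.pack('<h', s) for s in samples))

    # Four ticks of the audible progress bar, each a bit higher in pitch.
    for p in (0.25, 0.5, 0.75, 1.0):
        write_tone('progress_%d.wav' % int(p * 100), p)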

> Finally, we're also looking at more experimental concepts such as
> current sound streams and spatial sound. We can currently achieve both
> of these using a free, but closed source library (optional for the
> user to install). It would be great to move to a completely open
> source solution if possible. Is it possible to instantiate two
> references to a speech backend that supports ALSA, speak to both of
> them simultaneously, and have the output mix properly in ALSA using
> dmix? 

We are not yet there. In SSIP, all the priorities are designed for
serialized speech (or sound). In fact, ensuring this serialization
(speech from multiple applications, or from multiple connections of
the same application, does not overlap, and no application blocks
another) is one of the main goals of Speech Dispatcher. However, I
can very well imagine a special priority (or set of priorities) for
spatial sound, resulting in concurrent streams mixed by the sound
technology in use. We have already heard that request from some blind
users too.
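
For illustration only, this is roughly how a client selects one of the
existing (serialized) priorities today; a "spatial" priority producing
concurrent, mixed streams does not exist and appears below only as a
comment. Port 6560 and the exact replies again depend on the local setup:

    # Sketch: set a message priority over SSIP and speak one message.
    # Existing priorities (serialized): important, message, text,
    # notification, progress. A "spatial" priority is purely hypothetical.
    import socket

    def send(sock, command):
        sock.sendall(command.encode('utf-8') + b'\r\n')
        return sock.recv(4096).decode('utf-8')

    s = socket.create_connection(('localhost', 6560))
    send(s, 'SET SELF CLIENT_NAME user:myapp:main')
    send(s, 'SET SELF PRIORITY notification')
    # send(s, 'SET SELF PRIORITY spatial')   # hypothetical future priority
    send(s, 'SPEAK')                          # server replies it is receiving data
    s.sendall(b'New mail has arrived.\r\n.\r\n')  # message text, dot-terminated
    s.recv(4096)                              # acknowledgement for the message
    send(s, 'QUIT')
    s.close()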

With regards,
Hynek Hanke




