speechd-discuss

retrieving synthesized audio data?


From: Luke Yelavich
Subject: retrieving synthesized audio data?
Date: Thu, 4 Feb 2010 12:04:00 -0800

On Thu, Feb 04, 2010 at 11:40:45AM PST, Jacob Schmude wrote:
> Hi everyone
> Is there a way to use speech dispatcher to synthesize audio data, but
> then return it to the calling program instead of outputting it to an
> audio backend?

There are currently no API functions to do this, at least as part of the Python
bindings or the library interface via C. There may be an SSIP command to do
this, but I doubt it.
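
To illustrate, the current Python bindings only let you send text to be spoken;
there is no call that hands the synthesized audio back. A minimal sketch,
assuming the speechd Python module and an espeak output module are installed:

import speechd

# Open an SSIP connection to the running speech-dispatcher instance.
client = speechd.SSIPClient('audio-retrieval-test')
client.set_output_module('espeak')  # assumption: the espeak module is available
# The audio goes to speech-dispatcher's own audio backend, not back to us.
client.speak('Hello from speech-dispatcher')
client.close()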

I would like to implement this at some point in the future, but due to the way 
speech-dispatcher communicates with clients, it will require some thought. 
Currently speech-dispatcher communicates with clients over TCP, and uses the 
SSIP protocol over that connection to send text to be spoken and to return 
index markers back to the client, etc.
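
To make that concrete, here is a rough sketch of an SSIP exchange over TCP in
Python, assuming speech-dispatcher is listening on localhost port 6560 (the
default TCP port); the commands are standard SSIP, but treat the exact reply
codes as illustrative:

import socket

# Connect to the speech-dispatcher server (assumed default host/port).
sock = socket.create_connection(('127.0.0.1', 6560))

def send(cmd):
    # SSIP lines are CRLF-terminated; replies start with a numeric status code.
    sock.sendall(cmd.encode('utf-8') + b'\r\n')
    return sock.recv(4096).decode('utf-8')

print(send('SET SELF CLIENT_NAME "user:example:main"'))
print(send('SPEAK'))                     # server should answer 230 OK RECEIVING DATA
sock.sendall(b'Hello world\r\n.\r\n')    # message text, terminated by a lone "."
print(sock.recv(4096).decode('utf-8'))   # e.g. 225 OK MESSAGE QUEUED
sock.sendall(b'QUIT\r\n')
sock.close()

Note that nothing in this exchange carries audio back to the client, which is
exactly the gap being discussed.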

I haven't thought about what to do beyond adding it to speech-dispatcher. TTSAPI has 
a function to do this, or at least the TTS API document mentions it as 
something that would be desirable. However, since the TTSAPI code hasn't been 
touched since about May 2008 (I think), I doubt that this has been implemented.

Long term, I think we should implement the API part of the TTSAPI document in 
speech-dispatcher, so any future API additions we make now should follow a 
design similar to what is outlined in the TTSAPI document. You can find the 
TTSAPI document here: 
http://cvs.freebsoft.org/doc/tts-api/tts-api.html

I intend to write up some roadmap/specification documentation covering what I 
would like to work on with speech-dispatcher next. I think we should first get a 
0.6.8 release out the door, then start thinking about what needs major work to 
ensure speech-dispatcher remains usable both as a system service for those who 
want it, and in the ever-changing multi-user desktop environment. One idea I 
have is to consider D-Bus as a client/server communication transport layer. 
This could even go so far as to solve the issue of using system-level clients 
like BrlTTY with a system-level speech-dispatcher, which would then 
communicate with a user-level speech-dispatcher, for example.
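
Purely as a thought experiment, a D-Bus transport might look something like the
following from a client's point of view; speech-dispatcher exposes no such
interface today, and the bus name, object path and method below are invented
for illustration only:

import dbus

# Hypothetical: connect to a speech-dispatcher service on the session bus.
bus = dbus.SessionBus()
proxy = bus.get_object('org.freedesktop.SpeechDispatcher',    # invented bus name
                       '/org/freedesktop/SpeechDispatcher')   # invented object path
speech = dbus.Interface(proxy, 'org.freedesktop.SpeechDispatcher.Speech')
speech.Speak('Hello over D-Bus')  # invented method; shown only to sketch the idea

A system-level instance could expose the same interface on the system bus and
relay to a per-user instance, which is roughly the BrlTTY scenario above.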

Please stay tuned. I am away from home this week, and likely won't get to these 
documents till next week at the earliest. I'll keep this list posted when I 
have things ready for review.

Luke


