speechd-discuss

retrieving synthesized audio data?


From: Jacob Schmude
Subject: retrieving synthesized audio data?
Date: Fri, 05 Feb 2010 08:51:23 -0700

Hi
That's not streamlined enough for the design I'm going for. This isn't
just for my use; I want to make it as friendly as possible, and messing
around manually with audio configurations doesn't fit that. Also, this is
likely to be used at the same time as a screen reader, so I don't want to
get into a situation where that is accidentally redirected as well. I'd
need specific audio data, generated in a specific order, with no
possibility of outside influences corrupting the audio. For now, I'll just
implement it with espeak instead and do so as cleanly as possible, so that
switching to a speech-dispatcher method in the future won't be too hard
once sd supports this.
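
For reference, the espeak route could look roughly like the following
minimal sketch. It assumes espeak's C API (speak_lib.h); the output file
name is illustrative and error handling is kept to a minimum.

#include <espeak/speak_lib.h>
#include <stdio.h>
#include <string.h>

static FILE *raw_out;

/* espeak hands each chunk of synthesized 16-bit mono PCM to this callback.
   wav == NULL marks the end of the utterance; return 0 to keep going. */
static int synth_callback(short *wav, int numsamples, espeak_EVENT *events)
{
    (void)events;
    if (wav && numsamples > 0)
        fwrite(wav, sizeof(short), (size_t)numsamples, raw_out);
    return 0;
}

int main(void)
{
    /* AUDIO_OUTPUT_RETRIEVAL delivers samples to the callback instead of
       playing them; the return value is the sample rate in Hz. */
    int rate = espeak_Initialize(AUDIO_OUTPUT_RETRIEVAL, 0, NULL, 0);
    if (rate < 0)
        return 1;

    raw_out = fopen("speech.raw", "wb");   /* illustrative file name */
    espeak_SetSynthCallback(synth_callback);

    const char *text = "Hello from espeak.";
    espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                 espeakCHARS_AUTO, NULL, NULL);
    espeak_Synchronize();                  /* wait until all audio is delivered */

    fclose(raw_out);
    espeak_Terminate();
    fprintf(stderr, "wrote raw 16-bit mono PCM at %d Hz\n", rate);
    return 0;
}

Since nothing is sent to the sound card in retrieval mode, a screen reader
running at the same time should be unaffected.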
Thanks for all the suggestions

On Fri, 2010-02-05 at 18:28 +1100, Tim Cross wrote:
> If all that is needed is to catch audio output into a .wav file (or similar),
> it can be done very easily with ALSA using a .asoundrc file. For example, this
> is how I had mine configured so that all sound output sent to the audio card
> was captured:
> 
> 
>  pcm.copy {
>      type plug
>      slave {
>          pcm "hw"
>      }
>      route_policy copy
>  }
> 
> This will generate a wav file, which you can redirect to something like lame
> to generate an mp3 file and save space. By default, it writes to a file in the
> /var hierarchy (see the ALSA plugins docs for more details). 
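
With current alsa-lib, a capture device of this kind is usually built on the
"file" PCM plugin, which copies everything played through it into a file while
passing the audio on to the slave. The following is only a minimal sketch; the
device name, output path and format are illustrative and not necessarily the
exact configuration referred to above.

 pcm.capture_copy {
     type file
     slave.pcm "hw"
     file "/tmp/speechd-capture.raw"
     format "raw"
 }

The raw PCM written this way can then be fed to a tool such as sox or lame for
conversion to wav or mp3.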
> 
> It's been about a year since I last used this technique, so I can't guarantee
> ALSA hasn't changed a bit; check the docs to be safe. I was using this technique
> to generate mp3 files from speechd and emacspeak reading large buffers of
> data in emacs. What I liked about it was that it was simple and independent of
> both the speech synth being used and the framework, i.e. speech-dispatcher,
> emacspeak, orca etc. 
> 
> The only issue I had was getting recording levels right; I had to fiddle with
> alsamixer settings a bit to get the levels where I wanted them.
> 
> HTH
> 
> Tim
> 
> 
> Halim Sahin writes:
>  > Hi Jacob and Luke,
>  > 
>  > @Jacob: do you need the audio data for further processing,
>  > or do you only need to create wave files from the synthesized text?
>  > 
>  > Maybe a good start would be to add a dummy audio output driver in speechd
>  > which writes its output data into a fifo.
>  > This wouldn't need any API work and could (in my opinion) be implemented
>  > really quickly and without much work!
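
A consumer of such a fifo could be as small as the following sketch. The fifo
path, the output file, and the assumption of raw PCM are all made up for
illustration, since no such output module exists yet.

#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *fifo_path = "/tmp/speechd-audio.fifo";   /* hypothetical name */

    /* Create the fifo if it does not exist yet; ignore EEXIST. */
    mkfifo(fifo_path, 0600);

    int fd = open(fifo_path, O_RDONLY);
    if (fd < 0) {
        perror("open fifo");
        return 1;
    }

    /* Append whatever raw PCM the hypothetical output module writes
       to a file for later processing. */
    FILE *out = fopen("captured.raw", "wb");
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, out);

    fclose(out);
    close(fd);
    return 0;
}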
>  > 
>  > @Luke:
>  > On Thu, Feb 04, 2010 at 12:04:00PM -0800, Luke Yelavich wrote:
>  > > I intend to write up some roadmap/specification documentation on
>  > > what I would like to work on with speech-dispatcher next. I think
>  > > first we get a 0.6.8 release out the door, then start thinking about
>  > > what needs major work, to ensure speech-dispatcher is still usable both
>  > > as a system service for those who want it and for the ever-changing
>  > > multi-user desktop environment. 
>  > 
>  > Making pulse optional for Ubuntu would solve this problem without a
>  > single new line of code.
>  > 
>  > > One such idea I have is to consider D-Bus as a client/server
>  > > communication transport layer. This could even go so far as to solve
>  > > the issue of using system-level clients like BrlTTY with a system-level
>  > > speech-dispatcher, which would then communicate with a user-level
>  > > speech-dispatcher, for example.
>  > 
>  > Luke! It's only an issue because you and others prefer the wrong audio
>  > system. I hope one day you start thinking about other things to do for
>  > speech-dispatcher than the ..... user session integration.
>  > 
>  > The decision to use pulseaudio (only) for Ubuntu produced tons of mails from
>  > many unhappy users on the orca/speechd/ubuntu accessibility mailing lists.
>  > Almost every day people ask how to use sd as a system service, etc.
>  > BTW: it works really well this way!
>  > 
>  > Starting parallel processes and letting them communicate through D-Bus
>  > will add more and more overhead to speechd and its dependencies.
>  > And it will only produce new issues and complexity without bringing
>  > really new features.
>  > 
>  > Many other audio apps need to be rewritten to be compatible with this
>  > new approach. Thanks to PA for that.
>  > 
>  > Just my two cents.
>  > 
>  > Halim
>  > PS: @Luke, it doesn't make sense to ignore users' wishes in this area.
>  > Read the mailing lists and talk with the people who are not able to use
>  > pulse with speechd.
>  > Talk also with other a11y projects and speechd users.
>  > 
>  > 
>  > _______________________________________________
>  > Speechd mailing list
>  > Speechd at lists.freebsoft.org
>  > http://lists.freebsoft.org/mailman/listinfo/speechd
> 
> -- 
> Tim Cross
> tcross at rapttech.com.au
> 
> There are two types of people in IT - those who do not manage what they 
> understand and those who do not understand what they manage.
> 


