
Re: [fluid-dev] DSP testing


From: Peter Hanappe
Subject: Re: [fluid-dev] DSP testing
Date: Thu, 01 Apr 2004 11:38:43 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031107 Debian/1.5-3

David Olofson wrote:
On Wednesday 31 March 2004 19.55, Peter Hanappe wrote:

1) When a noteon event comes in, the user thread calls the
soundfont object to initialise the voices. The soundfont object can
do the necessary housekeeping.


This suggests that "initializing SF voices" is not a real time safe operation. Sounds like pretty bad news to me... What am I missing?

Initializing the voices currently is RT safe. In future versions
I would like to give the soundfont more flexibility in the handling
of samples. I'm thinking in particular of streaming samples from
disk instead of keeping them all in RAM. If you initialize the
voices in the audio thread, the soundfont has no choice: it has to
be RT safe. If you do it in the user thread, the soundfont can
choose between dropping the note but replying fast, or loading the
sample from disk and possibly introducing a delay.
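As a rough sketch (invented names, not the current API), the
user-thread noteon path could look like this. Because it runs
outside the audio thread, the soundfont is free to block on disk
I/O here, or to return NULL and drop the note:

/* Hypothetical sketch, not the real FluidSynth API.  sf_get_sample()
 * and voice_fifo_push() are invented placeholders. */

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef struct { const short *data; size_t frames; } sample_t;
typedef struct { const sample_t *sample; int key; int vel; } voice_t;

/* Placeholder: a real soundfont would look the sample up in RAM or,
 * if may_block is true, stream it from disk.  NULL means "drop". */
static const sample_t *sf_get_sample(int bank, int preset, int key,
                                     bool may_block)
{
    static const short silence[64];
    static const sample_t s = { silence, 64 };
    (void)bank; (void)preset; (void)key; (void)may_block;
    return &s;
}

/* Placeholder for the fifo that hands voices to the audio thread. */
static bool voice_fifo_push(const voice_t *v) { (void)v; return true; }

/* User-thread noteon: free to block or allocate, unlike the audio
 * thread. */
static bool noteon(int bank, int preset, int key, int vel)
{
    const sample_t *s = sf_get_sample(bank, preset, key, true);
    if (s == NULL)
        return false;              /* soundfont chose to drop the note */
    voice_t v = { s, key, vel };
    return voice_fifo_push(&v);    /* the audio thread starts it later */
}

int main(void)
{
    printf("noteon accepted: %d\n", noteon(0, 0, 60, 100));
    return 0;
}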

What happens when you try to play these sounds from MIDI files using the internal sequencer? Isn't the sequencer running in the audio thread?

No. Neither the sequencer nor the MIDI input runs in the audio thread;
each has its own thread.

2) These voices are pushed into the "voice" fifo.
   Note that these voice objects are not taken from the voice
   objects that the synthesizer uses internally. They are taken
   from a voice heap, let's say.

3) All voices created in a single noteon event belong to the same
   voice group. The user thread can choose an ID number for this
   voice group, but the ID has to be unique (a counter will do).

4) In addition to the voices, the user thread sends a
   "voice_group_on[id]" event in a second fifo, the "event" fifo.


Why separate FIFOs for voice allocation and events? Do events come from a different (perhaps more real time) context? If so, what's the point, if the whole "note on" operation depends on stuff from both FIFOs?

Both the voice allocation and the voice_group_on event come from
the same context. Let me illustrate why I thought the voice_group_on
event is needed. Imagine there is no voice_group_on event. A noteon
creates 2 voices. The first voice is initialized and put into the fifo.
Then the audio thread wakes up and starts the first voice. When the
audio thread finishes, the second voice is initialized and put into
the fifo. The next time the audio thread wakes up, the second voice
will be started, but with a 64-sample delay relative to the first voice.

To avoid that, neither the first nor the second voice is started
until the voice_group_on event is received. This event is posted
by the user thread in the noteon function, right after creating and
initializing the voices.
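
Something like the following sketch shows the idea (names invented,
and plain arrays standing in for the two lock-free fifos so it stays
short and single-threaded): voices arriving in the voice fifo are only
parked, and a whole group is started in the same 64-sample cycle once
its voice_group_on event shows up in the event fifo.

#include <stdio.h>

#define MAX_PENDING 16

typedef struct { int group_id; int key; } voice_t;

/* "voice" fifo and "event" fifo, filled by the user thread. */
static voice_t voice_fifo[MAX_PENDING];  static int n_voices;
static int     event_fifo[MAX_PENDING];  static int n_events;

/* Voices received but not yet released by a voice_group_on event. */
static voice_t pending[MAX_PENDING];     static int n_pending;

static void start_voice(const voice_t *v)
{
    printf("starting voice key=%d (group %d)\n", v->key, v->group_id);
}

/* Audio thread, once per 64-sample cycle: park new voices, then start
 * every voice whose voice_group_on event has arrived. */
static void process_queues(void)
{
    for (int i = 0; i < n_voices && n_pending < MAX_PENDING; i++)
        pending[n_pending++] = voice_fifo[i];
    n_voices = 0;

    for (int e = 0; e < n_events; e++) {
        int i = 0;
        while (i < n_pending) {
            if (pending[i].group_id == event_fifo[e]) {
                start_voice(&pending[i]);
                pending[i] = pending[--n_pending];    /* compact */
            } else {
                i++;
            }
        }
    }
    n_events = 0;
}

int main(void)
{
    /* User thread: a noteon creates the first of two voices (group 1). */
    voice_fifo[n_voices++] = (voice_t){ 1, 60 };

    /* Audio thread wakes up in between: nothing is started yet. */
    process_queues();

    /* User thread finishes the noteon: second voice, then the event. */
    voice_fifo[n_voices++] = (voice_t){ 1, 72 };
    event_fifo[n_events++] = 1;               /* voice_group_on[1] */

    /* Next cycle: both voices of group 1 start together. */
    process_queues();
    return 0;
}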



//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---


