From: David Olofson
Subject: Re: [fluid-dev] DSP testing
Date: Wed, 31 Mar 2004 22:43:48 +0200
User-agent: KMail/1.5.4

On Wednesday 31 March 2004 19.55, Peter Hanappe wrote:
[...VVIDs...]
> It's true that it would be nice to have an ID of some sort.
> FluidSynth may create several voices for a single noteon so the ID
> should refer to all of them. Fortunately, I introduced the voice
> groups this morning :) So the ID could refer to the voice group.

That depends on what you want to do with them, I guess...

The application<->engine level communication in Audiality is actually 
about "channels" rather than voices - and a channel could be pretty 
much anything. (Currently, there are mono patches, poly patches and 
sequencers. They're all played and controlled using the same MIDI 
style protocol.)
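
A rough sketch of the idea (made-up names, not the actual Audiality 
API): every kind of channel implements the same MIDI style control 
entry point, so the application never cares what's behind it.

    /* Sketch only; invented names, not Audiality's real API. */
    typedef enum { CH_MONO_PATCH, CH_POLY_PATCH, CH_SEQUENCER } ChannelKind;

    typedef struct Channel Channel;
    struct Channel {
        ChannelKind kind;
        /* All kinds speak the same MIDI style control protocol. */
        void (*control)(Channel *ch, int event, int arg1, int arg2);
        void *state;    /* kind-specific data (patch, sequencer, ...) */
    };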


> That would imply scanning voices for group_id instead of an array
> access, as you propose.

That's just an implementation detail. The "VVID manager" is actually 
just a generic handle manager. You allocate a range of handles 
(VVIDs), and then you use them for addressing virtual objects in some 
other context. The point is that you can allocate, use, detach and 
reassign VVIDs without worrying about synchronization. You don't have 
to care whether or not a VVID is actually hooked up to a physical 
object on the other side, and most importantly, you never have to 
wait for actual voices to be released. Voice allocation/stealing can 
be handled entirely in the RT domain, and it's even possible to have 
fake voice objects for "unstealing" and stuff like that.
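
The core of such a handle manager is tiny. Something like this (a 
sketch with invented names, not the actual Audiality code):

    #define VVID_TABLE_SIZE 1024

    typedef struct Voice Voice;   /* physical voice, owned by the RT side */

    static Voice *vvid_map[VVID_TABLE_SIZE];  /* VVID -> voice, or NULL */
    static int vvid_next;                     /* next unallocated VVID */

    /* Control side: allocate a contiguous range of fresh VVIDs.
     * (A real manager would recycle ranges; this one just counts up.) */
    int vvid_alloc_range(int count)
    {
        if (vvid_next + count > VVID_TABLE_SIZE)
            return -1;            /* out of handles */
        vvid_next += count;
        return vvid_next - count;
    }

    /* RT side: attach or detach a physical voice (NoteOn, stealing). */
    void vvid_attach(int vvid, Voice *v)
    {
        vvid_map[vvid] = v;
    }

    /* RT side: resolve a VVID before applying a control event.
     * NULL just means "no physical voice right now" - the event is
     * dropped, and nobody ever has to wait. */
    Voice *vvid_resolve(int vvid)
    {
        return vvid_map[vvid];
    }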


> Considering the discussion so far, I would suggest the following:
>
> 1) When a noteon event comes in, the user thread calls the
> soundfont object to initialise the voices. The soundfont object can
> do the necessary housekeeping.

This suggests that "initializing SF voices" is not a real-time-safe 
operation. Sounds like pretty bad news to me... What am I missing? 
What happens when you try to play these sounds from MIDI files using 
the internal sequencer? Isn't the sequencer running in the audio 
thread?


> 2) These voices are pushed into the "voice" fifo.
>     Note that these voice objects are not taken from the voice
>     objects that the synthesizer uses internally. They are taken
>     from a voice heap, let's say.
>
> 3) All voices created in a single noteon event belong to the same
>     voice group. The user thread can choose an ID number for this
> voice group but the ID has to be unique (a counter will do).
>
> 4) In addition to the voices, the user thread sends a
>     "voice_group_on[id]" event in a second fifo, the "event" fifo.

Why separate FIFOs for voice allocation and events? Do events come 
from a different (perhaps more real time) context? If so, what's the 
point, if the whole "note on" operation depends on stuff from both 
FIFOs? (You can't start voices that haven't arrived yet, so unless 
whatever drives the "voice" FIFO is hard RT, NoteOn handling cannot 
be hard RT either.)


> 5) The audio thread, in every loop, picks up the initialized voices
> from the voice fifo and tries to schedule them for synthesis. The
> audio thread may have to kill voices if the maximum polyphony is
> reached. The audio thread actually copies the data from the
> initialised voice into a voice used internally by the synthesizer.
> That way the voice taken from the fifo can stay in the fifo for
> reuse.
>
> 6) The audio thread, in every loop, picks up the events from the
> event fifo. Upon reception of the "voice_group_on" event, the audio
> thread turns on all the voices in that group.
>
> 7) Other possible events include "noteoff[chan]",
>     "update_param[chan,num]", and a couple more.
>
> It turned out to be a bit of a long résumé but I hope it is
> understandable.

Sounds overly complex to me, and still has RT issues, unless I'm 
missing something...
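
To spell out how I read steps 5 and 6 (a sketch, invented names):

    typedef struct Voice Voice;            /* internal voice (opaque here) */
    typedef struct { int type; int group_id; } Event;
    enum { EV_VOICE_GROUP_ON = 1 };

    /* Primitives assumed to exist elsewhere: */
    extern Voice *voice_fifo_peek(void);   /* next prototype voice, or NULL */
    extern void voice_fifo_advance(void);  /* leave prototype for reuse */
    extern int event_fifo_pop(Event *ev);
    extern Voice *grab_internal_voice(void);   /* may steal a voice */
    extern void copy_voice(Voice *dst, const Voice *src);
    extern void start_voice_group(int group_id);
    extern void handle_other_event(const Event *ev);

    void audio_thread_tick(void)
    {
        Voice *proto;
        Event ev;

        /* Step 5: copy freshly initialized voices from the voice FIFO
         * into internal voices, stealing if the polyphony limit is hit. */
        while ((proto = voice_fifo_peek()) != NULL) {
            copy_voice(grab_internal_voice(), proto);
            voice_fifo_advance();
        }

        /* Step 6: drain the event FIFO; voice_group_on turns on every
         * voice copied in above that carries the given group id. */
        while (event_fifo_pop(&ev)) {
            if (ev.type == EV_VOICE_GROUP_ON)
                start_voice_group(ev.group_id);
            else
                handle_other_event(&ev);
        }
    }

Note that the NoteOn only completes once *both* FIFOs have delivered, 
which is why I don't see what the split buys.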


[...RT scheduling issues...]
>
> Agreed. (Well, Linux guarantees it. What other OS do we need? ;)

Good point! ;-)


> So FIFOs will be the way to go then.

Frankly, I think it's the only way to come up with solid code and 
still keep the audio thread hard RT safe.
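
For reference, the kind of FIFO I mean is the classic single-writer/ 
single-reader ring buffer. A minimal sketch (on SMP, real code needs 
memory barriers or C11 atomics around the index accesses):

    #define FIFO_SIZE 256   /* must be a power of two */

    typedef struct { int type; int arg1; int arg2; } Event;

    typedef struct {
        Event buf[FIFO_SIZE];
        volatile unsigned read;     /* written by the audio thread only */
        volatile unsigned write;    /* written by the user thread only */
    } EventFIFO;

    /* User thread: never blocks; returns 0 if the FIFO is full. */
    int fifo_push(EventFIFO *f, const Event *e)
    {
        if (f->write - f->read >= FIFO_SIZE)
            return 0;
        f->buf[f->write & (FIFO_SIZE - 1)] = *e;
        f->write++;
        return 1;
    }

    /* Audio thread: never blocks; returns 0 if the FIFO is empty. */
    int fifo_pop(EventFIFO *f, Event *e)
    {
        if (f->read == f->write)
            return 0;
        *e = f->buf[f->read & (FIFO_SIZE - 1)];
        f->read++;
        return 1;
    }

Since each index is written by exactly one thread, no locks are 
needed, and neither side can ever block the other.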

A buffered interface like this also has the bonus of allowing constant 
latency and sample-accurate timing when driving the synth from another 
thread. I do that in recent versions of Kobo Deluxe, to ensure that 
the buffer size *only* affects latency, not sound FX timing 
granularity. Helps a lot, especially on machines that can't handle 
low-latency audio.
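
The trick is just to timestamp events and have the audio callback 
render in slices, up to each due event. Roughly like this (a sketch, 
not the actual Kobo Deluxe code):

    typedef struct { unsigned when; int type; int arg; } TimedEvent;

    /* FIFO primitives, assumed to exist elsewhere: */
    extern int fifo_peek(TimedEvent *e);   /* next event, if any */
    extern void fifo_advance(void);        /* drop the peeked event */

    extern void apply_event(const TimedEvent *e);
    extern void render(float *out, unsigned frames);  /* mono, for brevity */

    static unsigned now;    /* frames rendered since start */

    /* The control thread stamps each event with an absolute frame time
     * (now + fixed latency), so buffer size affects only latency, never
     * the timing granularity. */
    void audio_callback(float *out, unsigned frames)
    {
        unsigned end = now + frames;
        TimedEvent e;

        while (now < end) {
            /* Render up to the next due event, or to end of buffer. */
            unsigned until = end;
            if (fifo_peek(&e) && e.when < until)
                until = (e.when > now) ? e.when : now;
            render(out, until - now);
            out += until - now;
            now = until;

            /* Apply everything that is due at this exact frame. */
            while (fifo_peek(&e) && e.when <= now) {
                apply_event(&e);
                fifo_advance();
            }
        }
    }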


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---




