Re: [fluid-dev] DSP testing


From: David Olofson
Subject: Re: [fluid-dev] DSP testing
Date: Thu, 1 Apr 2004 12:34:42 +0200
User-agent: KMail/1.5.4

On Thursday 01 April 2004 11.38, Peter Hanappe wrote:
> David Olofson wrote:
> > On Wednesday 31 March 2004 19.55, Peter Hanappe wrote:
> >>1) When a noteon event comes in, the user thread calls the
> >>soundfont object to initialise the voices. The soundfont object
> >> can do the necessary housekeeping.
> >
> > This suggests that "initializing SF voices" is not a real time
> > safe operation. Sounds like pretty bad news to me... What am I
> > missing?
>
> Initializing the voices is currently RT safe. In future versions
> I would like to give the soundfont more flexibility in the handling
> of samples. I'm thinking in particular of streaming samples instead
> of keeping all samples in RAM.

You need a pre-caching scheme to make that RT safe, or even seriously 
useful at all. Have a look at LinuxSampler, which does exactly this.
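
Roughly, something like this is what I have in mind (a totally
untested sketch; all names are made up, not LinuxSampler's or
FluidSynth's actual API): keep the first part of every sample
resident in RAM so a voice can start instantly from the audio
thread, while a disk thread streams the rest into a ring buffer.

#include <stdint.h>
#include <stddef.h>

#define CACHE_FRAMES 16384   /* attack portion kept in RAM per sample */

typedef struct {
    int16_t  cache[CACHE_FRAMES]; /* always resident; RT safe to read */
    size_t   total_frames;        /* full length of the sample on disk */
    int16_t *stream_buf;          /* ring buffer filled by a disk thread */
    size_t   stream_size;         /* ring buffer size in frames */
    volatile size_t write_pos;    /* frames delivered by the disk thread */
} cached_sample_t;

/* Audio thread: never touches the disk, never blocks. */
static inline int16_t sample_read(const cached_sample_t *s, size_t frame)
{
    if (frame < CACHE_FRAMES)
        return s->cache[frame];           /* resident attack portion */
    if (frame < s->write_pos)             /* streamed part, if ready */
        return s->stream_buf[frame % s->stream_size];
    return 0;                             /* not delivered yet; silence */
}

As long as the cached head covers the worst case disk latency, the
audio thread never has to wait for the streaming side.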


> If you initialise the voices in the
> audio thread, the soundfont has no choice: it has to be RT safe.
> If you do it in the user thread, the soundfont can choose between
> dropping a note but replying quickly, or loading the sample from
> disk and possibly introducing a delay.

I don't see why you have to bundle sound loading/selection (usually an 
"off-line" operation) with NoteOn events. This problem doesn't really 
exist in traditional hardware and software synths and samplers, as 
voice initialization is always RT safe as soon as a Program Change 
operation has finished. (Though, there's still a problem with MIDI 
samplers, as there's no standard way of telling when they're done 
loading...)
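
To illustrate what I mean (hypothetical code, not the FluidSynth
API): do all of the disk I/O at Program Change time, in the user
thread, so that by the time a NoteOn arrives, voice init only ever
touches data that is already in RAM.

#include <stdlib.h>
#include <string.h>

typedef struct {
    float *frames;           /* fully resident sample data */
    size_t n_frames;
    volatile int ready;      /* set once loading has finished */
} program_data_t;

/* User thread: may block on disk for as long as it likes. */
void program_change(program_data_t *p, size_t n_frames)
{
    p->ready = 0;
    free(p->frames);
    p->frames = malloc(n_frames * sizeof *p->frames);
    if (!p->frames)
        return;              /* out of memory; leave 'ready' at 0 */
    /* ...read the sample data from disk into p->frames here... */
    memset(p->frames, 0, n_frames * sizeof *p->frames);
    p->n_frames = n_frames;
    p->ready = 1;            /* from here on, NoteOn is RT safe */
}

/* Audio thread: no allocation, no I/O; just point a voice at data
 * that is guaranteed to be resident already. */
int note_on(const program_data_t *p, const float **out, size_t *out_n)
{
    if (!p->ready)
        return -1;           /* drop the note */
    *out = p->frames;
    *out_n = p->n_frames;
    return 0;
}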


> > What happens when you try to play these sounds from MIDI files
> > using the internal sequencer? Isn't the sequencer running in the
> > audio thread?
>
> No. Neither the sequencer nor the MIDI input runs in the audio
> thread. They have their own thread.

Why? It just complicates things in my experience...


[...]
> > Why separate FIFOs for voice allocation and events? Do events
> > come from a different (perhaps more real time) context? If so,
> > what's the point, if the whole "note on" operation depends on
> > stuff from both FIFOs?
>
> Both the voice allocation and the voice_group_on event come from
> the same context. Let me illustrate why I thought the
> voice_group_on event is needed. Imagine there is no voice_group_on
> event. A noteon creates 2 voices. The first voice is initialized
> and put into the FIFO. Then the audio thread wakes up and starts
> the first voice. When the audio thread finishes, the second voice
> is initialized and put into the FIFO. The next time the audio thread
> wakes up, the second voice will be started, but with a 64-sample
> delay relative to the first voice.
>
> To avoid that, both the first and the second voice aren't started
> until the voice_group_on event is received. This event is posted
> by the user thread in the noteon function, right after creating and
> initialising the voices.

I see. I avoid this problem entirely in Audiality by using timestamped 
events, and by running the sequencer and "patch plugins" (what turns 
high level MIDI style events into voice level operations) in the 
audio thread. If I start N voices with the same timestamp, they're 
guaranteed to start playing at the very same sample. No explicit sync 
needed, since everything's in one thread.
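
In highly simplified form (not the actual Audiality code), the idea
looks something like this: events carry a sample timestamp, and the
audio thread starts every voice whose event falls inside the current
buffer at exactly the frame the timestamp says, so two voices with
the same timestamp can't possibly drift apart.

#include <stdint.h>
#include <stddef.h>

#define BUFFER_FRAMES 64

typedef struct {
    uint32_t timestamp;      /* absolute frame at which to start */
    int      voice;          /* which voice to start */
} event_t;

static void start_voice(int voice, uint32_t offset)
{
    /* ...begin rendering this voice 'offset' frames into the buffer... */
    (void)voice;
    (void)offset;
}

/* Called once per audio buffer, in the audio thread. */
void process_events(const event_t *ev, size_t n_events,
                    uint32_t buffer_start)
{
    for (size_t i = 0; i < n_events; i++) {
        uint32_t t = ev[i].timestamp;
        if (t >= buffer_start && t < buffer_start + BUFFER_FRAMES)
            start_voice(ev[i].voice, t - buffer_start);
    }
}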

That said, the sync issue still applies when playing sound FX and 
stuff from another thread. However, the "standard" solution is to 
implement multichannel sound effects as instrument patches, so they 
can be controlled with single events.
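
Continuing the sketch above (again hypothetical, not the actual
Audiality patch plugin interface): a patch just expands one high
level event into several voice starts with the same timestamp, so
the channels of the effect stay sample locked with no extra sync
protocol.

#include <stdint.h>

#define FX_VOICES 4

void patch_fx_on(uint32_t timestamp, const int voices[FX_VOICES],
                 void (*queue_voice_start)(uint32_t t, int voice))
{
    for (int i = 0; i < FX_VOICES; i++)
        queue_voice_start(timestamp, voices[i]);  /* same t for all */
}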

In future versions, I'll allow the use of scripting in the real time 
context as well. That should make it possible to have most of the 
sound FX logic run in audio context, so the game logic can send 
simple commands with no need for "group sync" and the like. (Well, 
that could be implemented as well if desired, of course...)


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
|  Free/Open Source audio engine for games and multimedia.  |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---




