
Re: [fluid-dev] DSP testing


From: Tim Goetze
Subject: Re: [fluid-dev] DSP testing
Date: Wed, 31 Mar 2004 19:16:18 +0200 (CEST)

[Peter Hanappe]

>Tim Goetze wrote:
>> [Peter Hanappe]
>>
>>>I overlooked the case where the audio thread can be interrupted, which
>>>can happen if fluidsynth runs without privileges. You are quite
>>>right that that case poses a problem. A complication I see with the
>>>fifos, though, is that when the user thread has to kill a voice, it sends
>>>the 'kill' request to the audio thread and then has to wait for the
>>>audio thread to confirm the request. So you have to introduce
>>>synchronization even if you use lock-free FIFOs.
>>
>>
>> with the FIFO scheme proper, the note <-> voice mapping is done
>> entirely by the audio thread. imagine the audio thread reading a
>> complete MIDI stream and acting on all noteon/off, controller etc
>> events, calling the equivalent of fluid_synth_noteon() itself.
>>
>> if the public interface (the user thread) wants to start a note, the
>> respective function simply writes to the FIFO and lets the audio
>> thread do the rest of the work.
>>
>
>The problem is that the audio thread cannot handle it all. The
>noteon function calls upon the soundfont object to initialize the
>voice. The soundfont object may do all kinds of non real-time stuff,
>in particular loading files. So that has to be done by the user thread.
>A solution would be to make the FIFO a stream of initialised voice
>objects instead of noteon events. And then there could be a second
>stream for events that modify the state of the voices (basically noteoff
>and update_param). I'll take a look at the code to see how much change
>that would involve.

i suspected it would not be all that easy. the initialized voice
stream is OK, i guess. i usually write pointers, not instances, to the
FIFO in such cases.

we'd need another stream for 'killed' voices in this scheme; actually,
it doesn't seem so simple to do anymore.

do you think that instead it would be feasible to split the voice
initialization work into non-RT and RT parts?

this way, the user thread could ask the soundfont to prepare the
samples and do whatever else non-RT needs to be done, without actually
touching the voice struct. after this call returns, the user thread
writes the noteon to the stream, and the audio thread then asks the
soundfont to do the rest of the setup, knowing for sure that this call
is RT-compliant.

> > done right once, we'll never have to care about locking/sync issues
> > anymore.
>
>If you don't mind we can continue this discussion (if it's not too
>boring). It should be done right.

it's not at all boring, and i absolutely agree that it should be done
right.

cheers,

tim



