Re: [fluid-dev] Thread safety


From: David Henningsson
Subject: Re: [fluid-dev] Thread safety
Date: Sat, 06 Jun 2009 20:41:15 +0200
User-agent: Thunderbird 2.0.0.21 (X11/20090409)

address@hidden wrote:
> This could be handled by identifying which "thread" is the synthesis
> thread (first call to fluid_synth_one_block).  Any function which might
> need to synchronize in the multi-thread case, could check if the calling
> thread is the synthesis thread or not and process the events immediately
> or queue them accordingly.  This would automatically take care of the
> single thread and multi-thread cases, without adding much additional
> overhead.
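
Just so we're talking about the same thing, I picture that check roughly
like this (a pthreads sketch; the noteon_* helpers and the _safe suffix
are made up):

    #include <pthread.h>

    static pthread_t synth_thread;       /* identity of the synthesis thread */
    static int       synth_thread_known = 0;

    /* called at the top of fluid_synth_one_block() -- first caller wins
       (a real version would set the flag atomically) */
    static void remember_synth_thread (void)
    {
      if (!synth_thread_known) {
        synth_thread = pthread_self ();
        synth_thread_known = 1;
      }
    }

    /* pattern for any API function that may need synchronization */
    int fluid_synth_noteon_safe (fluid_synth_t *synth, int chan, int key, int vel)
    {
      if (synth_thread_known && pthread_equal (pthread_self (), synth_thread))
        return noteon_process_now (synth, chan, key, vel); /* we ARE the synth */

      return noteon_enqueue (synth, chan, key, vel);       /* other thread: queue */
    }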

I don't know if it is a big issue, but what will happen if the thread
that calls fluid_synth_one_block changes?

(Imagine a multi-track sequencer with several virtual instruments,
libfluidsynth being one or more of them, where the sequencer has a
few worker threads that handle rendering of whatever lies first in
their queues.)

> Yes.  The main "synthesis" thread, would be the audio thread, since it
> ultimately calls fluid_synth_one_block().  The MIDI thread could be
> separate, but it could also be just a callback, as long as it is
> guaranteed not to block.
> 
> Main synthesis thread's job:
> 
> 1. Process incoming MIDI events (via queues or directly from MIDI driver
> callback, i.e., MIDI event provider callbacks).
> 
> 2. Synthesize active voices.
> 
> 3. Mix each synthesized voice block into the output buffer.
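
So if I read that correctly, the per-block job is roughly this (sketch
only, with illustrative names, not the actual internals):

    #define FLUID_BUFSIZE 64   /* fluidsynth's internal block size */

    static void one_block_sketch (fluid_synth_t *synth, float *out)
    {
      float voice_buf[FLUID_BUFSIZE];
      midi_event_t ev;

      /* 1. process incoming MIDI events (queued, or pulled from the
         MIDI event provider callbacks) */
      while (event_queue_pop (synth, &ev))
        apply_midi_event (synth, &ev);

      /* 2. + 3. synthesize each active voice and mix its block into
         the output buffer */
      for (fluid_voice_t *v = first_active_voice (synth); v != NULL;
           v = next_active_voice (synth, v)) {
        render_voice (v, voice_buf, FLUID_BUFSIZE);
        mix_into (out, voice_buf, FLUID_BUFSIZE);
      }
    }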

Ah, that explains it. It also means that you've come to the same
conclusion as I did and that we're working towards the same goal; we've
just taken different paths to get there. I've used the sequencer as a
buffer to achieve this, but that is - as said before - an intermediate
solution until this is fixed internally in the synth, based on the
assumption that Swami, QSynth etc. expect the synth to work that way.

By the way, this reminds me that the sample timer callback should be
changed to trigger at the beginning of fluid_synth_one_block() instead
of at the end; that way we save 64 samples of latency. I'll fix this
ASAP.

> #2 is where other worker synthesis threads could be used in the
> multi-core case.  By rendering voices in parallel with the main
> synthesis thread.  The main thread would additionally be responsible for
> mixing the resulting buffers into the output buffer as well as signaling
> the worker thread(s) to start processing voices.
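
Something like this, I suppose (a pthreads sketch; the context struct
and helper names are invented):

    #include <pthread.h>

    typedef struct {
      pthread_mutex_t  lock;
      pthread_cond_t   cond;
      int              work_ready;
      fluid_voice_t  **voices;   /* slice assigned by the main thread */
      int              count;
      float          **bufs;     /* one private buffer per voice      */
    } worker_ctx_t;

    static void *voice_worker (void *arg)
    {
      worker_ctx_t *w = (worker_ctx_t *) arg;

      for (;;) {
        pthread_mutex_lock (&w->lock);
        while (!w->work_ready)
          pthread_cond_wait (&w->cond, &w->lock);
        w->work_ready = 0;
        pthread_mutex_unlock (&w->lock);

        /* render our slice of the active voices into private buffers */
        for (int i = 0; i < w->count; i++)
          render_voice (w->voices[i], w->bufs[i], FLUID_BUFSIZE);

        signal_work_done (w);    /* main thread then mixes w->bufs[] */
      }
      return NULL;
    }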

Okay. It's hard to know how much we will gain from having more threads
in this case though.

>> A problem with separating note-on events from the rest is that you must
>> avoid reordering. If a note-off immediately follows the note-on, the
>> note-off must not be processed before the note-on. I guess this is
>> solvable though, it is just another thing that complicates matters a bit.
> If the note-on and off events are originating from the same thread, then
> they are guaranteed to be processed in order, since they would be queued
> via a FIFO or processed immediately if originating from the synthesis
> thread.
> 
> I changed my mind somewhat from what I said before though, that the
> fluid_voice_* related stuff should only be called from within the
> synthesis thread.  Instead, what I meant, was that the fluid_voice_*
> functions should only be called from a single thread for voices which it
> creates.
> 
> It seems like there are two public uses of the fluid_voice_* functions:
> to create/start voices in response to the SoundFont loader's note-on
> callback, and to modify a voice's parameters in realtime.
> 
> I'm still somewhat undecided as to whether there would be any real
> advantage to creating voices outside of the synthesis thread.  The
> note-on callback is potentially external, user-provided code, which might
> not be very well optimized and therefore might be best called from a
> lower-priority thread (the MIDI thread, for example) which calls the
> note-on callbacks and queues the resulting voices.  Perhaps handling both
> cases (called from the synthesis thread or a non-synthesis thread) is the
> answer.  Voices can be treated as self-contained structures up to the
> point when they are started.
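
If I understand the "self-contained until started" idea, that would
permit something like this (hypothetical names throughout):

    /* runs in e.g. the MIDI thread, not the synthesis thread */
    void noteon_outside_synth_thread (fluid_synth_t *synth,
                                      int chan, int key, int vel)
    {
      /* the SoundFont loader's note-on callback may be slow user code,
         so it fits in a lower-priority thread; the voice it builds
         touches no shared synth state until it is started */
      fluid_voice_t *v = sfloader_noteon_create_voice (synth, chan, key, vel);

      if (v != NULL)
        voice_start_fifo_push (synth, v);  /* synthesis thread starts it */
    }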

I'm following now, thanks for the explanation. If I were you I would
start with the simpler solution (which I assume is to keep everything
in the synthesis thread) and implement the rest in a later step if the
simpler solution turns out to be problematic.

> The current implementation of being able to modify existing voice
> parameters is rather problematic though, when being done from a separate
> thread.  Changes being performed would need to be synchronized
> (queued).  In addition, using the voice pointer as the ID of the voice
> could be an issue, since there is no guarantee that the voice is the
> same as when it was created (could have been stopped and re-allocated
> for another event).  

I'm not familiar with the public voice API, but if there is a function
that gives out voice pointers, and no way to be certain whether such a
pointer is still valid at a later point in time, I would call that a
serious public API design flaw. It could be fixed by providing a
callback when a voice is deallocated, or something.
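
Roughly like this (a hypothetical addition, not existing API):

    /* hypothetical: let the application learn when a voice pointer
       becomes invalid */
    typedef void (*fluid_voice_freed_cb_t) (fluid_voice_t *voice,
                                            void *user_data);

    void fluid_synth_set_voice_freed_callback (fluid_synth_t *synth,
                                               fluid_voice_freed_cb_t cb,
                                               void *user_data);

    /* the synth would call cb(voice, user_data) whenever a voice is
       stopped or re-allocated for another note, so the application
       knows its old pointer has become stale */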

> I think we should therefore deprecate any public
> code which accesses voices directly using pointers, for the purpose of
> modifying parameters in realtime.  

Either that, or internally queue the calls if they come from the wrong
thread, just like the MIDI events.

> Indeed.  As I wrote above, functions could detect which thread they
> are being called from and act accordingly (queue or execute directly).
> If for some reason a queue is maxed out though, I suppose the function
> should return a failure code, though it risks being overlooked.

Or block/busy-wait until space in the queue is available?
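
With a fixed-size FIFO between the threads, both behaviours are cheap to
offer. A sketch (single producer/single consumer assumed, and "volatile"
standing in for proper memory barriers):

    #include <unistd.h>   /* usleep */

    #define QUEUE_SIZE 1024            /* must be a power of two */

    typedef struct { int type, chan, param1, param2; } midi_event_t; /* placeholder */

    typedef struct {
      midi_event_t      buf[QUEUE_SIZE];
      volatile unsigned head;          /* written by the producer */
      volatile unsigned tail;          /* written by the consumer */
    } event_queue_t;

    static int queue_push (event_queue_t *q, const midi_event_t *ev)
    {
      unsigned next = (q->head + 1) & (QUEUE_SIZE - 1);

      if (next == q->tail)
        return -1;                     /* full: report failure ... */

      q->buf[q->head] = *ev;
      q->head = next;
      return 0;
    }

    /* ... or let a non-realtime caller spin until there is room */
    static void queue_push_blocking (event_queue_t *q, const midi_event_t *ev)
    {
      while (queue_push (q, ev) != 0)
        usleep (100);                  /* never do this in the audio thread */
    }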

>> I would really like some feedback from the community about these
>> changes, to ensure they don't change the responsiveness or latency, or
>> mess anything else up. I've tested it with my MIDI keyboard here and I
>> didn't notice any difference, but my setup is not optimal.
> Sounds great!  It would be nice to put together a test suite for
> FluidSynth, for testing rendering, latency and performance.  A simple
> render to file case with a pre-determined MIDI sequence would be a nice
> benchmark and synthesis verification tool.  I'll look over your changes
> at some point soon and provide some feedback.

I'm looking forward to having a test suite, but I hope someone else will
do it ;-)

> My responses keep getting larger..  I'll be putting my words into code
> soon.

That sounds nice. Are you planning a new structure for storing a MIDI
event in the queue, or will you use an existing one? I have been working
with the structures in event.h and I can't really recommend them for
this task (they seem more tied to the sequencer than to the synth).
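
For the queue entries I would picture something flat, along these lines
(just a suggestion):

    /* flat, fixed-size, no heap allocation, so it can be copied
       straight into the FIFO slots */
    typedef struct
    {
      unsigned char type;     /* NOTE_ON, NOTE_OFF, CONTROL_CHANGE, ... */
      unsigned char chan;     /* MIDI channel                           */
      short         param1;   /* key / controller number / program      */
      short         param2;   /* velocity / controller value            */
    } queued_midi_event_t;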

// David




