
Re: [fluid-dev] Thread safety


From: David Henningsson
Subject: Re: [fluid-dev] Thread safety
Date: Sun, 07 Jun 2009 10:12:29 +0200
User-agent: Thunderbird 2.0.0.21 (X11/20090409)

address@hidden wrote:
> Quoting David Henningsson <address@hidden>:
>> address@hidden wrote:
>>> This could be handled by identifying which "thread" is the synthesis
>>> thread (first call to fluid_synth_one_block).  Any function which might
>>> need to synchronize in the multi-thread case, could check if the calling
>>> thread is the synthesis thread or not and process the events immediately
>>> or queue them accordingly.  This would automatically take care of the
>>> single thread and multi-thread cases, without adding much additional
>>> overhead.
>>
>> I don't know if it is a big issue, but what will happen if the thread
>> that calls fluid_synth_one_block changes?
>>
>> (Imagine a multi-track sequencer with several virtual instruments,
>> libfluidsynth being one or more of them, and that the sequencer has a
>> few worker threads that handle rendering of whatever lies first in
>> their queue.)
>>
> 
> Good point.  I was assuming that fluid_synth_one_block() would only be
> executed by one thread at a time per FluidSynth instance.  I hadn't
> thought of the case that it might get executed serially, but by separate
> threads or by a new thread (if the old thread was killed for example). 
> That scenario seems like a case, though, where MIDI events would also be
> posted by the same thread calling fluid_synth_one_block().  If designed
> so that fluid_synth_one_block() would function correctly, regardless of
> what thread calls it, that would just leave the task of determining if
> an event should be queued or not.  This behavior could be configurable
> (for example, a function to instruct FluidSynth to never queue events,
> with the caller taking care that events are serialized with respect to
> fluid_synth_one_block()).  Worst case is that an event gets queued when
> it doesn't need to be.  Not a big deal really, just non-optimal.

Right. As long as we're merely pushing events this will not be a
problem. But what if we're also reading information back? E.g., assume
that a libfluidsynth user first calls fluid_synth_program_change (which
gets queued) and immediately afterwards calls fluid_synth_get_program?

This is not criticism of course, I'm just trying to think every case
through. Perhaps we're better off with a configuration parameter
(synth.threadsafe=false/true) - it should be true by default, but can be
set to false for people using the synth in a single-threaded way.
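
To make this concrete, here's a rough sketch of the check we've been
discussing, with the synth.threadsafe switch on top. None of these
names are actual FluidSynth API, and I'm using glib threads purely for
illustration:

/* Sketch only -- hypothetical names, glib threads for illustration. */
#include <glib.h>

typedef struct {
    GThread *synth_thread;  /* set on the first one_block() call */
    gboolean threadsafe;    /* the proposed synth.threadsafe setting */
} sketch_synth_t;

static void sketch_process_now(sketch_synth_t *s, void *ev)
{ (void)s; (void)ev; /* apply the event immediately */ }

static void sketch_enqueue(sketch_synth_t *s, void *ev)
{ (void)s; (void)ev; /* push the event onto the queue */ }

static void sketch_send_event(sketch_synth_t *s, void *ev)
{
    /* With synth.threadsafe=false the caller promises to serialize
       everything against one_block(), so we never queue. */
    if (!s->threadsafe || g_thread_self() == s->synth_thread)
        sketch_process_now(s, ev);
    else
        sketch_enqueue(s, ev);  /* worst case: queued needlessly */
}

static void sketch_one_block(sketch_synth_t *s)
{
    if (s->synth_thread == NULL)  /* first caller becomes the synthesis thread */
        s->synth_thread = g_thread_self();
    /* ... drain the queue, then render 64 samples ... */
}

Note that a sketch like this still doesn't solve the read-back problem
above: with the program change queued, fluid_synth_get_program() would
return the old value until the queue is drained.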

> I should definitely review your changes some more to get a better idea
> of the details.  I'm still getting familiar with the FluidSynth code
> base and the sequencer is one of the areas I'm less knowledgeable in. 
> Could you explain some more what you mean by "sequencer as a buffer"? 

I should have said "use the sequencer as a queue" since that's the term
we've used up to now. But as a short summary, here's the MIDI thread
call stack:

midi driver -> midi router -> fluid_sequencer_add_midi_event_to_buffer
-> fluid_sequencer_send_at (which stores the event in a queue/buffer).

And here's the audio thread's call stack:

fluid_synth_one_block -> fluid_sample_timer_process ->
fluid_sequencer_process (which pops events from the queue) ->
fluid_seq_fluidsynth_callback -> fluid_synth_noteon (and friends).

> This change was made to allow for the sample timer to be used as the
> source of events, correct?  Thereby synchronizing the MIDI with the audio?

Right. Hopefully my call stack example clarifies things for you.

>> By the way, this reminds me that the sample timer callback should be
>> changed to trigger at the beginning of fluid_synth_one_block() instead
>> of at the end; that way we will save 64 samples of latency. I'll fix
>> this ASAP.

Fixed. (Although I didn't notice any difference in latency myself ;-) )
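
In code terms the change amounts to this (a sketch, not the literal
diff):

/* Sketch: firing the sample timers before rendering means an event
   scheduled for "now" affects this block instead of the next one,
   64 samples later. */
typedef struct sketch_synth sketch_synth_t;

static void sketch_timers(sketch_synth_t *s) { (void)s; /* fire due timers */ }
static void sketch_render(sketch_synth_t *s) { (void)s; /* 64 samples */ }

static void sketch_block(sketch_synth_t *s)
{
    sketch_timers(s);  /* moved here from after sketch_render() */
    sketch_render(s);
}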

>>> #2 is where other worker synthesis threads could be used in the
>>> multi-core case.  By rendering voices in parallel with the main
>>> synthesis thread.  The main thread would additionally be responsible for
>>> mixing the resulting buffers into the output buffer as well as signaling
>>> the worker thread(s) to start processing voices.
>>
>> Okay. It's hard to know how much we will gain from having more threads
>> in this case though.
> 
> It could potentially double the number of voices you could have on
> dual-core CPUs, quadruple it on quad-core, etc.  The voice synthesis is
> a huge part of the CPU consumption.  Currently the synthesis only runs
> on one CPU core, effectively half of the total system CPU power on many
> CPUs today.

Okay. Should we have a parameter for the number of worker threads as
well (and try to autodetect it if the parameter is not supplied)?
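
If we do try to autodetect, it could be as simple as this on
Linux/glibc (sketch only; neither this function nor any related
setting exists in FluidSynth today):

/* Sketch: derive a default worker count from the online CPU count.
   sysconf(_SC_NPROCESSORS_ONLN) is a glibc extension available on
   many unices, not strictly portable. */
#include <unistd.h>

static int sketch_default_workers(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1)
        ncpu = 1;
    /* The main synthesis thread renders voices too, so spawn one
       worker per remaining core. */
    return (int)(ncpu - 1);
}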

>> I'm following now, thanks for the explanation. If I were you I would
>> start with the simpler solution (which I assume will be to keep
>> everything in the synthesis thread) and implement the rest in a later
>> step if the simpler solution turns out to be problematic.
> 
> Agreed.  I'm going to try and keep it simple for the moment.  Since MIDI
> events are already conveniently encapsulated in fluid_midi_event_t
> structures, it is really easy to just queue these, for the majority of
> events.  There are likely some other types of events that may need
> structure definitions added.  The queue could just end up being an array
> of unions for the various potential types of events.

As long as we can have pointers inside, I'm okay with that. If not,
we're preventing ourselves from receiving sysex (which can be of any
length) in the future.
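
In other words, something like this (illustrative only, not a proposed
final layout):

/* Sketch of a union-of-events queue entry.  The pointer member is
   what keeps the door open for variable-length data such as sysex. */
typedef enum {
    SKETCH_EVENT_MIDI,
    SKETCH_EVENT_SYSEX
} sketch_event_type_t;

typedef struct {
    sketch_event_type_t type;
    union {
        struct { int chan; int param1; int param2; } midi;
        struct { unsigned char *data; int len; } sysex;  /* any length */
    } u;
} sketch_event_t;

The catch with a pointer in a queued event is ownership: whoever pops
the event has to free the sysex buffer, and ideally that free shouldn't
happen in the audio thread.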

>> I'm not familiar with the public voice API, but if there is one function
>> that gives out voice pointers, and there is no way to be certain whether
>> that voice pointer can be used or not at a later point in time, I would
>> call it a serious public API design flaw. It could be fixed by providing
>> a callback when a voice is deallocated, or something.
> 
> I think it is safe to assume that the only users of the public
> fluid_voice_* API are those writing their own SoundFont loaders, for
> custom instrument synthesis using FluidSynth.  This is likely a very
> small number of programs.  The SoundFont loader API is one area which I
> think could use an overhaul anyway, to add support for things like 24
> bit audio and sample streaming.  Perhaps this could be one area where
> API compatibility is broken between 1.x and 1.1.  I think it would be a
> minimal set of software.  As far as I know, only Swami uses this feature
> currently.

Breaking the API would mean that Debian and others would have to
distribute two versions of the library. I would be glad if we could
avoid that.

>>> I think we should therefore deprecate any public
>>> code which accesses voices directly using pointers, for the purpose of
>>> modifying parameters in realtime.
>>
>> Either that, or internally queue them if they're called from the wrong
>> thread, just as the midi events.
> 
> But you still end up with the case where the pointer value can't be
> trusted as to which voice it belongs to.  Keeping the functions used
> for instantiating voices, and replacing those used for controlling
> voices in real time with variants that take a voice ID instead of a
> pointer, would solve it.

But do we really have to change the API for that? Would it be possible
to pretend we're handing out voice pointers (to keep backwards
compatibility) but in reality hand out IDs which we map to pointers
internally?
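
Something along these lines is what I'm imagining (sketch only; none
of this is actual FluidSynth code):

/* Sketch: hand out IDs disguised as pointers, so the public
   signatures can stay the same.  A generation counter catches stale
   handles; 16 bits of it is plenty for a sketch. */
#include <stdint.h>

#define SKETCH_MAX_VOICES 1024

typedef struct {
    unsigned int serial;  /* bumped every time the slot is reused */
    void *voice;          /* the real voice, NULL when free */
} sketch_voice_slot_t;

static sketch_voice_slot_t sketch_voices[SKETCH_MAX_VOICES];

/* Encode slot index + serial into a fake "pointer" (index is offset
   by one so a valid handle is never NULL). */
static void *sketch_voice_handle(unsigned int idx)
{
    uintptr_t h = ((uintptr_t)(sketch_voices[idx].serial & 0xffffu) << 16)
                  | (idx + 1);
    return (void *)h;
}

/* Decode and validate; returns NULL if the voice has been recycled. */
static void *sketch_voice_deref(void *handle)
{
    uintptr_t h = (uintptr_t)handle;
    unsigned int idx = (unsigned int)(h & 0xffffu) - 1u;

    if (idx >= SKETCH_MAX_VOICES
        || (sketch_voices[idx].serial & 0xffffu)
           != (unsigned int)((h >> 16) & 0xffffu))
        return NULL;  /* stale handle */
    return sketch_voices[idx].voice;
}

A stale handle then dereferences to NULL instead of pointing into
somebody else's voice, which is exactly the guarantee a raw pointer
can't give us.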

// David



