
Re: [fluid-dev] Thread safety


From: josh
Subject: Re: [fluid-dev] Thread safety
Date: Sat, 06 Jun 2009 18:26:55 -0400
User-agent: Internet Messaging Program (IMP) H3 (4.1.6)

Quoting David Henningsson <address@hidden>:

> address@hidden wrote:
>> This could be handled by identifying which "thread" is the synthesis
>> thread (first call to fluid_synth_one_block).  Any function which might
>> need to synchronize in the multi-thread case could check if the calling
>> thread is the synthesis thread or not and process the events immediately
>> or queue them accordingly.  This would automatically take care of the
>> single thread and multi-thread cases, without adding much additional
>> overhead.

> I don't know if it is a big issue, but what will happen if the thread
> that calls fluid_synth_one_block changes?
>
> (Imagine a multi-track sequencer with several virtual instruments,
> libfluidsynth being one or more of them, and that the sequencer has a
> few worker threads that handle rendering of whatever lies first in
> their queues.)


Good point. I was assuming that fluid_synth_one_block() would only be executed by one thread at a time per FluidSynth instance. I hadn't thought of the case where it might get executed serially but by separate threads, or by a new thread (if the old thread was killed, for example). In that scenario, though, MIDI events would likely also be posted by the same thread that calls fluid_synth_one_block(). If fluid_synth_one_block() is designed to function correctly regardless of which thread calls it, that just leaves the task of determining whether an event should be queued or not. This behavior could be configurable (for example, a function to instruct FluidSynth to never queue events, where the caller takes care that events are serialized with respect to fluid_synth_one_block()). The worst case is that an event gets queued when it doesn't need to be. Not a big deal really, just non-optimal.
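
To make that concrete, here is the kind of check I have in mind; just a sketch assuming POSIX threads, and all names are hypothetical:

#include <pthread.h>

/* Hypothetical sketch of the "is this the synthesis thread?" check.
 * The synthesis thread is re-recorded on every call to
 * fluid_synth_one_block(), so it keeps working even if a different
 * worker thread renders the next block. */
static pthread_t synth_thread;
static int synth_thread_set = 0;

/* Call at the top of fluid_synth_one_block(). */
static void record_synth_thread(void)
{
  synth_thread = pthread_self();
  synth_thread_set = 1;
}

/* Any API function that might need to synchronize asks this:
 * returns non-zero if we may apply the event immediately, zero if
 * the event should be queued instead. */
static int in_synth_thread(void)
{
  return synth_thread_set && pthread_equal(synth_thread, pthread_self());
}

Re-recording the thread on every call would also cover your worker-thread scenario above.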

>> Yes.  The main "synthesis" thread would be the audio thread, since it
>> ultimately calls fluid_synth_one_block().  The MIDI thread could be
>> separate, but it could also be just a callback, as long as it is
>> guaranteed not to block.
>>
>> Main synthesis thread's job:
>>
>> 1. Process incoming MIDI events (via queues or directly from the MIDI
>> driver callback, i.e., MIDI event provider callbacks).
>>
>> 2. Synthesize active voices.
>>
>> 3. Mix each synthesized voice block into the output buffer.
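
To make the division of labor concrete, a rough structural sketch of those three steps; all types and helper names here are hypothetical, not the actual FluidSynth internals:

#define BLOCK_SIZE 64            /* FluidSynth renders 64-sample blocks */
#define MAX_VOICES 256

/* Hypothetical, simplified types for illustration only. */
typedef struct { int chan, status, data1, data2; } midi_event_t;
typedef struct { int active; float buf[BLOCK_SIZE]; } voice_t;
typedef struct { voice_t voices[MAX_VOICES]; } synth_t;

int  midi_queue_pop(synth_t *s, midi_event_t *ev);   /* hypothetical */
void apply_midi_event(synth_t *s, const midi_event_t *ev);
void voice_render(voice_t *v);

/* The three steps of the main synthesis thread, in order. */
void one_block(synth_t *s, float *out)
{
  midi_event_t ev;

  /* 1. Drain queued MIDI events before rendering anything. */
  while (midi_queue_pop(s, &ev))
    apply_midi_event(s, &ev);

  /* 2. Synthesize each active voice into its own block buffer. */
  for (int v = 0; v < MAX_VOICES; v++)
    if (s->voices[v].active)
      voice_render(&s->voices[v]);

  /* 3. Mix the per-voice blocks into the output buffer. */
  for (int i = 0; i < BLOCK_SIZE; i++)
    out[i] = 0.0f;
  for (int v = 0; v < MAX_VOICES; v++)
    if (s->voices[v].active)
      for (int i = 0; i < BLOCK_SIZE; i++)
        out[i] += s->voices[v].buf[i];
}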

> Ah, that explains it. Mainly, it means that you've come to the same
> conclusion as I did and that we're working towards the same goal; we've
> just taken different paths to get there. I've used the sequencer as a
> buffer to achieve this, but that is, as said before, a somewhat
> intermediate solution until this is fixed internally in the synth, based
> on the assumption that Swami, QSynth etc. expect the synth to work that
> way.


Nice to know that we are working towards the same goal! ;)

I should definitely review your changes some more to get a better idea of the details. I'm still getting familiar with the FluidSynth code base, and the sequencer is one of the areas I'm less knowledgeable in. Could you explain some more what you mean by "sequencer as a buffer"? This change was made to allow the sample timer to be used as the source of events, correct? Thereby synchronizing the MIDI with the audio?



> By the way, this reminds me that the sample timer callback should be
> changed to trigger at the beginning of fluid_synth_one_block() instead
> of at the end; that way we will save 64 samples of latency. I'll fix
> this ASAP.
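
To illustrate the reordering being described, a rough sketch with hypothetical names; firing the timers first lets their events affect the block about to be rendered rather than the following one:

typedef struct synth synth_t;         /* opaque for this sketch */
void fire_sample_timers(synth_t *s);  /* hypothetical helpers */
void render_and_mix(synth_t *s);

void one_block(synth_t *s)
{
  fire_sample_timers(s);  /* moved here from the end of the function:
                             events scheduled "now" now affect the block
                             rendered below instead of the next one,
                             saving one 64-sample block of latency */
  render_and_mix(s);
}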

>> #2 is where other worker synthesis threads could be used in the
>> multi-core case, by rendering voices in parallel with the main
>> synthesis thread.  The main thread would additionally be responsible
>> for mixing the resulting buffers into the output buffer, as well as
>> signaling the worker thread(s) to start processing voices.

> Okay. It's hard to know how much we will gain from having more threads
> in this case though.


It could potentially double the number of voices you could have on dual-core CPUs, quadruple it on quad-core, etc. Voice synthesis accounts for a huge portion of the CPU consumption. Currently the synthesis runs on only one CPU core, effectively half of the total system CPU power on many machines today.
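
As a sketch of how that could look (hypothetical names and simplified types; a real implementation would keep persistent workers and signal them per block rather than paying for thread creation on every 64-sample block):

#include <pthread.h>

#define BLOCK_SIZE 64
#define NUM_WORKERS 2   /* e.g. one per CPU core */

/* Hypothetical, simplified types for illustration. */
typedef struct { int active; float buf[BLOCK_SIZE]; } voice_t;

typedef struct {
  voice_t *voices;
  int first, count;     /* the slice of the voice array to render */
} job_t;

void voice_render(voice_t *v);   /* hypothetical DSP routine */

static void *render_slice(void *arg)
{
  job_t *job = arg;
  for (int i = job->first; i < job->first + job->count; i++)
    if (job->voices[i].active)
      voice_render(&job->voices[i]);
  return NULL;
}

/* Render the voices in parallel slices, then mix on the calling
 * (main synthesis) thread. */
void render_block_parallel(voice_t *voices, int nvoices, float *out)
{
  pthread_t tid[NUM_WORKERS];
  job_t job[NUM_WORKERS];
  int per = (nvoices + NUM_WORKERS - 1) / NUM_WORKERS;

  for (int w = 0; w < NUM_WORKERS; w++) {
    job[w].voices = voices;
    job[w].first = w * per;
    job[w].count = job[w].first >= nvoices ? 0 : nvoices - job[w].first;
    if (job[w].count > per)
      job[w].count = per;
    pthread_create(&tid[w], NULL, render_slice, &job[w]);
  }

  for (int i = 0; i < BLOCK_SIZE; i++)
    out[i] = 0.0f;

  for (int w = 0; w < NUM_WORKERS; w++) {
    pthread_join(tid[w], NULL);  /* wait for this worker's slice */
    for (int i = job[w].first; i < job[w].first + job[w].count; i++)
      if (voices[i].active)
        for (int n = 0; n < BLOCK_SIZE; n++)
          out[n] += voices[i].buf[n];   /* mix into the output buffer */
  }
}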



> I'm following now, thanks for the explanation. If I were you I would
> start with the simpler solution (which I assume will be to keep
> everything in the synthesis thread) and implement the rest in a later
> step if the simpler solution turns out to be problematic.


Agreed. I'm going to try to keep it simple for the moment. Since MIDI events are already conveniently encapsulated in fluid_midi_event_t structures, it is really easy to just queue these for the majority of events. There are likely some other types of events that may need structure definitions added. The queue could just end up being an array of unions for the various potential types of events.
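
Something along these lines is what I'm picturing; the exact set of event types and field layouts here is hypothetical:

/* Sketch of the "array of unions" idea. */
typedef enum {
  EVT_MIDI,          /* a fluid_midi_event_t-style channel event */
  EVT_VOICE_PARAM,   /* change a parameter on an existing voice */
  EVT_GAIN           /* e.g. a synth-wide setting change */
} evt_type_t;

typedef struct {
  evt_type_t type;
  union {
    struct { int chan, status, data1, data2; } midi;
    struct { unsigned int voice_id; int gen; float value; } voice_param;
    struct { float gain; } gain;
  } u;
} queued_evt_t;

/* A fixed-size ring keeps enqueueing allocation-free, which matters
 * if the producer is a real-time MIDI thread. */
#define QUEUE_LEN 1024

typedef struct {
  queued_evt_t item[QUEUE_LEN];
  volatile unsigned int head, tail;  /* single producer, single consumer */
} evt_queue_t;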

>> The current implementation of being able to modify existing voice
>> parameters is rather problematic though, when done from a separate
>> thread.  Changes being performed would need to be synchronized
>> (queued).  In addition, using the voice pointer as the ID of the voice
>> could be an issue, since there is no guarantee that the voice is the
>> same as when it was created (it could have been stopped and
>> re-allocated for another event).

> I'm not familiar with the public voice API, but if there is a function
> that gives out voice pointers, and there is no way to be certain whether
> that voice pointer can still be used at a later point in time, I would
> call it a serious public API design flaw. It could be fixed by providing
> a callback when a voice is deallocated, or something.


I think it is safe to assume that the only users of the public fluid_voice_* API are those writing their own SoundFont loaders, for custom instrument synthesis using FluidSynth. This is likely a very small number of programs. The SoundFont loader API is one area which I think could use an overhaul anyway, to add support for things like 24-bit audio and sample streaming. Perhaps this could be one area where API compatibility is broken between 1.x and 1.1; it would affect a minimal set of software. As far as I know, only Swami uses this feature currently.

>> I think we should therefore deprecate any public code which accesses
>> voices directly using pointers for the purpose of modifying parameters
>> in real time.

> Either that, or internally queue them if they're called from the wrong
> thread, just as with the MIDI events.



But you still end up with the case where the pointer value can't be trusted as to which voice it belongs to. Keeping the functions used for instantiating voices, and replacing those used for controlling voices in real time with variants that take a voice ID instead of a pointer, would solve it.
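
One possible shape for such IDs, as a sketch assuming a fixed voice pool (all names hypothetical): pack the slot index together with a generation counter that is bumped each time the slot is reused, and have lookups fail when the generation no longer matches.

#define MAX_VOICES 256

typedef struct {
  unsigned int generation;   /* incremented on each (re)allocation */
  int active;
  /* ... synthesis state ... */
} voice_t;

typedef unsigned int voice_id_t;   /* (generation << 8) | slot */

static voice_t voices[MAX_VOICES];

static voice_id_t alloc_voice(int slot)
{
  voices[slot].generation++;
  voices[slot].active = 1;
  return (voices[slot].generation << 8) | (unsigned int)slot;
}

/* Returns NULL if the ID refers to a voice that has since been
 * stopped and re-allocated, which is exactly the stale-pointer
 * problem being discussed. */
static voice_t *lookup_voice(voice_id_t id)
{
  int slot = (int)(id & 0xff);
  voice_t *v = &voices[slot];
  return (v->active && v->generation == id >> 8) ? v : NULL;
}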


>> Indeed.  As I wrote above, functions could detect which thread they are
>> being called from and act accordingly (queue or execute directly).  If
>> for some reason a queue is maxed out, I suppose the function should
>> return a failure code, though that risks being overlooked.

> Or block/busy-wait until space in the queue is available?


That thought had occurred to me as well. Perhaps that is something else that could be made configurable via a function call.
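
A sketch of how that configurable behavior might look; all names are hypothetical, and FLUID_OK/FLUID_FAILED just mirror the usual FluidSynth return-value convention:

enum { FLUID_OK = 0, FLUID_FAILED = -1 };

typedef enum { QUEUE_FULL_FAIL, QUEUE_FULL_BLOCK } queue_full_policy_t;

typedef struct evt_queue evt_queue_t;                 /* opaque here */
int  queue_try_push(evt_queue_t *q, const void *ev);  /* hypothetical */
void queue_wait_not_full(evt_queue_t *q);             /* hypothetical */

int post_event(evt_queue_t *q, const void *ev, queue_full_policy_t policy)
{
  while (queue_try_push(q, ev) != FLUID_OK) {
    if (policy == QUEUE_FULL_FAIL)
      return FLUID_FAILED;        /* caller must remember to check */
    queue_wait_not_full(q);       /* block until space is available */
  }
  return FLUID_OK;
}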

>>> I would really like some feedback from the community about these
>>> changes, to ensure they don't change the responsiveness or latency, or
>>> mess anything else up. I've tested it with my MIDI keyboard here and I
>>> didn't notice any difference, but my setup is not optimal.

>> Sounds great!  It would be nice to put together a test suite for
>> FluidSynth, for testing rendering, latency and performance.  A simple
>> render-to-file case with a pre-determined MIDI sequence would be a nice
>> benchmark and synthesis verification tool.  I'll look over your changes
>> at some point soon and provide some feedback.

> I'm looking forward to having a test suite, but I hope someone else will
> do it ;-)


Yes, me too on both points! ;)
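
For reference, the render-to-file benchmark could be almost as small as the following, using the public API; error handling is omitted and the file names are placeholders:

#include <stdio.h>
#include <fluidsynth.h>

/* Play a fixed MIDI file through the synth and write raw 16-bit
 * stereo samples to disk, so runs can be timed and compared. */
int main(void)
{
  fluid_settings_t *settings = new_fluid_settings();
  fluid_synth_t *synth = new_fluid_synth(settings);
  fluid_player_t *player = new_fluid_player(synth);
  FILE *out = fopen("render.raw", "wb");
  short buf[2 * 1024];   /* interleaved stereo, 1024 frames */

  fluid_synth_sfload(synth, "test.sf2", 1);
  fluid_player_add(player, "test.mid");
  fluid_player_play(player);

  while (fluid_player_get_status(player) == FLUID_PLAYER_PLAYING) {
    /* Render 1024 frames, left/right interleaved into one buffer. */
    fluid_synth_write_s16(synth, 1024, buf, 0, 2, buf, 1, 2);
    fwrite(buf, sizeof(short), 2 * 1024, out);
  }

  fclose(out);
  delete_fluid_player(player);
  delete_fluid_synth(synth);
  delete_fluid_settings(settings);
  return 0;
}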

>> My responses keep getting larger...  I'll be putting my words into code
>> soon.

> That sounds nice. Are you planning a new structure for storing a MIDI
> event in the queue, or will you use an existing structure? I have been
> working with the structures in event.h and I can't really recommend them
> for this task (they seem more tied to the sequencer than to the synth).

I'll use fluid_midi_event_t for now.


> // David



Regards,

Josh




