fluid-dev
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [fluid-dev] Thread safety


From: jimmy
Subject: Re: [fluid-dev] Thread safety
Date: Sat, 6 Jun 2009 10:24:11 -0700 (PDT)

> Date: Thu, 04 Jun 2009 20:10:10 -0400
> From: address@hidden
> 
> Yes.  The main "synthesis" thread, would be the audio
> thread, since it  
> ultimately calls fluid_synth_one_block().  The MIDI
> thread could be  
> separate, but it could also be just a callback, as long as
> it is  
> guaranteed not to block.
> 
> Main synthesis thread's job:
> 
> 1. Process incoming MIDI events (via queues or directly
> from MIDI  
> driver callback, i.e., MIDI event provider callbacks).
> 
> 2. Synthesize active voices.
> 
> 3. Mix each synthesized voice block into the output
> buffer.
> 
> 
> #2 is where other worker synthesis threads could be used in
> the  
> multi-core case.  By rendering voices in parallel with
> the main  
> synthesis thread.  The main thread would additionally
> be responsible  
> for mixing the resulting buffers into the output buffer as
> well as  
> signaling the worker thread(s) to start processing voices.
> 
> 
> > I'm somewhat following your discussion about queues
> and threads but I'm
> > a bit unsure which cases different sections apply to.
> >
> 
> 
> I'm trying to take care of all those cases :)  The
> single core case  
> would incur slight additional overhead from what it is now
> (to check  
> the thread origin of an event), but I think that would be
> very tiny  
> and it wouldn't suffer from the current synchronization
> issues when  
> being used from multiple threads.
> 
> 
> >
> > A problem with separating note-on events from the rest
> is that you must
> > avoid reordering. If a note-off immediately follows
> the note-on, the
> > note-off must not be processed before the note-on. I
> guess this is
> > solvable though, it is just another thing that
> complicates matters a bit.
> >
> 
> If the note-on and off events are originating from the same
> thread,  
> then they are guaranteed to be processed in order, since
> they would be  
> queued via a FIFO or processed immediately if originating
> from the  
> synthesis thread.
> 
> I changed my mind somewhat from what I said before though,
> that the  
> fluid_voice_* related stuff should only be called from
> within the  
> synthesis thread.  Instead, what I meant, was that the
> fluid_voice_*  
> functions should only be called from a single thread for
> voices which  
> it creates.
> 
> It seems like there are 2 public uses of the fluid_voice_*
> functions.   
> To create voices/start them, in response to the SoundFont
> loader's  
> note-on callback and to modify a voices parameters in
> realtime.
> 
> I'm still somewhat undecided as to whether there would be
> any real  
> advantage to creating voices outside of the synthesis
> thread.  The  
> note-on callback is potentially external user provided
> code, which  
> might not be very well optimized and therefore might be
> best called  
> from a lower priority thread (MIDI thread for example)
> which calls the  
> note-on callbacks and queues the resulting voices. 
> Perhaps handling  
> both cases (called from synthesis thread or non-synthesis
> thread) is  
> the answer.  The creation of voices can be treated as
> self contained  
> structures up to the point when they are started.
> 
> The current implementation of being able to modify existing
> voice  
> parameters is rather problematic though, when being done
> from a  
> separate thread.  Changes being performed would need
> to be  
> synchronized (queued).  In addition, using the voice
> pointer as the ID  
> of the voice could be an issue, since there is no guarantee
> that the  
> voice is the same, as when it was created (could have been
> stopped and  
> re-allocated for another event).  I think we should
> therefore  
> deprecate any public code which accesses voices directly
> using  
> pointers, for the purpose of modifying parameters in
> realtime.  We  
> could instead add functions which use voice ID numbers,
> which are  
> guaranteed to be unique to a particular voice.  I'm
> not sure how many  
> programs would be affected by this change, but I know that
> Swami would  
> be one of them.
> 
> >> No, resizing would not be possible.  It would
> just be set to a compile
> >> time maximum, which equates to maximum expected
> events per audio
> >> buffer.  I just implemented the lock-free
> queue code yesterday, using
> >> glib primitives, though untested.
> >
> > That would apply to case 3 and 4 (live playing), but
> for case 1 and 2
> > (rendering) I would prefer not to have that
> limitation. I'm thinking
> > that you probably want to do a lot of initialization
> at time 0. But
> > perhaps we can avoid the queue altogether in case 1
> and 2?
> 
> 
> Indeed.  As I wrote above, functions could detect from
> which thread  
> they are being called from and act accordingly (queue or
> execute  
> directly).  If for some reason a queue is maxed out
> though, I suppose  
> the function should return a failure code, though it risks
> being  
> overlooked.
> 
> 
> >
> >>>> Sure, if it improves things in the short
> term, go ahead add it.  Fixing
> >>>> FluidSynth's threading issues, and doing
> it right, is likely going to be
> >>>> a bit of a larger task than doing simple
> fixes.  So it might be good to
> >>>> try and address the more severe issues,
> while coming up with a long term
> >>>> solution.
> >
> > I've done so now. I did it in two steps, first all the
> underlying work
> > that enables the sequencer to work as a buffer for
> MIDI threads
> > (revision 193), and enabling that feature for
> fluidsynth executable
> > (revision 194). When the synth has better thread
> safety on its own, we
> > revert 194 only.
> >
> > I would really like some feedback from the community
> about these
> > changes, to ensure they don't change the
> responsiveness or latency, or
> > mess anything else up. I've tested it with my MIDI
> keyboard here and I
> > didn't notice any difference, but my setup is not
> optimal.
> >


Here's my my comments on special effects  processing (SFX, for short here) in 
sound synthesis.  But since Midi allow for some live manipulation of some 
paramenters, it may allow for per channel manipulation, too?

I guess one way to describe it is how many "SFX processors" can be used 
concurrently in a chained sequence.  JackRack is one such implementation where 
each plugin is 1 SFX processor.  Each of these SFX processor parameters can 
change in real-time.  I would love to have SFX processing for individual Midi 
channel (individual solo instrument pan, sustain, delayed echo, vibrato...), or 
all channels combined audio signal (like reverb, delayed echo, pitch shift).  
But I guess individual solo instruments with SFX could be simulated by running 
in a separate instance of FS.

So if per channel SFX is possible, please do allow for ways to specify which 
channel(s) these SFX processors should work on.  You may also want to allow 
users to define/specify, or loaded-on-the-fly (maybe some sample code to use) 
these SFX processors from configuration files, and let parameters be changed in 
real-time.

The simplest case is 0, or 1 SFX processor.  More complicated are 2, 3, or more 
sound effects chained together, but too many of these chained together will 
introduce audio lags of some sort.

Of course, you can impose or set a limit on the number of SFX processors, i.e 3 
max per voice/channel, 8 total...  Or if no specific SFX limit is imposed, at 
least give a word of warning so people don't have wild ideas that they can 
chain 5-10 SFX per channel for 32 channels... or 100 SFX together and still get 
real-time response on 1-2 core CPU's, or that the audio would sound like 
anything we can still recognize.

Jimmy








reply via email to

[Prev in Thread] Current Thread [Next in Thread]