
Re: [fluid-dev] libinstpatch integration?


From: David Henningsson
Subject: Re: [fluid-dev] libinstpatch integration?
Date: Sun, 08 Jun 2014 10:50:19 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0



On 2014-06-07 19:04, Element Green wrote:
Hello David,

On Sat, Jun 7, 2014 at 1:48 AM, David Henningsson <address@hidden> wrote:



    On 2014-06-07 01:13, Micah Fitch wrote:

        So anyhow, I'm curious about this.  Right now I can't even have
        multiple instances of a 300 MB soundfont without really taking
        up lots of unnecessary memory.  It'd be nice to change that.


    Actually, this has recently been fixed in FluidSynth, though there
    hasn't yet been a release that includes the feature. I.e., if you
    have two different FluidSynth instances that both load the same
    soundfont, they will share the soundfont memory.
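
    From the API side nothing changes for the caller; a minimal sketch
    of two independent instances loading the same file ("big.sf2" is
    just a placeholder path):

        #include <fluidsynth.h>

        int main(void)
        {
            /* Two fully independent synth instances. */
            fluid_settings_t *s1 = new_fluid_settings();
            fluid_settings_t *s2 = new_fluid_settings();
            fluid_synth_t *synth1 = new_fluid_synth(s1);
            fluid_synth_t *synth2 = new_fluid_synth(s2);

            /* Both load the same soundfont; with the sharing code the
             * sample data should only end up in memory once. */
            fluid_synth_sfload(synth1, "big.sf2", 1);
            fluid_synth_sfload(synth2, "big.sf2", 1);

            /* ... render with both synths ... */

            delete_fluid_synth(synth1);
            delete_fluid_synth(synth2);
            delete_fluid_settings(s1);
            delete_fluid_settings(s2);
            return 0;
        }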


        P.S. Here's a totally off-the-wall idea...  to make an LV2
        plugin based on Swami.  That's kind of intriguing to me for
        some reason!  But would probably be a good amount of work.


    FWIW, I made some draft/alpha code for interfacing LV2 and
    FluidSynth a while ago, but I haven't gotten very far with it yet,
    the main reason being lack of time.

    I'm not sure exactly how libInstPatch works, but I've been thinking
    that maybe one could just mmap the entire soundfont instead of
    loading it into RAM. Or possibly load the metadata into RAM, but not
    the sample data.
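
    Just to illustrate the idea, a plain POSIX sketch (untested, no
    FluidSynth specifics):

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Map a soundfont read-only instead of reading it into RAM.
         * Pages are faulted in from disk on first access, and the
         * kernel can share them between processes mapping the file. */
        static void *map_soundfont(const char *path, size_t *size_out)
        {
            struct stat st;
            void *base;
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                return NULL;
            if (fstat(fd, &st) < 0) {
                close(fd);
                return NULL;
            }

            base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            close(fd); /* the mapping stays valid after close */

            if (base == MAP_FAILED)
                return NULL;
            *size_out = st.st_size;
            return base;
        }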



libInstPatch loads instrument files into trees of GObjects (for
example, an SF2 object with SF2Preset, SF2Inst and SF2Sample children,
where the first two have SF2PZone and SF2IZone children which
reference SF2Inst and SF2Sample objects for each zone).  As far as
FluidSynth is concerned, the plugin in Swami "renders" instrument
objects into IpatchSF2VoiceCache objects, which contain a list of
SoundFont-oriented voices (in other words, sample data with the
calculated SoundFont generators) and the trigger criteria (MIDI note
and velocity ranges).  The sample data is also "cached" into RAM.
This occurs at preset selection time.
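
To picture that tree for the SF2 case (layout illustrative only):

    SF2 (the file)
    +-- SF2Preset (one per preset)
    |     +-- SF2PZone (references an SF2Inst)
    +-- SF2Inst (one per instrument)
    |     +-- SF2IZone (references an SF2Sample)
    +-- SF2Sample (sample header and data)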

When a note-on arrives from FluidSynth, the SoundFont loader API is
used to instantiate the voices (based on the event criteria) with the
cached sample data.

So in summary, libInstPatch loads the metadata at file load time and
then incrementally loads sample data on demand and calculates the voice
parameters (at preset selection time).  This ensures minimal note-on
processing.
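
To sketch the note-on side in FluidSynth terms (the fluid_* calls are
real loader-API entry points; cached_voice_t and cache_lookup() are
made-up stand-ins for the IpatchSF2VoiceCache contents):

    #include <fluidsynth.h>

    /* Hypothetical cache entry: a sample plus the generators that
     * were pre-calculated at preset selection time. */
    typedef struct cached_voice {
        struct cached_voice *next;
        fluid_sample_t *sample;
        int gen_count;
        struct { int num; float val; } gens[60];
    } cached_voice_t;

    /* Hypothetical: return cached voices matching key/velocity. */
    cached_voice_t *cache_lookup(fluid_preset_t *preset, int key, int vel);

    static int my_preset_noteon(fluid_preset_t *preset, fluid_synth_t *synth,
                                int chan, int key, int vel)
    {
        cached_voice_t *cv;

        for (cv = cache_lookup(preset, key, vel); cv != NULL; cv = cv->next) {
            fluid_voice_t *voice =
                fluid_synth_alloc_voice(synth, cv->sample, chan, key, vel);
            if (voice == NULL)
                return FLUID_FAILED;

            /* Apply the pre-calculated SoundFont generators. */
            for (int i = 0; i < cv->gen_count; i++)
                fluid_voice_gen_set(voice, cv->gens[i].num, cv->gens[i].val);

            fluid_synth_start_voice(synth, voice);
        }
        return FLUID_OK;
    }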

OK, thanks for the explanation. Is the note-on processing cheaper than what FluidSynth currently does? If so, one could consider moving some processing from note-on time to preset selection time in FluidSynth as well. I'm not sure this would be a good idea though, because we want FluidSynth to do instrument switches fast too.

I would think simply mmapping the audio would cause drop-outs when
attempting to stream directly from it, unless a separate pre-buffering
thread was used (in which case it might not matter whether it's mmapped
or not).  Streaming would be a nice feature, but I see it as separate
from on-demand loading of samples and likely full of its own pitfalls.
I know this is what LinuxSampler does, so I'm sure they are well
versed in the issues involved.


    With modern SSD drives and their low seek times, it should perhaps
    be possible to load things directly from disk while keeping
    reasonably low latency. That's just an untested idea though.


True, SSD might have a better chance of streaming in realtime without
much pre-buffering.  I still think one would need a separate
pre-buffering thread though, especially since you don't want to be
making OS calls from an RT thread.  It might just mean being able to
get away with smaller buffers without underruns.
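
The rough shape I have in mind (all names illustrative, and the
synchronization is hand-waved; real code would need proper atomics):

    #include <pthread.h>  /* start with pthread_create(..., prefetch_thread, v) */
    #include <stdio.h>
    #include <time.h>

    #define RING_FRAMES 16384

    /* Streaming voice: the prefetch thread fills the ring from disk,
     * so the RT thread only ever reads from RAM. */
    typedef struct {
        FILE *file;                /* streaming source */
        short ring[RING_FRAMES];
        volatile long write_pos;   /* advanced by prefetch thread */
        volatile long read_pos;    /* advanced by RT thread */
        volatile int active;
    } stream_voice_t;

    /* Runs at normal priority, so it is allowed to block on disk I/O. */
    static void *prefetch_thread(void *arg)
    {
        stream_voice_t *v = arg;

        while (v->active) {
            long space = RING_FRAMES - (v->write_pos - v->read_pos);

            if (space > RING_FRAMES / 4) {
                /* Refill the contiguous part of the free space. */
                long pos = v->write_pos % RING_FRAMES;
                long chunk = space;
                if (pos + chunk > RING_FRAMES)
                    chunk = RING_FRAMES - pos;
                size_t got = fread(v->ring + pos, sizeof(short),
                                   (size_t)chunk, v->file);
                if (got == 0)
                    break;         /* end of sample data */
                v->write_pos += (long)got;
            } else {
                /* Buffer healthy; nap until the RT side drains a bit. */
                struct timespec ts = { 0, 2 * 1000 * 1000 };  /* 2 ms */
                nanosleep(&ts, NULL);
            }
        }
        return NULL;
    }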

Yeah, I agree it's a risky bet. But in theory, the SSD seek time (~0.1 ms) is well below our usual deadlines, which are in the 1 - 10 ms range for realtime operations. Still, a lot of note-ons might happen at the same instant, so maybe polyphony would need to be reduced to compensate.
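
(Back-of-the-envelope with made-up numbers: if 32 note-ons hit at once and the seeks serialize at ~0.1 ms each, that's already ~3.2 ms of I/O before any synthesis, i.e. most of a tight deadline.)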

It'd just be interesting to try, that's all :-)

And well, for non-realtime operations (such as fast rendering), it'd be optimal just to mmap the file, so it would probably still make sense to implement.

I see libInstPatch/Swami as an easy platform to leverage for achieving
a lot of neat things.  I'm not necessarily suggesting that libInstPatch
become a required dependency though; FluidSynth could still be built in
a more stripped-down form without it.

A benefit of having libInstPatch support is that as more instrument
formats are added to libInstPatch, FluidSynth would benefit directly,
without any extra work (as long as the synthesis constructs of a given
format can be mapped to SoundFont-centric parameters).

You might need to educate me a little on the history here: how come you made libInstPatch in the first place, instead of writing the same functionality inside FluidSynth?

// David


