
Re: [fluid-dev] libinstpatch integration?


From: Element Green
Subject: Re: [fluid-dev] libinstpatch integration?
Date: Sat, 7 Jun 2014 11:04:29 -0600

Hello David,

On Sat, Jun 7, 2014 at 1:48 AM, David Henningsson <address@hidden> wrote:


On 2014-06-07 01:13, Micah Fitch wrote:
So anyhow, I'm curious about this.  Right now I can't even have multiple
instances of a 300 MB soundfont without really taking up lots of
unnecessary memory.  It'd be nice to change that.

Actually, this has recently been fixed in FluidSynth, though there hasn't yet been a release that includes the feature. I.e., if you have two different FluidSynth instances that both load the same soundfont, they will share the soundfont memory.
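
From the application's side nothing changes; a minimal sketch of the scenario using the public FluidSynth C API (the soundfont path is made up):

    /* Two independent FluidSynth instances loading the same soundfont.
     * With the sharing feature described above, the second sfload reuses
     * the sample data already in memory instead of duplicating it. */
    #include <fluidsynth.h>

    int main(void)
    {
        fluid_settings_t *set1 = new_fluid_settings();
        fluid_settings_t *set2 = new_fluid_settings();
        fluid_synth_t *synth1 = new_fluid_synth(set1);
        fluid_synth_t *synth2 = new_fluid_synth(set2);

        fluid_synth_sfload(synth1, "/path/to/big.sf2", 1);
        fluid_synth_sfload(synth2, "/path/to/big.sf2", 1); /* shared */

        delete_fluid_synth(synth1);
        delete_fluid_synth(synth2);
        delete_fluid_settings(set1);
        delete_fluid_settings(set2);
        return 0;
    }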


P.S. Here's a totally off-the-wall idea... to make an LV2 plugin based on Swami. That's kind of intriguing to me for some reason! But it would probably be a good amount of work.

FWIW, I wrote some draft/alpha code for interfacing LV2 and FluidSynth a while ago, but I haven't gotten very far with it yet, mainly for lack of time.

I'm not sure exactly how libInstPatch works, but I've been thinking that maybe one could just mmap the entire soundfont instead of loading it into RAM. Or possibly load the metadata into RAM, but not the sample data.
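
For illustration, the plain POSIX version of that idea (error paths trimmed to a minimum):

    /* Sketch of the mmap idea: map the whole soundfont read-only and let
     * the kernel page sample data in from disk on first access. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void *map_soundfont(const char *path, size_t *size_out)
    {
        struct stat st;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return NULL;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return NULL;
        }

        /* Pages are read from disk only when first touched. */
        void *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);  /* the mapping stays valid after close() */

        if (base == MAP_FAILED)
            return NULL;
        *size_out = (size_t)st.st_size;
        return base;
    }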


libInstPatch loads instrument files into trees of GObjects (for example, an SF2 object with children SF2Preset, SF2Inst and SF2Sample, where the first two have SF2PZone and SF2IZone children that reference SF2Inst and SF2Sample objects for each zone).  As far as FluidSynth is concerned, the plugin in Swami "renders" instrument objects into IpatchSF2VoiceCache objects, which contain a list of SoundFont-oriented voices (in other words, sample data with the calculated SoundFont generators) and the trigger criteria (MIDI note and velocity ranges).  The sample data is also "cached" into RAM.  This occurs at preset selection time.
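
To make the tree shape concrete, a rough sketch that walks such a tree and prints each node's type name; the child-enumeration call is an assumption about the container API (something along the lines of ipatch_container_get_children()), and the ipatch_init()/file-loading steps are omitted:

    /* Sketch: recursively print the GObject tree libInstPatch builds for
     * a loaded instrument file (e.g. IpatchSF2 -> IpatchSF2Preset ->
     * IpatchSF2PZone -> ...).  Assumes the child-enumeration call returns
     * an IpatchList of direct children. */
    #include <libinstpatch/libinstpatch.h>

    static void print_tree(IpatchItem *item, int depth)
    {
        g_print("%*s%s\n", depth * 2, "", G_OBJECT_TYPE_NAME(item));

        if (!IPATCH_IS_CONTAINER(item))
            return;

        IpatchList *children =
            ipatch_container_get_children(IPATCH_CONTAINER(item),
                                          IPATCH_TYPE_ITEM);
        for (GList *p = children->items; p; p = p->next)
            print_tree(IPATCH_ITEM(p->data), depth + 1);

        g_object_unref(children);
    }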

When a note-on occurs from FluidSynth, the SoundFont loader API is used to instantiate the voices (based on the event criteria) with the cached sample data.

So, in summary: libInstPatch loads the metadata at file load time, then loads sample data incrementally on demand and calculates the voice parameters at preset selection time.  This keeps note-on processing to a minimum.
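
In loader-API terms, the note-on path looks roughly like the sketch below.  The cached_voice_t layout and the preset->data lookup are made-up stand-ins for the IpatchSF2VoiceCache contents; fluid_synth_alloc_voice(), fluid_voice_gen_set() and fluid_synth_start_voice() are the real entry points:

    /* Sketch of a custom preset noteon callback in FluidSynth's SoundFont
     * loader API.  The voice cache layout is hypothetical; the fluid_*
     * calls are the actual API. */
    #include <fluidsynth.h>

    typedef struct {
        fluid_sample_t *sample;   /* cached sample data                */
        int key_lo, key_hi;       /* trigger criteria: MIDI note range */
        int vel_lo, vel_hi;       /* trigger criteria: velocity range  */
        /* ... pre-calculated SoundFont generators would go here ... */
    } cached_voice_t;

    typedef struct {
        cached_voice_t *voices;   /* built at preset selection time */
        int count;
    } voice_cache_t;

    static int my_preset_noteon(fluid_preset_t *preset, fluid_synth_t *synth,
                                int chan, int key, int vel)
    {
        voice_cache_t *cache = preset->data;

        for (int i = 0; i < cache->count; i++) {
            cached_voice_t *cv = &cache->voices[i];

            /* Skip voices whose trigger criteria don't match the event. */
            if (key < cv->key_lo || key > cv->key_hi ||
                vel < cv->vel_lo || vel > cv->vel_hi)
                continue;

            fluid_voice_t *voice =
                fluid_synth_alloc_voice(synth, cv->sample, chan, key, vel);
            if (voice == NULL)
                return FLUID_FAILED;

            /* Apply the pre-calculated generators here, e.g.:
             * fluid_voice_gen_set(voice, GEN_ATTENUATION, cv->atten); */

            fluid_synth_start_voice(synth, voice);
        }
        return FLUID_OK;
    }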

I would think simply mmapping the audio would cause drop-outs if attempting to stream directly from it, unless a separate pre-buffering thread was used (in which case it might not matter whether it's mmapped or not).  Streaming would be a nice feature, but I see it as separate from on-demand loading of samples and likely full of its own pitfalls.  I know this is what LinuxSampler does, so I'm sure they are well versed in the issues involved.


With modern SSDs and their low seek times, it should perhaps be possible to load things directly from disk while keeping reasonably low latency. That's just an untested idea, though.


True, SSDs might have a better chance of streaming in realtime without much pre-buffering.  I still think one would need a separate pre-buffering thread, though, especially since you don't want to be making OS calls from an RT thread.  It might just mean being able to get away with smaller buffers without underruns.
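
The pre-buffering pattern itself is simple enough; a minimal single-producer/single-consumer sketch (sizes and the one-frame-per-read() disk access are purely illustrative):

    /* A disk thread fills a ring buffer ahead of time; the RT thread only
     * copies from memory -- no syscalls or locks on the audio path. */
    #include <stdatomic.h>
    #include <stddef.h>
    #include <unistd.h>

    #define RING_FRAMES 16384

    static float ring[RING_FRAMES];
    static atomic_size_t wr, rd;   /* monotonically increasing positions */

    /* Disk thread: top the ring up (call in a loop). */
    static void prebuffer_once(int fd)
    {
        size_t w = atomic_load_explicit(&wr, memory_order_relaxed);
        size_t r = atomic_load_explicit(&rd, memory_order_acquire);
        size_t space = RING_FRAMES - (w - r);
        size_t filled = 0;

        while (filled < space) {
            /* Toy granularity: one frame per read(). */
            ssize_t n = read(fd, &ring[(w + filled) % RING_FRAMES],
                             sizeof(float));
            if (n != (ssize_t)sizeof(float))
                break;              /* EOF or error */
            filled++;
        }
        atomic_store_explicit(&wr, w + filled, memory_order_release);
    }

    /* RT thread: never blocks, never calls into the OS. */
    static size_t rt_pull(float *out, size_t want)
    {
        size_t r = atomic_load_explicit(&rd, memory_order_relaxed);
        size_t w = atomic_load_explicit(&wr, memory_order_acquire);
        size_t avail = w - r;
        size_t got = avail < want ? avail : want;

        for (size_t i = 0; i < got; i++)
            out[i] = ring[(r + i) % RING_FRAMES];

        atomic_store_explicit(&rd, r + got, memory_order_release);
        return got;   /* a short count means underrun; caller zero-fills */
    }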

 
// David



I see libInstPatch/Swami as an easy platform to leverage to achieve a lot of neat things.  I don't necessarily suggest that libInstPatch become a required dependency, though, so that FluidSynth can still be built in a more stripped-down form.

A benefit of having libInstPatch support is that as more instrument formats are added to libInstPatch, FluidSynth would benefit directly, without any extra work (as long as the synthesis constructs of a given format can be mapped to SoundFont-centric parameters).

Best regards,

Element
