One more question. Am I correct in assuming the interpolation is done prior to dithering?
Thanks for the added info. By "before it's rendered" I meant before it's sent to the DAC for rendering.
When using the "-a file" option, does it store the results after fluid_synth_write_s16?
The clue may be the dither. If you recall the scope diagrams I sent last week, the "clean" sine wave uses interp 1, and the "dirty" one with the staircase is with interp 0. Linear interpolation can't possibly yield a "clean" sine wave on its own. The best it can do is round off the edges at each staircase transition, and that hardly makes any audible difference--it still sounds awful on my system.
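For illustration, 2-point linear interpolation (interp 1) just draws a straight line between neighboring source samples. This is my own sketch with illustrative names, not FluidSynth's actual code:

```c
/* Illustrative 2-point linear interpolation (the "interp 1" case).
 * Given two neighboring source samples and a fractional phase in
 * [0, 1), it returns a point on the straight line between them.
 * This only rounds the corners of a staircase; reconstructing a
 * clean sine needs a higher-order interpolation kernel. */
static float lerp_sample(float s0, float s1, float frac)
{
    return s0 + frac * (s1 - s0);
}
```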
I see you have a fairly large table for DITHER, and it takes up a substantial amount of memory (48000*2 floating-point values in rand_table). That eats up a lot of Flash on an embedded system. I think I can do the same job with a simple pseudo-random generator and int16_t values, or possibly by modulating the audio output clock, but I'll need to study the FluidSynth code more carefully.
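Something along these lines is what I have in mind (my own sketch with illustrative names, not FluidSynth code): a 4-byte xorshift PRNG generating TPDF dither on the fly, in place of the table:

```c
#include <stdint.h>

/* Tiny xorshift32 PRNG: 4 bytes of state, no table in Flash. */
static uint32_t rng_state = 0x12345678u;

static uint32_t xorshift32(void)
{
    uint32_t x = rng_state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return rng_state = x;
}

/* TPDF dither: the sum of two uniform random values in
 * [-0.5, 0.5) LSB, added before truncation to int16_t,
 * with saturation at the 16-bit limits. */
static int16_t dither_to_s16(float sample /* nominally in [-1, 1) */)
{
    float r1 = (float)(xorshift32() & 0xFFFFu) / 65536.0f - 0.5f;
    float r2 = (float)(xorshift32() & 0xFFFFu) / 65536.0f - 0.5f;
    float v = sample * 32768.0f + r1 + r2;

    if (v > 32767.0f)  v = 32767.0f;
    if (v < -32768.0f) v = -32768.0f;
    return (int16_t)v;
}
```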
I do have the post filter set to max, and reverb/chorus disabled.
--Brad
On Sun, Jan 3, 2016, at 05:52 PM, Element Green wrote:
FluidSynth currently only supports 16 bit audio sample data, such as in the SF2 format.
There are several functions which are used for synthesizing the output, in particular:
fluid_synth_nwrite_float
fluid_synth_process
fluid_synth_write_float
fluid_synth_write_s16
The last one in particular outputs 16 bit audio and performs dithering when converting to signed 16 bit values. I'm not sure if that is what you are observing, though. FluidSynth also has a low pass filter and reverb and chorus effects, so perhaps one of those is affecting the output you are seeing. I would turn off the reverb and chorus effects at least, and perhaps disable the filter by setting the cutoff frequency to its highest value and the Q value to its lowest for the instrument you are testing, and make sure there is no modulation of the filter cutoff occurring from the envelope or oscillators.
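Conceptually, the final step of the s16 path is a scale, clip and cast along these lines (an illustrative sketch, not the actual FluidSynth source; the real code also adds dither before the cast):

```c
#include <stdint.h>

/* Illustrative float -> signed 16-bit PCM conversion with clipping.
 * Scale factor and clamp limits follow the usual 16-bit convention;
 * fluid_synth_write_s16 additionally adds dither before this step. */
static int16_t clip_to_s16(float sample)
{
    float v = sample * 32768.0f;
    if (v > 32767.0f)  return 32767;
    if (v < -32768.0f) return -32768;
    return (int16_t)v;
}
```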
As far as rendering the audio to a file, you could use the "-a file" option for "real time" rendering, or the -F option for faster-than-realtime rendering of a MIDI file with the fluidsynth shell. I'm not sure that will get you what you want, though, since I don't fully understand what you mean by "before it's rendered".
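For example (SoundFont and file names here are placeholders):

```shell
# Fast render a MIDI file to a file:
fluidsynth -F out.wav soundfont.sf2 song.mid

# Or use the "file" audio driver for "real time" rendering to disk,
# with the output name given via the audio.file.name setting:
fluidsynth -a file -o audio.file.name=out.wav soundfont.sf2 song.mid
```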
Element
Regarding the previous post...
It seems this is done in the function fluid_rvoice_buffers_mix in
the file "fluid_rvoice.c".
The output is buf[], which is also of type fluid_real_t.
So I'm assuming the rendering engine can accept floating-point
values.
I'm still trying to see how the linear interpolation is able to
remove most of the low-frequency artifacts. None of my simulations
and experiments produce the output I'm seeing from FluidSynth.
Perhaps the audio h/w has additional filtering functions.
Is there a way I can save an output wave to a file? I want to look
at the audio data before it's rendered.
Thanks,
Brad
On 01/03/2016 01:27 PM, Brad Stewart wrote:
The interpolation routines place the processed data into dsp_buf[]
of type fluid_real_t.
I'm assuming you eventually convert this to int16 for rendering.
Can you point me to the section of the code that does the
conversion?
Are you doing anything else, such as a moving-average filter?
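By a moving-average filter I mean something like this generic sketch (my own illustration of the question, not anything I've found in FluidSynth, which I understand has an IIR low-pass filter instead):

```c
#include <stddef.h>
#include <stdint.h>

/* Generic N-point moving-average low-pass: each output sample is the
 * mean of the current input sample and the preceding ones, up to a
 * window of n samples. Uses a running sum so each sample costs O(1). */
static void moving_average(const int16_t *in, int16_t *out,
                           size_t len, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < len; i++) {
        acc += in[i];
        if (i >= n)
            acc -= in[i - n];                 /* drop oldest sample */
        out[i] = (int16_t)(acc / (int32_t)((i + 1 < n) ? (i + 1) : n));
    }
}
```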
Thanks,
Brad
_______________________________________________
fluid-dev mailing list
--
Brad Stewart
address@hidden