
Re: [fluid-dev] improving musical timekeeping


From: address@hidden
Subject: Re: [fluid-dev] improving musical timekeeping
Date: Sun, 9 Feb 2020 11:04:54 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.4.2

On 2020-02-09 09:26, Tom M. wrote:
"FluidSynth is a real-time software synthesizer based on the Soundfont
2 specification"

This sets our scope. If you need a more advanced synth model, have a
look at SFZ.

Thank you. I do not see the conceptual reasoning behind limiting FluidSynth to SF2 (though I do understand there could be a resource/effort issue), but if that is cast in stone, then it looks like that is what I will have to do.

Which then makes everything I write below academic.

> Thanks for your clarification. But given your very unique and personal
> use-case, I do not see how it can be implemented in a synthesizer like
> fluidsynth.

I am not sure my use case is all that unique and personal - perhaps I'm just not describing it properly.

I would like you to consider NotePerformer, which was created to achieve exactly this result (and a lot more). Listen to its demos - all achieved automatically. It has even been remarked that while most sample libraries require laborious manual adjustment of timing and sample switching, this software does most of it automatically and gets very close in most cases.

Or consider the humanize feature of many scoring applications, intended to automatically move a piece from the mathematically perfect grid of the score closer to how human musicians would render it.

Those are commercial products, so I think it is reasonable to assume that they fulfill an existing need.
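
To make the humanize idea concrete, here is a toy sketch in C (the names and the naive uniform jitter model are purely illustrative, not taken from any real product) of what such a pass does to notated onsets:

/* A toy sketch of a "humanize" pass: nudge each notated onset by a small
 * random offset so playback drifts off the exact grid. The jitter model
 * here (uniform, independent per note) is deliberately naive. */
#include <stdio.h>
#include <stdlib.h>

/* Return the notated onset (in MIDI ticks) displaced by a uniform jitter
 * of up to +/- max_jitter ticks. */
static int humanize_onset(int onset_ticks, int max_jitter)
{
    int jitter = rand() % (2 * max_jitter + 1) - max_jitter;
    return onset_ticks + jitter;
}

int main(void)
{
    srand(42);
    /* four quarter notes at 480 ticks per quarter */
    for (int beat = 0; beat < 4; beat++)
        printf("beat %d -> tick %d\n", beat,
               humanize_onset(beat * 480, 10));
    return 0;
}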

SoundFont - as it stands today - is quite tone-deaf to these issues, it seems to me from all of your responses. But SoundFont is not an actively maintained standard (or shall I say, it is an actively unmaintained one), so its future is open and it could be enhanced. Some 20 years have passed; with the improvements in technology, much more could be afforded in computer music today. I campaign for moving forward.

An insane amount of time and effort goes into fighting the limitations of the SoundFont2 standard, trying to shoehorn a reasonable "Klang" (sound) into it: there are no musically meaningful note phases, no natural decay of overtones unless full-length samples are used, stochastic elements are not modeled properly, timbre changes due to velocity are not modeled, note releases are not modeled, there is no proper legato, and so on. Even then, the results are often unsatisfactory. Yet the suffering is perpetuated in a vicious circle: SoundFont2 is the most widespread and supported standard, and hence the lowest common denominator people aim for. Commercial alternatives try to overcome these limits by sheer force (huge sample libraries) and/or by advancing the synthesis models. But they are commercial, so they will never become universally adopted.

In the meantime, popular/commercial music continues to dive into a pit of mud, because the masses trying their hands at making music end up having to fight the tools ("It is (a) well known (problem), that MIDI sheets only work well with the soundbank they have been designed for." - quote from Tom), instead of being able to play with and write music to educate themselves. So unless they intend to base a career on it, they either give up, or they go to EDM (electronic dance music), a genre invented to market computer music despite its musical flaws. And since EDM got popular and established an industry, everyone is now stuck in that tarpit, and non-EDM music made on a computer usually sounds bad. One could argue "oh, but you could always go and study classical, no one forces you to do computer music and suffer its consequences" - yes, but that takes a lot of time and effort (and money for instruments, tuition, etc.), so it is not for the masses, and hence it will not raise the bar of the musical taste of the masses.

Hence we are back to: no good, self-made, self-taught acoustic music for the masses. We are still stuck with (inadequate) 90s sampling technology, and it is still everywhere, keeping people from enjoying studying and writing (non-EDM) music.

This is why I think there is a need to raise the bar.

> (Pls. note that you had two other replies, which you didn't quote, so
> you might have missed them.)

Thank you - indeed I did miss them, as they do not seem to have arrived in my inbox.

Marcus Weseloh writes:

> In some musical contexts it might be correct to say that you want the end of the attack phase exactly on the beat. In other musical contexts you might want the beginning of the attack phase on the beat. Yet another context might want the middle of the attack phase on the beat.

I don't consider this a conceptual obstacle. Indeed, playing pizzicato should be timed differently (hit on the beat) than playing long notes (more like: start the attack on the beat). So let's create separate samples for pizzicato and for long notes (this would be needed for a realistic sound anyway), mark their musical onset position accordingly (or invent and add multiple such markers if needed), and then let some controller (program change or other) switch or interpolate between these timing points. In my mind these points are as tightly coupled to the sample as loop points are; hence they, too, belong in the realm of the soundfont creator, who sets them, and of the sampler/synthesizer, which reproduces them. But if you have no attack phase marked at all (because the technology does not allow it to be marked and used), then you as a composer are prevented from expressing such meaningful musical intents (which scoring/notation systems can already capture, so the computer would just have to act on them) - instead, you are relegated to manually fiddling with "milliseconds" or "ticks".
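
To illustrate (a sketch only, with hypothetical names - this is not FluidSynth API): the onset marker would live next to the loop points in the sample data, and the sequencer would pre-trigger the voice so the marker lands on the beat, with a controller interpolating between "trigger on beat" and "onset on beat":

/* Sketch only: hypothetical data layout and scheduling helper. */
#include <stdio.h>

typedef struct {
    unsigned int loop_start;    /* frames, set by the soundfont author */
    unsigned int loop_end;      /* frames */
    unsigned int musical_onset; /* frames: where the note "speaks" -
                                   end of attack for a bowed string,
                                   near 0 for a pizzicato hit */
    unsigned int sample_rate;   /* Hz */
} sample_info_t;

/* Milliseconds before the notated beat at which the voice must start so
 * the onset marker coincides with the beat. align in [0,1]: 0 = trigger
 * on the beat, 1 = musical onset on the beat, in between = interpolate. */
static double pre_trigger_ms(const sample_info_t *s, double align)
{
    double onset_ms = 1000.0 * s->musical_onset / s->sample_rate;
    return align * onset_ms;
}

int main(void)
{
    sample_info_t bowed = { 4200, 98000, 3528, 44100 }; /* 80 ms attack */
    printf("start %.1f ms early\n", pre_trigger_ms(&bowed, 1.0));
    return 0;
}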

> So what you describe as a flaw - that synths ignore the meaning of the note onset of samples - is actually a feature. It gives musicians a consistent and predictable system that they can use to add meaning themselves.

Considering how these timings change from soundfont to soundfont, I consider them neither consistent nor predictable. To exaggerate the point: why don't we apply the same logic to loop points and consider those the responsibility of the composer too? After all, the repeating loop length gives a rhythm/beat to the sound of the sample, which ideally should be coordinated with the tempo and events of the musical piece so as not to interfere or clash with them (let alone the fact that when a sample is used for multiple pitches and gets transposed during playback, the beat of its loop necessarily changes as well, further muddying the sound).
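
To make that parenthetical concrete (a sketch with illustrative names, not FluidSynth code): the repetition rate of a loop scales with the resampling ratio, so transposing a sample also transposes the "beat" of its loop:

#include <math.h>
#include <stdio.h>

/* Repetition rate (Hz) of a loop of loop_frames frames when the sample is
 * resampled `semitones` away from its root pitch (equal temperament). */
static double loop_rate_hz(double sample_rate, unsigned int loop_frames,
                           double semitones)
{
    double playback_rate = sample_rate * pow(2.0, semitones / 12.0);
    return playback_rate / loop_frames;
}

int main(void)
{
    /* a 22050-frame loop at 44.1 kHz repeats twice a second at root... */
    printf("%.3f Hz at root pitch\n", loop_rate_hz(44100.0, 22050, 0.0));
    /* ...but played 7 semitones up it repeats ~1.5 times as fast */
    printf("%.3f Hz at +7 semitones\n", loop_rate_hz(44100.0, 22050, 7.0));
    return 0;
}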

Reinhold Hoffmann writes:

> I totally agree to what Marcus says. It is up to the musician/composer and the style of music.

So sure, make it controllable by the composer. But in the same way that you do not expect the composer to specify sample rates in Hz to make samples sound at the desired musical pitch - you abstract that away into the SoundFont and let the composer specify the desired musical note directly, with the synthesizer calculating the necessary sample rate conversion automatically - why do you expect them to manually worry about such a low-level detail so tightly coupled to individual samples?
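
For illustration, this is the kind of calculation the synthesizer already performs behind the scenes (a simplified equal-temperament sketch; real SF2 playback additionally applies per-sample pitch correction and tuning generators):

#include <math.h>
#include <stdio.h>

/* Resampling ratio needed to play a sample recorded at root_key (a MIDI
 * note number) at target_note, assuming equal temperament. */
static double resample_ratio(int target_note, int root_key)
{
    return pow(2.0, (target_note - root_key) / 12.0);
}

int main(void)
{
    /* a sample rooted at middle C (MIDI 60) played at A4 (MIDI 69) */
    printf("ratio %.4f\n", resample_ratio(69, 60)); /* ~1.6818 */
    return 0;
}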

> I think that the required feature can only be created by a musician or a composer (e.g. by recording and using the necessary protocol elements) rather than by a synthesizer “afterwards”.

Just like notes (pitches) and rhythm - but you don't force them to specify those in hertz, milliseconds or samples. Why do you want to force them to here? And then why don't you force them to specify loop points as well? Why do you consider one to be inherently coupled to the sample, but not the other? I fail to see the distinction. Yes, the synthesizer is conceptually forced to act on loop points automatically (lest a note end prematurely), and I suppose they are not alterable by the composer, while the timing points I suggest could be switched or interpolated between under the composer's control - but where those timing points sit within the sample is as inherently coupled to that sample as the position of the loop points.

- HuBandiT


