lilypond-devel

Re: Is it time to update the Finale 2008 sample in Essay?


From: Abraham Lee
Subject: Re: Is it time to update the Finale 2008 sample in Essay?
Date: Mon, 20 Mar 2023 15:58:45 -0600

On Mon, Mar 20, 2023 at 3:25 PM Jean Abou Samra <jean@abou-samra.fr> wrote:

> Le lundi 20 mars 2023 à 10:26 -0600, Abraham Lee a écrit :
>
> Thanks, Jean, for doing that. I was hoping for a more public discussion to
> see whether creating an issue is even warranted. The essay is a historical
> document, to be sure, so updating the comparison files might not be needed
> at all. It just feels a bit odd to read "we have chosen Finale 2008, which
> is one of the most popular commercial score writers". This was absolutely
> true... once upon a time. Reading it now makes it sound like we had to dig
> way back just to make it seem like Finale isn't good enough and that
> LilyPond does it right. How do Finale/Sibelius/Dorico/etc. do nowadays? Do
> they get it right now? I'm certain folks have asked this question.
>
> For comparison, I just entered the two systems from the essay into
> MuseScore 4 and got practically perfect output. Entering one voice at a
> time (voice 1, then voice 2), MuseScore kept every existing pitch in
> voice 1 intact despite the alterations made in voice 2 (like the flat that
> Finale 2008 leaves out). I didn't have to correct or add anything that was
> missing. Maybe I just got lucky because of how I entered the passage.
> Arguments can be made about other layout decisions, but I think it's hard
> to argue against what MS4 has done compared to the hand-engraved examples:
>
> So, maybe all that's needed is different wording in this section to
> reflect why this comparison made sense *at the time* (like what is
> described at the beginning of the essay)? That would certainly be simpler
> than recreating the comparison (which might not lead to the same
> conclusion it once did).
>
> LilyPond too has evolved a lot in 15 years. You could take a more complex
> example than this relatively simple (in terms of notation) Bach excerpt,
> and re-do the comparison. I'm not sure Finale/Sibelius/Dorico/MuseScore
> would use skylines for spacing objects as opposed to simple boxes.
>
It most certainly has, in so many excellent ways. I've used all of these
major apps to some degree over the past several years, and I've found that
there are many things about how each app lays out a page that really
frustrate me; some completely hide the controls for forcing things onto a
specific page/system/etc. This is one big reason I continue to use LP after
all these years. The layout control is simply superb! My only complaint
here is that there isn't a great mechanism to finely control system
placement on a page (or staves/lyrics/etc. within a system) aside from
explicit vertical placement, which I avoid completely. I wish there were a
more convenient way to do something similar to \once \override
TextScript.X-offset = #5, i.e. shift a grob and then have things re-flow
around it (as opposed to what extra-offset does). Sorry, off on a tangent
there.
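
For concreteness, here's a minimal sketch of the contrast I have in mind
(the notes and the version number are arbitrary, just for illustration):

\version "2.24.0"

\relative c'' {
  % Overriding X-offset changes the position that later layout steps
  % (skylines, outside-staff positioning) see, so other objects can
  % still move out of the way.
  \once \override TextScript.X-offset = #5
  c4^"shifted" d e f |
  % extra-offset is applied at the very end, after spacing and
  % collision handling, so nothing else reacts to the shift.
  \once \override TextScript.extra-offset = #'(5 . 0)
  c4^"shifted" d e f |
}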

> I read that a while ago, Urs Liska organized a "music engraving contest"
> where scores engraved with different score writers were compared, probably
> for the Scores of Beauty blog. That blog is now defunct, but you could try
> to dig into the list archives. It's old, but not 15 years old, and the
> chosen samples could be re-compared today.
>
Yes, those were good times lol. I actively participated in those, partly
from the sidelines and partly on the front lines. Those contests were
difficult to run in a controlled way. Meaning: what is actually being
compared? How do you know who wins? How "out of the box" are we comparing
other software to LP? With LP, it's easy: just don't use any overrides and
you get the default behavior. In other apps, the way things show up is very
dependent on how the entry takes place. And an expert user of software X
can do just as good a job as an expert user of software Y. So, what is
actually being compared in the end? Time to get to a specific result? One
user's ability with their favorite software against another's? That turned
out to be the challenge we ran into, because these were always the
questions that came up. Complaints one way or the other were always about
nuanced differences in output or expectations rather than gross errors by
the application despite a user's best effort. I'm not against this, but I'm
not sure how to make it a practically useful activity, either.

