
Re: MusicXML project platform


From: pls
Subject: Re: MusicXML project platform
Date: Sat, 27 Apr 2013 12:01:52 +0200

Hey all,

In case you hadn't noticed: last year at the Waltrop meeting, Julien Lerouge started work on a LilyPond-to-MusicXML converter using LilyPond engravers, based on Jan Nieuwenhuizen's to-xml.scm script and some of John Mandereau's ideas (see http://lists.gnu.org/archive/html/lilypond-devel/2012-08/msg00651.html). It's still a stub, but it's a start: https://github.com/Philomelos/lilypond-ly2musicxml.

FWIW, it might also be worth thinking about a music21-LilyPond interface. music21 (http://mit.edu/music21/), a Python toolkit for computer-aided musicology, can already import quite a few musical data formats and export MusicXML (and LilyPond)…
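For a taste of that workflow, here is an untested sketch (assuming a stock music21 install; the tinyNotation melody below just stands in for any format music21 can parse, such as MusicXML, ABC, MIDI, or Humdrum):

# Round trip through music21: parse something, write MusicXML and LilyPond.
from music21 import converter

# Parse a short melody written in music21's built-in tinyNotation syntax.
score = converter.parse("tinynotation: 4/4 c4 d e f g1")

# Export the same stream as MusicXML...
score.write("musicxml", fp="melody.musicxml")

# ...and as a LilyPond source file (assuming music21's LilyPond writer).
score.write("lilypond", fp="melody.ly")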

hth
patrick
On 27.04.2013, at 01:30, Paul Morris <address@hidden> wrote:

On Apr 26, 2013, at 4:31 PM, Curt <address@hidden> wrote:

My own sense after reviewing the discussions I've been able to find:

I suspect that focusing on music-stream export first is too "half-a-loaf" and would require major rework once we add positioning. And re-parsing the input file for positioning information is not as good an option as figuring out how to hook export logic into LilyPond's positioning logic itself. It seems the right way is to make the positioning logic retain knowledge of the music stream.

I hear your concern, but as I understand it, the music stream is the closest the music data in LilyPond gets to the structure and level of abstraction of a MusicXML file. (MusicXML 1.0 included no positioning/layout info; that was added in 2.0, so in a sense we could follow the same progression and target 1.0 first.) That is why the people who know LilyPond's internals are recommending this approach.
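To illustrate why that progression is workable, here is a hypothetical Python snippet (the element names follow the MusicXML spec, but the surrounding document with part-list, measures, and divisions is omitted) showing that layout data is purely additive to a note:

import xml.etree.ElementTree as ET

# A bare note carrying only musical content, roughly MusicXML-1.0-style.
note = ET.Element("note")
pitch = ET.SubElement(note, "pitch")
ET.SubElement(pitch, "step").text = "C"
ET.SubElement(pitch, "octave").text = "4"
ET.SubElement(note, "duration").text = "4"   # relative to omitted <divisions>
ET.SubElement(note, "type").text = "quarter"
print(ET.tostring(note, encoding="unicode"))

# Positioning can be bolted on later without touching the content above.
note.set("default-x", "76.5")  # horizontal offset in tenths of staff space
print(ET.tostring(note, encoding="unicode"))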

Especially Han-Wen: "you would probably have to use some sort of dual approach, where you store the stream, label each event, and then you trace back the grobs during output stage to their originating events. You can use the ordering from the stream to output the XML elements in the correct order."

I take it that the music stream has the order needed for the XML file, and that this order is no longer present once the stream is converted to grobs and their positioning is determined. So use the stream's order as the framework, and then trace the grobs back to their originating events in the stream to add the positioning once it has been determined.[1]
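To make that concrete, here is a toy Python sketch of the idea. Every name in it is illustrative; none of this is LilyPond's actual internals:

from collections import defaultdict

# 1. The music stream: labeled events, already in the order the XML needs.
events = [
    {"id": 0, "pitch": "C4", "type": "quarter"},
    {"id": 1, "pitch": "D4", "type": "eighth"},
]

# 2. Layout output: several grobs per event (notehead, stem, flag, ...),
#    each keeping a back-reference ("cause") to its originating event.
grobs = [
    {"kind": "NoteHead", "cause": 0, "x": 10.0},
    {"kind": "Stem",     "cause": 0, "x": 11.2},
    {"kind": "NoteHead", "cause": 1, "x": 25.0},
    {"kind": "Stem",     "cause": 1, "x": 26.2},
    {"kind": "Flag",     "cause": 1, "x": 26.2},
]

# 3. Trace grobs back to their originating events.
grobs_for_event = defaultdict(list)
for g in grobs:
    grobs_for_event[g["cause"]].append(g)

# 4. Emit notes in stream order; positions are traced from the grobs and
#    attached last, so a first iteration can simply leave them out.
for ev in events:
    heads = [g for g in grobs_for_event[ev["id"]] if g["kind"] == "NoteHead"]
    pos = f' default-x="{heads[0]["x"]}"' if heads else ""
    print(f'<note{pos}><pitch>{ev["pitch"]}</pitch>'
          f'<type>{ev["type"]}</type></note>')

The point is step 4: the stream, not the grob collection, drives the order of the output, and the grob data only decorates it.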

This approach has the big advantage that you can do a first iteration without the positioning info and get a lot more benefit for a lot less effort. And one of the biggest constraints here is the volunteer labor force, since a lot of other areas are being worked on by only a few people.

One of the key variables is how important the positioning data really is. To my mind it is not that important relative to having an export of the basic musical data; positioning is icing on the cake. In some cases it may not be used at all, as with a publisher who will do their own custom positioning anyway, or with Braille notation.

That's my understanding at least, but I'm no expert. At any rate, I imagine that your insight into organizing the work, identifying dependencies, tasks, etc., might help.

Cheers,
-Paul

[1] That is, after the music stream data goes to the various engravers and is converted into graphical objects (grobs). A note from the input file that has a pitch and a duration is converted into separate grobs for a notehead, a stem, a flag, etc., and only at that point is its positioning determined in relation to other grobs. Hence, as Mike Solomon said: "there is not a one-to-one correspondence between events in the music stream and musical objects created."






