... where the first print would emit the string "foo" and the second print would emit the string "bar", reflecting how LilyPond interprets the input.
Does something like this exist?
[The motivation is that I'm researching how LilyPond determines which voice music events are assigned to ... from a user's perspective, based solely on looking at input files. Voice resolution in the example above is of course quite clear; voice resolution when there are multiple anonymous voices, possibly in parallel, has proven extremely tricky as I've written more and more test files, all of which I'll be happy to post if anyone else is interested in that sort of thing.]