

From: Heikki Tauriainen
Subject: Documentation for issues 3601 (based on 3581), 4042 (NR: "MIDI output", Section 3.5)
Date: Sun, 10 Aug 2014 00:14:27 +0300


In response to the previous discussion, here's some new documentation
for Section 3.5 ("MIDI output") of the
Notation Reference (v2.19.11).  The main goal of the proposed additions
is to describe the MIDI context properties implemented in issues 3581
(with issue 3601 about the missing documentation) and 4042.  (I'm the
author of the code patches in those issues.)

To support this goal, I'll also propose changes to some existing
sections of the documentation: these changes include an attempt to (at
least partially) document the Score.midiChannelMapping context property,
and an expanded description of MIDI channels.

The changes proposed (in plain text) to the documentation are
interleaved with my comments and questions.  If it'd be more useful to
format the actual changes in some other way, please tell me what I need
to do.  (I apologize in advance for any procedural mistakes I've
probably made here, because the changes likely already qualify as the
"larger contributions" mentioned in the Contributor's Guide; however,
from the previous discussion I understood that it would still be OK to
just send these suggestions to the bug list.  This is the first time
I've ever proposed any changes to the documentation...)

Best regards,
Heikki Tauriainen


Based on my understanding of the LilyPond implementation, the fourth
paragraph in Section 3.5 "MIDI output" ("The MIDI output allocates a
channel...") is not strictly accurate as the allocation of MIDI channels
depends on the value of the Score.midiChannelMapping context property:
using a separate channel for each staff is only the default setting.

As the MIDI section of the Notation Reference doesn't currently mention
the Score.midiChannelMapping context property (it is listed only in
Appendix A.17), which nevertheless plays a part in the allocation of
MIDI channels for MIDI output, I propose the following changes to the
documentation:
Expand the fourth paragraph of Section 3.5 into a separate
subsection about MIDI channels, and move it between sections 3.5.1
("Creating MIDI files") and 3.5.2 ("MIDI Instruments").

  (Why place it after 3.5.1:) Section 3.5.1 gives all the
  information that is needed for generating MIDI files, 
  allowing users to already start experimenting with MIDI 
  without going (yet) into any technical details about the 
  structure of MIDI files in the documentation.

  (Why place it before 3.5.2:) The MIDI channel mapping
  affects the behavior of all MIDI-related context
  properties (probably most importantly, the MIDI
  instrument).  Assuming a basic understanding of the MIDI
  channel mappings before describing any of the MIDI-related
  context properties will help in documenting them more
  concisely.

Of course, these are just suggestions; feel free to ignore, edit, or
move around any of the proposed changes.

Also move the "Changing MIDI output to one channel per voice" snippet
at the end of the current Section 3.5.1 to the new subsection.

Proposed text for the new subsection:

3.5.x MIDI channels

When generating a MIDI file from a score, LilyPond will automatically
assign every note in the score a MIDI channel on which it should be
played on a MIDI device.  A MIDI channel has a number of controls
available, for example, to select the instrument used to play the notes
on that channel, or to request the MIDI device to apply various effects
to the sound produced on the channel.  At any time, every control on a
MIDI channel can have only a single value assigned to it (the value can
be modified, however, for example to switch to another instrument in
the middle of a score).  This section describes how LilyPond maps notes
to MIDI channels, and the options available to change the default
behavior.

By default, LilyPond will try to reserve a separate MIDI channel for
every staff defined in a score: in other words, all notes in the voices
contained within a staff will share the same MIDI channel.  However,
because the MIDI standard supports only 16 channels per MIDI device
(one of which is reserved for drums - see Section "Percussion in MIDI"
for more information), this limit on the number of channels also limits
the number of notes which can be played, for example, using different
instruments at the same time.  Trying to use too many MIDI channels
will result in some of the existing channels being reused, which may
lead to output that does not match the intended result.
To work around this limitation on the maximum number of MIDI channels,
LilyPond supports a number of different modes for MIDI channel
allocation, selected using the Score.midiChannelMapping context
property.  This context property can be assigned one of the following
values:
#'staff

    Use a separate MIDI channel for every staff in the score
    (the default).  All notes in the voices contained within
    a staff will share the MIDI channel of their enclosing
    staff.
#'instrument

    Use a separate MIDI channel for every distinct MIDI
    instrument used in the score.  This means that all notes
    played with the same MIDI instrument will share the same
    MIDI channel, even if the notes come from different
    voices or staves.  This setting can improve the
    allocation of MIDI channels in scenarios where the
    number of staves in a score exceeds the number of
    available MIDI channels, but the number of different
    MIDI instruments still remains within this limit.

#'voice

    Use a separate MIDI channel for each voice in the score
    that has a unique name among the voices in its enclosing
    staff.  (Voices in different staves are always assigned
    separate MIDI channels, but any two voices contained
    within the same staff will share the same MIDI channel
    if they have the same name.)

For example, the default MIDI channel mapping of a score can be changed
to the "instrument" setting by inserting the following \context block in
the \midi block of an input file:

\score {
  ...
  \midi {
    \context {
      \Score
      midiChannelMapping = #'instrument
    }
  }
}

Comments and (open) questions:

The above description of the midiChannelMapping = #'voice setting is
really only an assumption based on the short description of the context
property in Appendix A.17 of the Notation Reference, and the C++ source
code in lily/staff-performer.cc (more precisely, the
Staff_performer::acknowledge_audio_element and the
Staff_performer::get_channel functions) - I've not used this mode myself
except for trying to verify the above description.

On the surface, this setting looks like it could lead to easily running
out of MIDI channels, but in the implementation there seems to be
special processing targeted to handle this case - it looks as if the
MIDI generation code will in this mode emit some additional MIDI events
to change the nature of MIDI tracks when generating MIDI files, possibly
(?) to allow reusing the same MIDI channel numbers for channels that
will in fact be independent.

Unfortunately, fully documenting any possible special allowances
admitted by this mode of operation (and the use case for which this mode
has been originally created) will probably need input from someone who
understands the implementation better than I do.

It is difficult to create a short realistic example which would
demonstrate the difference in LilyPond's behavior when using different
values for the Score.midiChannelMapping context property.  This would
probably require creating a score with enough (>15) staves to force some
of the MIDI channels to be reused with the default MIDI channel
allocation setting.  An alternative would be to add examples of using,
for example, the lilymidi tool to inspect the actual MIDI channel
allocation in a MIDI file obtained from a smaller input score, using
different values for the context property - however, introducing this
external tool here doesn't seem to fit well within the scope of this
section.
The "Changing MIDI output to one channel per voice" snippet at the
end of (the current) Section 3.5.1 of the documentation also has to do
with customizing the mapping from instruments to MIDI channels.  An
obvious question is how this method of modifying the MIDI channel
allocation relates to the midiChannelMapping context property - for
example, could it be possible to rewrite the snippet using the context
property, without moving the Staff_performer between contexts?

I initially thought this could be possible, but, trying to edit the
snippet to make use of the context property, I wasn't successful in
reproducing the exact same end result using any of the values available
for the context property alone - that is, without also moving the
Staff_performer around while keeping both voices within the same staff.
As a result, I believe this snippet to still be relevant for its
intended purpose.

(Specifically, only setting Score.midiChannelMapping to #'voice does not
help to get the expected output since, as long as the Staff_performer
stays in the staff context, all instrument changes still apply to all
voices within the staff - this is exactly what moving the
Staff_performer changes.)
One way to keep the snippet in the documentation would be to continue
the new section about MIDI channels (after introducing the
Score.midiChannelMapping context property) with the text

Multiple voices with separate MIDI channels in a single staff

In scenarios where a score has multiple voices that share the same staff
but still need separate MIDI channels (for example, because the voices
should use different MIDI instruments), none of the values available for
the Score.midiChannelMapping context property may be sufficient to
obtain correct MIDI channel allocation automatically for the voices.
The voices can nevertheless be given separate MIDI channels by moving
the Staff_performer from its default staff context to the voice context
as shown in the following code:

[the original snippet code here]
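In outline, the technique amounts to removing the Staff_performer from
the Staff context and adding it to the Voice context in the \midi block
(a sketch only; the actual snippet contains a complete score around
this):

\midi {
  \context {
    \Staff
    \remove "Staff_performer"
  }
  \context {
    \Voice
    \consists "Staff_performer"
  }
}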

I should also mention that the snippet's end result could be achieved
without modifying any context properties, or without moving
Staff_performer around by storing the voices into macro variables, and
using a separate \score blocks for generating typeset output and MIDI:

flute = \new Voice \relative c''' {
  \key g \major
  \time 2/2
  r2 g-"Flute" ~
  g fis ~
  fis4 g8 fis e2 ~
  e4 d8 cis d2
}

clarinet = \new Voice \relative c'' {
  a2. b8 a
  g2. fis8 e
  fis2 r
}

\score {
  \new Staff <<
    \flute
    \clarinet
  >>
  \layout { }
}

\score {
  <<
    \new Staff \with { midiInstrument = #"flute" } \flute
    \new Staff \with { midiInstrument = #"clarinet" } \clarinet
  >>
  \midi {
    \tempo 2 = 72
  }
}
The reason I mention this is just my habit of trying to avoid moving
engravers or performers between contexts whenever possible (moving
engravers or performers around always makes me feel like I could be
tampering with internal behavior that's not really supposed to be
modified without a full understanding of the implications for the
interaction between engravers and performers).  That's why I'd consider
using two separate \score blocks a more easily comprehensible and thus
"safer" solution.

As the Score.midiChannelMapping context property fundamentally affects
the way LilyPond translates staves into tracks in a MIDI file, I don't
believe it makes sense to modify the value of this context property in
the middle of a score.  That is, I believe it is safest to set the
property at most once in every score (at the beginning of the score, or
by altering the \Score context in the \midi block as done in the
proposed documentation).


The "MIDI Instruments" section (original numbering 3.5.2)

If it's considered relevant here to relate changing the
Staff.midiInstrument context property to changes in MIDI channel
controls, the following text could be added before the "If the selected
instrument does not exactly match..." paragraph.

Depending on the value of the Score.midiChannelMapping context property,
the above LilyPond examples will either change the MIDI instrument of
the MIDI channel associated with notes in the current staff to the given
one (for values #'staff or #'voice for the Score.midiChannelMapping
context property), or change the MIDI channel used for notes in the
current staff to the one reserved for the given MIDI instrument (for
value #'instrument of the Score.midiChannelMapping context property).
In all cases, the end result is that all notes that occur at the current
or any later moment in the current staff will be played using the new
MIDI instrument until the next instrument change.
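For instance, a mid-staff instrument change is requested with \set,
using the standard Staff.midiInstrument property (a minimal sketch; the
notes are only placeholders):

\score {
  \new Staff {
    \set Staff.midiInstrument = #"flute"
    c'2 d'
    \set Staff.midiInstrument = #"clarinet"
    e'2 f'
  }
  \midi { }
}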


New section on additional MIDI context properties: because of the
similarity of their (internal) behavior with Staff.midiInstrument, the
description of these properties could fit either directly after (or even
into) the "MIDI instruments" section, or, if these context properties
are not likely to be needed as frequently as, for example, MIDI volume
settings (which is probably the case), the section could be placed near
the end of the MIDI section.

Context properties for MIDI effects

LilyPond supports also the following context properties which can be
used to apply various MIDI effects to notes played on the MIDI channel
associated with the current staff, voice, or MIDI instrument (depending
on the value of the Score.midiChannelMapping context property).
Changing these context properties will affect all notes played on the
channel after the change, however some of the effects may even apply
also to notes which are already playing (this depends on the
implementation of the MIDI output device).

The following context properties are supported:

Staff.midiPanPosition

    The pan position controls how the sound on a MIDI
    channel is distributed between the left and right stereo
    outputs.  The context property accepts a number between
    -1.0 (#LEFT) and 1.0 (#RIGHT): the value -1.0 will put
    all sound power to the left stereo output (keeping the
    right output silent), the value 0.0 (#CENTER) will
    distribute the sound evenly between the left and right
    stereo outputs, and the value 1.0 will move all sound
    to the right stereo output.  Values in between can be
    used to obtain mixed distributions between the left
    and right stereo outputs.

Staff.midiBalance

    The stereo balance of a MIDI channel.  Similarly to the
    pan position, this context property accepts a number
    between -1.0 (#LEFT) and 1.0 (#RIGHT).

Staff.midiExpression

    Expression level (as a fraction of the maximum available
    level) to apply to a MIDI channel.  A MIDI device
    combines the MIDI channel's expression level with a
    voice's current dynamic level (controlled using
    constructs such as \p or \ff) to obtain the total volume
    of each note within the voice.  The expression control
    can be used, for example, to implement crescendo or
    decrescendo effects over single sustained notes (which
    LilyPond does not support automatically).  [[** Add link
    to the attached snippet, see below. **]]  The expression
    level ranges from 0.0 (no expression, meaning zero
    volume) to 1.0 (full expression).

Staff.midiReverbLevel

    Reverb level (as a fraction of the maximum available
    level) to apply to a MIDI channel.  This property
    accepts numbers between 0.0 (no reverb) and 1.0 (full
    reverb).
Staff.midiChorusLevel

    Chorus level (as a fraction of the maximum available
    level) to apply to a MIDI channel.  This property
    accepts numbers between 0.0 (no chorus effect) and 1.0
    (full effect).
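For example, assuming the property names listed in Appendix A.17,
panning a staff hard left and adding some reverb could look like this
sketch (the notes are only placeholders):

\score {
  \new Staff {
    \set Staff.midiPanPosition = #LEFT
    \set Staff.midiReverbLevel = #0.75
    c'2 d' e' f'
  }
  \midi { }
}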

Known issues:

As MIDI files do not contain any actual audio data, LilyPond only
translates changes in these context properties to requests for changing
MIDI channel controls in the outputted MIDI files.  Whether a particular
MIDI device (such as a software MIDI player) can actually handle any of
these requests in a MIDI file is entirely up to the implementation of
the device: a device may choose to ignore some or all of these requests.
How a MIDI device will interpret different values for these controls
(generally, the MIDI standard fixes the behavior only at the endpoints
of the value range available for each control), and whether a change in
the value of a control will affect notes already playing on that MIDI
channel or not, is likewise specific to the MIDI device implementation.
When generating MIDI files, LilyPond will simply transform the
fractional values within each range linearly into values in a
corresponding (7-bit, or 14-bit for MIDI channel controls which support
fine resolution) integer range (0-127 or 0-32767, respectively),
rounding fractional values towards the nearest integer away from zero.
The converted integer values are stored as is in the generated MIDI
file.  Please consult the documentation of your MIDI device for
information about how the device interprets these values.
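As a sketch of this conversion for the 7-bit case (in embedded Scheme;
the function name is hypothetical and not part of LilyPond):

#(define (fraction->midi-value frac)
   ;; hypothetical helper: map 0.0 .. 1.0 linearly onto 0 .. 127,
   ;; rounding halves away from zero (upward, since frac is
   ;; non-negative here)
   (inexact->exact (floor (+ (* frac 127) 0.5))))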


There's a LilyPond code snippet (written by Keith OHara) attached to
issue 3601 to demonstrate the use of some of these context properties.

A LilyPond code snippet which defines a music function to help
implementing crescendo/decrescendo effects on sustained notes (or
actually during arbitrary music expressions) using the MIDI expression
control can be found as an attachment to this message.  (The snippet is
likely to be too big to be included in the Notation Reference.)

I've done my best to document the interface to the music function
defined in the snippet.  (However, I don't really like the
"dynamic-to-volume-function", and especially the mandatory
"minimum-absolute-volume" and "maximum-absolute-volume" arguments
myself.  I think the function should, instead of requiring the user of
the function to provide these, get the absolute volume function directly
from the Score.dynamicAbsoluteVolumeFunction context property, and
determine the absolute volume range from the
{Score,Staff}.midi{Minimum,Maximum}Volume context properties, or from
the current MIDI instrument using Score.instrumentEqualizer; however,
reading context properties to alter the behavior of music functions is
out of the scope of my LilyPond programming skills.  Any suggestions on
how the music function could be improved are welcome.)

If the context properties warrant additions also into the "Supported in
MIDI" and "Unsupported in MIDI" lists in the "What goes into MIDI
output?" section, I'd suggest the following changes:

* Add to the "Supported in MIDI" list a new item "Panning,
  balance, expression, reverb and chorus effects", with a
  link to the subsection about the context properties.

* Extend the "Crescendi, decrescendi over a single
  note" item in the "Unsupported in MIDI" list with the
  remark "(however, see [[** Add link to the attached
  snippet **]])".

Attachment: adjust-expression.ly
Description: Text Data
