Tuesday, October 6, 2015

Elm and Web-MIDI

This post is about attempting to learn two new technologies and one old one.  The two new ones are Web-MIDI (which allows you to plug MIDI devices into your computer and do things with music in the browser) and the Elm programming language (which promises at last to bring some coherence to the challenge of writing web applications using the witches' brew that is HTML, CSS and JavaScript).  The old one is JavaScript itself - I've always avoided it like the plague, but now I feel there is a reason to get to grips with it.

Update for Elm 0.17

Now that Elm 0.17 has been released, the description of Elm in this post no longer applies: Signals have been removed, and the rules for writing native modules are changing.  As of today (May 14th), I have deprecated elm-webmidi.

Web-MIDI

Considering how long MIDI has been in existence, you would think that handling it in browsers would be second nature by now, but sadly this is not the case. The W3C Working Draft of the Web MIDI API was only published in March of this year and, at the time of writing, only Chrome has an implementation.  It is not a large document.  The most important features are illustrated by these two functions:
   
    function midiConnect () {
      // request MIDI access and then connect
      if (navigator.requestMIDIAccess) {
         navigator.requestMIDIAccess().then(onMIDISuccess)
      } 
    }

    // Set up all the signals we expect if MIDI is supported
    function onMIDISuccess(midiAccess) {
        // loop over any registered inputs and listen for data on each
        midiAccess.inputs.forEach( function( input, id, inputMap ) {   
          registerInput(input);                 // application-specific bookkeeping
          input.onmidimessage = onMIDIMessage;  // respond to incoming MIDI events
        });      

        // listen for connect/disconnect messages
        midiAccess.onstatechange = onStateChange;
    }  
The requestMIDIAccess function detects whether MIDI is supported in the browser and, if so, hands control to the asynchronous callback onMIDISuccess. This allows you to discover all the MIDI input devices that are connected, to register them, and to attach a callback that responds to MIDI messages from each device (for example key presses on a keyboard). You can handle MIDI output devices the same way, but I will not cover that here. Finally, you can register another callback that listens for connection and disconnection messages as devices are unplugged or plugged back into your computer.
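
The onStateChange handler itself is not shown above. A minimal sketch might look like this (unregisterInput, like registerInput, is assumed to be the application's own bookkeeping):

    function onStateChange(event) {
      // each statechange event carries the MIDI port whose state changed
      var port = event.port;
      if (port.type === "input") {
        if (port.state === "connected") {
          registerInput(port);
          port.onmidimessage = onMIDIMessage;
        } else {
          unregisterInput(port);
        }
      }
    }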

Elm 0.16

Elm is a Functional Reactive Programming language.  It replaces the traditional callbacks used by JavaScript with the concept of a Signal. Such a signal might be, for instance, a succession of mouse clicks or key presses - in other words, it represents a stream of input values over the passage of time. Elm forces you to merge all the signals that you encounter in your program, and it then routes this composite signal to one central place.  Here a single function, foldp, operates, which has the following signature:
   
   foldp : (a -> s -> s) -> s -> Signal a -> Signal s
This is a bit like a traditional fold, except that it operates over time. It takes three parameters - (1) a function that knows how to update the global state from an incoming value, (2) an initial state and (3) an incoming signal - and composes them so that you get a signal of the overall state. Whereas the traditional JavaScript model would have you deal with a set of individual callbacks, each operating on the global state of your program in often incomprehensible ways (because callback hell is so difficult to reason about), the Elm model simply requires you to hold global state and refresh it completely each time any signal comes in. That this approach doesn't slow reactivity to a crawl is down to one thing - the virtual DOM. Rather than writing to the DOM directly, you build an abstract version of it, and this comes with clever diffing algorithms so that when you view your state as HTML, only a small amount of rewriting needs to occur.
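
To make this concrete, here is a minimal Elm 0.16 sketch (the canonical click counter, nothing to do with MIDI): foldp folds the click signal into a running count, and the count is then mapped into something viewable.

   import Graphics.Element exposing (Element, show)
   import Mouse

   -- fold each click into a running count, starting from zero
   clickCount : Signal Int
   clickCount = Signal.foldp (\_ n -> n + 1) 0 Mouse.clicks

   main : Signal Element
   main = Signal.map show clickCount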

In other respects, Elm syntax is very like Haskell's, with occasional borrowings from F# for its composition operators. What is lacking, though, is typeclasses. This means, for example, that you can't just use map to operate on lists - you have to qualify it as List.map, because Elm can't otherwise distinguish it from functions such as Signal.map.

Elm-WebMidi

To build a MIDI library for Elm, you have to write 'Native' JavaScript code which takes each of the callbacks described earlier and turns them into Elm signals.  I'll say a little more about how this is done later on, but for now, assume that there are three separate signals with the following type signatures:
   
   -- a connection signal
   Signal MidiConnect

   -- a disconnection signal
   Signal MidiDisconnect

   -- a note signal
   Signal MidiNote
The data types MidiConnect, MidiDisconnect and MidiNote are simply records that gather together the appropriate attributes from the Web-MIDI interface. MidiConnect signals are emitted by the onStateChange callback for each new connection, but they are also emitted when the web application starts up if any devices happen to be attached already. The library allows us to write an application which lists the various devices (such as MIDI keyboards) as they appear and disappear, and which also displays each key as it is pressed alongside its parent device.
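
The exact definitions belong to the library, but judging by the fields that the application code below accesses, their shapes are roughly as follows (the field types here are plausible guesses rather than the library's actual declarations):

   type alias MidiConnect = { id: String, name: String }

   type alias MidiDisconnect = { id: String }

   type alias MidiNote = { noteOn: Bool, pitch: Int, velocity: Int, sourceId: String }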

Anatomy of an Elm Application

This sort of application is perhaps slightly simpler than the other sample applications you see on the Elm examples page, because there is no direct user interaction with any widgets in the HTML view - all interaction is via the MIDI device.  It uses a standard MVC pattern. The first step is to gather together the three input signals. A MidiMessage algebraic data type is used to represent this disjunction; each signal is mapped to this common type and then the signals are joined together with Elm's mergeMany function.
   
   type MidiMessage = MC MidiConnect | MN MidiNote | MD MidiDisconnect

   -- Merged signals
   notes : Signal MidiMessage
   notes = Signal.map MN midiNoteS

   inputs : Signal MidiMessage
   inputs = Signal.map MC midiInputS

   disconnects : Signal MidiMessage
   disconnects = Signal.map MD midiDisconnectS

   midiMessages : Signal MidiMessage
   midiMessages = Signal.mergeMany [inputs, notes, disconnects]

We then need a model to represent the global state that we wish to keep. This is merely a list of input devices, and associated with each one is an optional MIDI note:
   
   -- Model
   type alias MidiInputState = 
     { midiInput: MidiConnect
     , noteM: Maybe MidiNote
     }

   type alias MidiState = List MidiInputState

and, of course, we need a view of this state. Elm's HTML primitives help to keep this terse:
   
   -- VIEW
   viewNote : MidiNote -> String
   viewNote mn = "noteOn:" ++ (toString mn.noteOn) ++ ",pitch:" ++ 
                 (toString mn.pitch) ++ ",velocity:" ++ (toString mn.velocity)

   viewPortAndNote : MidiInputState -> Html
   viewPortAndNote mis = 
     case mis.noteM of 
       Nothing ->
          li [] [ text mis.midiInput.name]
       Just min ->
          li [] [ text ( mis.midiInput.name ++ ": " ++ (viewNote min)) ]

   view : MidiState -> Html
   view ms =
     div []
       [ let inputs = List.map viewPortAndNote ms
         in ul [] inputs
       ] 
The main program applies the foldp function to produce each new state, and displays it with the view function. The initial state is just the empty list:
   
   -- Main
   initialState : MidiState
   initialState = []

   midiState : Signal MidiState
   midiState = Signal.foldp stepMidi initialState midiMessages

   main : Signal Html
   main = Signal.map view midiState
All that's left to describe is the stepMidi function, which recomputes the global state as each message arrives. It deconstructs the message into its original components using pattern matching:
  
   stepMidi : MidiMessage -> MidiState -> MidiState
   stepMidi mm ms = 
      case mm of 
        -- an incoming MIDI input connection - add it to the list
        MC midiConnect -> 
           { midiInput = midiConnect, noteM = Nothing } :: ms
        -- an incoming note - find the appropriate MIDI input id, attach the note to it
        MN midiNote ->
           let
             updateInputState inputState =
               if midiNote.sourceId == inputState.midiInput.id then
                 { inputState | noteM = Just midiNote }
               else
                 inputState
           in
             List.map updateInputState ms
        -- a disconnect of an existing input - remove it from the list
        MD midiDisconnect ->
           List.filter (\is -> is.midiInput.id /= midiDisconnect.id) ms

Writing a Native Elm Module

There seems, as yet, to be very little documentation about how to go about this. The best approach is probably to look through the core Elm libraries on GitHub and adopt the conventions that they exemplify. You will need to make use of the common runtime JavaScript that Elm passes you, which gives access to core features such as List and Signal. In the Elm-WebMidi library, I made use of two main features. Firstly, Elm tuples are simply JavaScript objects with a discriminator labelled 'ctor' whose value is (say) '_Tuple5' for a 5-member tuple. Secondly, signals can be built simply by using Elm.Native.Signal.make. The JavaScript then returns an object containing the three signals. Alongside the JavaScript, you need an Elm file that declares this interface in Elm terms but defers to the JavaScript implementation. If you are interested, the Elm-WebMidi library and sample program can be found here.
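
As an illustration only - the conventions were never formally documented and changed from release to release - a native module of this era looked roughly like the sketch below. The caching boilerplate and the Signal.input/localRuntime.notify calls follow the core libraries of the time; the names midiNoteS and decodeNote are hypothetical.

    Elm.Native.WebMidi = {};
    Elm.Native.WebMidi.make = function (localRuntime) {
        // standard boilerplate: build the module only once per runtime
        localRuntime.Native = localRuntime.Native || {};
        localRuntime.Native.WebMidi = localRuntime.Native.WebMidi || {};
        if (localRuntime.Native.WebMidi.values) {
            return localRuntime.Native.WebMidi.values;
        }

        var Signal = Elm.Native.Signal.make(localRuntime);

        // a source signal primed with a dummy note value
        var initialNote = { noteOn: false, pitch: 0, velocity: 0, sourceId: "" };
        var midiNoteS = Signal.input('midiNote', initialNote);

        function onMIDIMessage(event) {
            // decodeNote (not shown) turns event.data into a note record
            localRuntime.notify(midiNoteS.id, decodeNote(event));
        }

        return localRuntime.Native.WebMidi.values = {
            midiNoteS: midiNoteS
        };
    };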

Saturday, April 18, 2015

Reverse Engineering MIDI

I am very keen to expand the number of Scandi tunes saved to my tradtunedb site, but I am finding that not enough people are posting tunes - largely because they are put off by the seeming complexity of abc notation. One of my friends told me they'd find it a lot simpler if they could just play the tune on a MIDI keyboard and somehow have it automatically converted to abc. This got me thinking...

The Haskell School of Music

And then, by chance, I stumbled upon the Haskell School of Music (HSoM). This is a very comprehensive Haskell tutorial, chock-full of exercises, in which all the examples are taken from the field of music. It is the brainchild of Paul Hudak, who is both one of the original designers of Haskell and a keen musician. The book is a successor to his earlier Haskell School of Expression, but to my mind it is a great improvement, partly because the treatment of the language is both clearer and deeper, and partly because the exercises benefit from the common theme. Although HSoM is still very much a work in progress, it is remarkably comprehensive. It is split into two principal sections: the first develops a domain-specific language for representing pieces of music, and the second explores the generation, composition and analysis of musical signals, which would allow you, for example, to design your own electronic instrument. All this is achieved by gradually introducing Euterpea, a computer music library developed in Haskell which supports the programming of computer music at both the note level and the signal level.

Euterpea

Euterpea stems from Haskore, an earlier library also developed by Paul Hudak, and is maintained on GitHub. It has at its core the Music algebraic data type:
    
data Music a  = 
       Prim (Primitive a)               --  primitive value 
    |  Music a :+: Music a              --  sequential composition
    |  Music a :=: Music a              --  parallel composition
    |  Modify Control (Music a)         --  modifier
  deriving (Show, Eq, Ord)
where Control is represented like this:
 
data Control =
          Tempo       Rational           --  scale the tempo
       |  Transpose   AbsPitch           --  transposition
       |  Instrument  InstrumentName     --  instrument label
       |  Phrase      [PhraseAttribute]  --  phrase attributes
       |  Player      PlayerName         --  player label
       |  KeySig      PitchClass Mode    --  key signature and mode
  deriving (Show, Eq, Ord)
The Control type allows you to insert a variety of modifying instructions, usually at the phrase level (for example, you can transpose a tune, pick an instrument or indicate dynamic markings), but otherwise Music is extremely straightforward. Primitives represent the notes (or rests) themselves, and you can compose phrases together either serially or in parallel. This is simple but powerful - for example, if you compose individual notes in parallel you get a chord, and if you compose whole phrases of notes in parallel you can define different melodic lines, perhaps played on different MIDI instruments.
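
A couple of lines using Euterpea's standard note and duration helpers show the idea (my own sketch, not an example from the book):

import Euterpea

-- a C major chord: three notes composed in parallel
cMajor :: Music Pitch
cMajor = c 4 qn :=: e 4 qn :=: g 4 qn

-- the chord followed serially by a short run, all played on a harpsichord
phrase1 :: Music Pitch
phrase1 = Modify (Instrument Harpsichord) (cMajor :+: c 4 en :+: d 4 en :+: e 4 qn)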

What is particularly useful is that Euterpea comes with functions to convert between MIDI and this Music data type, which is a good deal more attractive to work with - all you really get from MIDI is an instruction to turn a note on in a particular manner and then, later, to turn it off again. Euterpea manages the conversion by prefacing each note in the tune with a rest whose length is identical to the offset of the note in the tune, and then composing all these two-item phrases in parallel. It thus becomes relatively easy, when trying to produce scores, to identify the notes that start each bar, although no bar indications are present in the Music data type itself.
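
Illustrated by hand, a rising three-note phrase comes back from the conversion in roughly this shape - each note prefaced by a rest equal to its offset, with the pairs composed in parallel:

-- (using the same Euterpea import as above)
line1 :: Music Pitch
line1 = (rest 0 :+: c 4 qn) :=: (rest qn :+: e 4 qn) :=: (rest hn :+: g 4 qn)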

As yet, Euterpea provides no help at all in producing a score of any kind from Music. It has a notion of a function that would provide notation, NotateFun, but this is unimplemented.

Producing Scores

When you want to produce a performance of some kind from Music, things are relatively straightforward. Music is expressive enough to combine different notes together in any manner you wish and Control allows you to plug in your own modifiers, letting you express your own interpretation of the performance. But when you want to go in the opposite direction, things get trickier because the translation into MIDI is lossy - you lose nearly all the contextual information originally applied to phrases.

Accordingly, I don't want to be too ambitious in trying to recreate an abc score. I will limit myself to monophonic MIDI files and to relatively straightforward Scandi tunes with just a single melody line. On the whole these tend to be in standard rhythms, of which the most prevalent is the polska. Polskas are normally written in 3/4 time but are not waltzes - they have an emphasis on the first and third beats of the bar. They come in various forms: the slängpolska is straightforward, dividing each beat into semiquavers, while the triplet polska, as its name suggests, tends to divide each beat into triplets.

You would think that 9/8 would be a better representation for the latter (as in Irish slip jigs) but, by convention, 3/4 is normally used. This means that if you offer a choice of time signature, you have more work to do in translating these polskas into 3/4, because you have to invoke the special abc triplet notation used whenever three notes take the time allotted to two - a triplet of the quavers A, B and c, for instance, is written (3ABc. The same applies to another very common polska form - the so-called short first beat polska, where three notes are played as a regular triplet lasting the first full two beats of the bar.

Representing Scores

Scores will be represented in an algebraic data type Score:
    
data Score a = EndScore
             | Bar Int (Notes a) (Score a)
        deriving (Show, Eq, Ord)

data Notes a = PrimNote a
             | (Notes a) :+++: (Notes a)    -- a group of notes
             | Phrase (Tuplet a)            -- a duplet, triplet or quadruplet
        deriving (Show, Eq, Ord)

-- here Rational defines the type of Tuplet - 
-- (2/3) is two notes in the time of three (duplet) 
-- (3/2) is three notes in the time of two (triplet) 
-- (4/3) is four notes in the time of three (quadruplet) 
data Tuplet a = Tuplet Rational [a]
        deriving (Show, Eq, Ord)
As with Euterpea, the type is polymorphic in the kind of note being represented, allowing you to start with Euterpea's representation and end with one better suited to abc. Although very simple, it is sufficient to represent the set of notes in an abc score, given the restrictions mentioned above - for example, I have dispensed with the parallel constructor because I am only interested in single-line melodies. Other properties of the score, such as the time signature or key signature, are carried by abc as headers and so are represented separately - simply as configuration properties.
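
For example, a single 3/4 bar containing a crotchet, a triplet of quavers and another crotchet could be built like this (using strings in place of a real note type):

-- given the Score, Notes and Tuplet types above
bar0 :: Score String
bar0 = Bar 0 (PrimNote "G" :+++: Phrase (Tuplet (3/2) ["A", "B", "c"]) :+++: PrimNote "G") EndScore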

Imposing Structure

Transformation from MIDI to abc is now a matter of attempting to apply more and more structure to the set of raw notes that you start with. Here are some of the key elements:

Note Duration

Euterpea uses fractional durations but abc uses integral ones. It's sensible to standardise on a smallest duration of a 1/96th note. This is convenient because it is small enough not to lose precision, but has both 3 and 4 as factors and so can represent notes in triplets and quadruplets. A bar of 4/4 music occupies 96 such units, and from this we can deduce the length of the smallest note we can reliably detect (for example, a 1/32 note occupies 3 units), which we call the shortest detectable note.
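
A hypothetical helper makes the arithmetic plain - Euterpea durations are Rational fractions of a whole note, which we round to 1/96-note units:

standardDuration :: Rational -> Integer
standardDuration d = round (d * 96)

-- standardDuration (1/4)  == 24   (a crotchet)
-- standardDuration (1/12) == 8    (one note of a triplet of quavers)
-- standardDuration (1/32) == 3    (the shortest detectable note)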

Bar Lines

MIDI has a notion of time signature, and from this and the rounded note durations and offsets we can work out where the bar lines are intended to fall and thus invoke the Bar constructor. If a note spreads across such a bar line, we have to split it into two notes linked with a tie (itself notated as a note type). We can then label all the bars in the score monotonically from zero. This also gives us a mechanism for issuing end-of-line markers to spread the score out evenly, if we issue them regularly after a certain count of bars. We can also work out where the beats in the music occur and mark each note as either on or off the beat; this helps us to separate note phrases in the abc.
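
Here is a hypothetical sketch of the splitting step, with positions and durations measured in 1/96-note units:

-- split a (position, duration) note event wherever it crosses a bar line;
-- each fragment other than the last would be tied to its successor
splitAtBar :: Int -> (Int, Int) -> [(Int, Int)]
splitAtBar barLen (pos, dur)
  | pos `div` barLen == (pos + dur - 1) `div` barLen = [(pos, dur)]
  | otherwise = (pos, boundary - pos) : splitAtBar barLen (boundary, dur - (boundary - pos))
  where boundary = (pos `div` barLen + 1) * barLen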

Long Notes

When we standardise a note's duration, we may find it has a length of, say, 5/8 or 7/8 of a whole note. This is impossible to notate as a single entity, and so we again split it into two notes that we can notate, joined by a tie - a 5/8 note, for example, becomes a minim tied to a quaver.

Tuplets

If a note does not consist of an exact number of shortest detectable note durations, it is a candidate for embedding in a tuplet. This is true of triplet notes, which have a duration of 1/12; quadruplet notes have a duration of 3/32 and duplet notes 3/16. We then continue to add neighbouring notes to the tuplet until the total duration equals that of a whole number of beats.

Pitches

MIDI is specific about pitches - an F# is always an F#. However, its display in a score depends on the key signature: in the key of C major it is shown as F#, but in the key of G major it is shown simply as F, inheriting its 'sharpness' from the key signature. Conversely, an F natural in this key must be explicitly marked as a natural. To handle this translation, it seems sensible to generate a chromatic scale of note spellings for each possible key and then to translate simply by lookup into this list. MIDI also has a notion of octave, which can be translated directly into an abc octave marker.
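
A hypothetical sketch of the lookup (in abc, ^ marks a sharp and = a natural, so in G major F# is written as a plain F while F natural needs an explicit =F):

-- one spelling per pitch class; only C major and G major are shown here
spellings :: String -> [String]
spellings "Gmaj" = ["C","^C","D","^D","E","=F","F","G","^G","A","^A","B"]
spellings _      = ["C","^C","D","^D","E","F","^F","G","^G","A","^A","B"]

-- spell a MIDI pitch (middle C = 60) in the given key, ignoring the octave
spellPitch :: String -> Int -> String
spellPitch key midiPitch = spellings key !! (midiPitch `mod` 12)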

You also need to pay attention to the way accidentals are represented in the score. Once an accidental is marked in a particular bar, further instances of the same note later in that bar need not be marked explicitly, because they inherit their pitch markers from the previous instance.

Articulation

MIDI has no concept of rests; they exist only as gaps between successive notes. This means we need a heuristic to discriminate between cases where a note simply decays earlier than intended and those where a legitimate rest is intended. Our approach is to identify all such gaps and, where the duration is longer than the shortest detectable note, to insert a rest; otherwise we extend the preceding note by the gap's duration.
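
In code, the heuristic might look like this (a hypothetical sketch; durations are again in 1/96-note units, so the shortest detectable note occupies 3 of them):

-- a note (pitch, duration) or a rest, with durations in 1/96-note units
data Event = Note Int Int
           | Rest Int
  deriving (Show, Eq)

-- decide what to do with the gap between a note and its successor
fillGap :: Int -> Event -> [Event]
fillGap gap (Note p d)
  | gap > 3   = [Note p d, Rest gap]   -- long enough to be a deliberate rest
  | gap > 0   = [Note p (d + gap)]     -- early decay: absorb the gap into the note
fillGap _ e   = [e]                    -- no gap, or not a note: leave unchanged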

Code

The first phase of the project uses MIDI files that were themselves computer-generated (in fact from abc) and so are very regular in rhythm. If you are at all interested, the code is here. A web interface to the MIDI translation is here.