It’s been a long time coming, but MIDI has now officially added MPE and “capability inquiry,” opening up new expression and automatic configuration.

MIDI, of course, is the lingua franca of music gear. AKA “Musical Instrument Digital Interface,” the protocol was first developed in the early 80s and has been a common feature on computers, gear, and quite a few oddball applications ever since. And it’s a bit of a myth that MIDI itself hasn’t changed since its 80s iteration. Part of that impression is because MIDI has remained backwards compatible, meaning changes haven’t been disruptive. But admittedly, the other reason musicians think about MIDI this way is that the stuff they most use has indeed remained fairly unchanged.

Engineers and musicians alike have clamored for expanded resolution and functionality ever since MIDI’s adoption. The announcements made by the MIDI Manufacturers Association aren’t what has commonly been called “HD MIDI” – that is, you don’t get any big changes to the way data is transmitted. But the announcements are significant nonetheless, because they make official stuff you can use in real musical applications, and they demonstrate that the MMA can ratify official changes (with big hardware maker partners onboard). Oh, and they’re really cool.

Standardizing on new expressive ways of playing

First, there’s MIDI Polyphonic Expression, aka MPE. The name says it all: it allows you to add additional expression to more than one note at a time. So, you’ve always been able to layer expression on a single note – via aftertouch, for instance – but now instead of just one note and one finger, an instrument can respond to multiple notes and multiple fingers independently. That means every fingertip on an instrument like the ROLI Seaboard can squish and bend, and a connected sound instrument can respond or a DAW can record the results.

Hardware has found ways of hacking in this support, and plug-ins that require complex per-note information (think orchestral sound libraries and the like) have had their own mechanisms. But now there’s a single standard, and it’s part of MIDI.
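For the technically curious, the trick behind MPE is channel rotation: each new note grabs its own MIDI channel within a “zone,” so ordinary channel-wide messages like pitch bend suddenly become per-note. Here’s a minimal sketch of a sender, assuming the common lower-zone layout (master on channel 1, notes on channels 2–16); it’s illustrative only, with error handling omitted:

```python
# Sketch of an MPE sender: each note gets its own member channel,
# so pitch bend on that channel affects only that note.
# Channel numbers are 0-based, as in the raw MIDI status byte.

MASTER = 0                      # channel 1: zone-wide messages
MEMBERS = list(range(1, 16))    # channels 2-16 carry individual notes

class MPESender:
    def __init__(self):
        self.free = list(MEMBERS)
        self.active = {}        # note number -> member channel

    def note_on(self, note, velocity):
        ch = self.free.pop(0)   # grab the next free member channel
        self.active[note] = ch
        return bytes([0x90 | ch, note, velocity])

    def pitch_bend(self, note, bend14):
        # bend14: 0..16383, 8192 = centre; bends only this note
        ch = self.active[note]
        return bytes([0xE0 | ch, bend14 & 0x7F, (bend14 >> 7) & 0x7F])

    def note_off(self, note):
        ch = self.active.pop(note)
        self.free.append(ch)    # channel becomes reusable
        return bytes([0x80 | ch, note, 64])
```

Send `note_on`, then stream `pitch_bend` for just that finger; an MPE-aware receiver routes each member channel’s bend to its own voice.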

MPE is exciting because it’s really playable, and it’s already got some forward momentum. Major DAWs like Logic and Cubase support it, as do synths like Native Instruments’ Reaktor and Moog’s Animoog. Hardware like the ROLI gear and Roger Linn’s Linnstrument sends MPE, and now there’s even hardware receiving it and translating it to sound – even without a computer. (That’s not just weird keyboards, either – Madrona Labs’ Soundplane showed this could work with new instrument interfaces, too.)

Making MPE official should improve implementations already out there, and standardize interoperability. And it means no more excuses for software that hasn’t picked it up – yeah, I’m looking at you, Ableton. Those developers could (reasonably) say they didn’t want to move forward until everyone agreed on a standard, to avoid implementing the thing twice. Well, now it’s time.

More demos and product compatibility information is in the news, though of course this also means we should soon do a fresh check-in on what MPE is and how to use it, especially with a lot of ROLI hardware out there these days.

MIDI Polyphonic Expression (MPE) Specification Adopted!

Making instruments self-configure and work together

MPE you might have heard of, but there’s a good chance you haven’t heard about the second announcement, “Capability Inquiry” or MIDI-CI. In some ways, though, MIDI-CI is the really important news here – both because it’s the first time the MIDI protocol itself works in a new way, and because it involves the Japanese manufacturers.

MIDI-CI does three things. Here’s their official name, plus what each bit means:

1. Profile configuration – “Hey, here’s what I am!” Profiles define in advance what a particular instrument does. Early demos included an “Analog Synth” and a “Drawbar Organ” draft. You already know channel 10 will give you drum sounds, and General MIDI drum maps will put a kick and a snare in a particular place – but until now, you haven’t been able to easily control particular parameters without going through your rig and setting everything up yourself.

2. Property exchange – save and recall. If configuration tells you what a device is and what it does, the “exchange” bit lets you store and recall settings. Last week, manufacturers showed gear from Yamaha, Roland, and Korg having their instrument settings saved and recalled from a DAW.

The MMA says the manufacturers demonstrated “total recall.” Awesome.

3. Protocol negotiation – the future is coming. Actually, this is probably the most important. Profile configuration and property exchange we’ll need to see in action before we can judge their utility. But protocol negotiation is the bit that will allow gear built now to negotiate next-generation protocols coming soon. That’s what has commonly been called “HD MIDI,” and what hopefully will bring greater data resolution and, ideally, time stamps. Those are features some have found in alternative protocols like Open Sound Control or in proprietary implementations, but which aren’t available in standard MIDI 1.0.

And this “negotiation” part is really important. A future protocol won’t break MIDI 1.0 compatibility. Gear built now with protocol negotiation in mind may be able to support the future protocol when it arrives.
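For the curious, MIDI-CI rides on ordinary Universal SysEx messages, which is why it can pass over existing MIDI 1.0 connections. The sketch below shows roughly what a Discovery message header could look like; the sub-ID values (0x0D for MIDI-CI, 0x70 for Discovery), the version byte, and the broadcast MUID are my reading of the published material, so treat the details as assumptions rather than gospel:

```python
# Hedged sketch: MIDI-CI messages are framed as Universal
# Non-Realtime SysEx. Field values here are illustrative.

def encode_28bit(value):
    """Pack a 28-bit number (e.g. a MUID) into four 7-bit bytes, LSB first."""
    return bytes((value >> shift) & 0x7F for shift in (0, 7, 14, 21))

def ci_discovery_header(source_muid, dest_muid=0x0FFFFFFF):
    # 0x0FFFFFFF: "broadcast" destination MUID (assumed value)
    return bytes([
        0xF0,    # SysEx start
        0x7E,    # Universal Non-Realtime
        0x7F,    # device ID: "to whole MIDI port"
        0x0D,    # sub-ID#1: MIDI-CI (assumed value)
        0x70,    # sub-ID#2: Discovery (assumed value)
        0x01,    # MIDI-CI message version (assumed value)
    ]) + encode_28bit(source_muid) + encode_28bit(dest_muid)
```

The interesting part is that everything stays 7-bit-safe, so a MIDI 1.0 device that doesn’t understand MIDI-CI can simply ignore the SysEx and carry on.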

As musicians, as hackers, as developers, we’re always focused on the here and now. But the protocol negotiation addition to MIDI 1.0 is an essential step between what we have now and what’s coming.

No gear left behind

For all the conservatism of musical instruments, it’s worth noting how different this is from the rest of electronics. Backwards compatibility is important for musical instruments, because a musical instrument never really becomes outmoded. (Hey, I spent long, happy evenings singing with some violas da gamba. Trust me on this.)

The MIDI-CI adoption process here, while it’s not the most exciting thing ever, also indicates more buy-in to the future of MIDI by the big Japanese manufacturers. And that finally means the AMEI is backing the MMA.

Say what?

While even many music nerds know only the MIDI Manufacturers Association, significant changes to MIDI require another organization called the Association of Musical Electronics Industries – AMEI. The latter is the trade group for Japan, and … well, those Japanese manufacturers make gear on a scale that a lot of the rest of the industry can’t even imagine. Keep in mind, while music nerds drool over the Eurorack modular explosion, a whole lot of the world is buying home pianos and metronomes and has no idea about the rest. Plus, you have to factor in not only a different scale and a more corporate culture, but the fact that a Japanese organization involves Japanese culture and language. Yes, there will be a gap between their interests and someone making clever Max/MSP patches back in the States and dreaming of MIDI working differently.

So MIDI-CI is exciting both because it suggests that music hardware will communicate better and inter-operate more effectively, and because it promises to help the humans making music do the same.

But here again is where the craft of music technology is really different from industries like digital graphics and video, or consumer electronics, or automobiles, or many other technologies. Decisions are made by a handful of people, very slowly, which then result in mass usage in a myriad of diverse cultural use cases around the world.

The good news is, it seems those decision makers are listening – and the language that underlies digital music is evolving in a way that could impact that daily musical usage.

And it’ll do so without breaking the MIDI we’ve been using since the early 80s.

Watch this space.

  • I’ll have to see if they’ve revised it from what I’ve seen.

  • Great news!

    Now, can we all agree on middle C? 🙂

  • That is both awesome and welcomed news!

  • Chis T.

    It’s fascinating how OSC has failed to take off as a genuinely used protocol. Too clever by half, perhaps. There is something to be said for MIDI’s simplicity. It just needed greater bit depth.

    • Fair. OSC remains useful for experimental projects, including outside of music. It’s remarkably easy to work with it in code and there are some tools which support it well enough. It still has its place. But not really in the marketplace.
      Standards are also about politics, backwards compatibility, and all the Everett Rogers criteria making an innovation easy/quick to adopt (relative advantage, observability, low-stake testability, compatibility with prevailing norms, and ease of use). OSC “fails” in enough of these dimensions.
      MPE is far from perfect. It doesn’t even solve half of the issues OSC solved, years before. It’s also more like a stopgap solution before HD MIDI delivers us. But it’s been a powerful wedge for ROLI, Geert Bevin, Roger Linn, Jordan Rudess, Haken, Madrona Labs, Eigenlabs, StageCraft, Bitwig, Tracktion, Apple, Cubase, Cycling ‘74, Native Instruments… That’s why the news that Ableton Live 10 still won’t support MPE was a qualitatively different story from their continued lack of OSC support. If MPE has become almost mainstream (and a kind of sine qua non for manufacturers trying to differentiate themselves) and MIDI-CI just got a very important greenlight to open up the future, OSC remains “cutting edge” after such a long time that it’s a quaint thing on its own. In a way, that’s too bad. But it’s not too surprising.

    • onar3d

      OSC and MIDI solve two different problems, and are not competitors really.

      To put it simply:
      – MIDI is for a specific set of devices that have strictly defined namespaces.
      – DMX is for some (other) devices.
      – …
      – OSC is for the rest: those devices that do not have a standard namespace, but their very own, probably dynamic namespace.

      By namespace, I refer to all input and output parameters/values of a device type. In classic MIDI: the notes, CC messages, patch changes, aftertouch, etc. All this is fixed and inflexible.

      So yes, for this reason, OSC will never be as mainstream as MIDI. But, for whenever you want to do something not predicted by the standards, it is, and will be also in the future, a fantastic tool!
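      To make the “own namespace” point concrete: an OSC message is just a padded address string, a type-tag string, and big-endian arguments, so any device can invent whatever addresses it likes. A rough sketch of an encoder in plain Python (the address pattern in the usage note is, of course, made up):

```python
import struct

def _pad(b):
    # OSC strings are NUL-terminated and padded to a 4-byte boundary
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode one OSC message: address pattern, type tags, big-endian args."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, str):
            tags += "s"
            payload += _pad(a.encode())
        else:
            raise TypeError(f"unsupported argument type: {a!r}")
    return _pad(address.encode()) + _pad(tags.encode()) + payload
```

      Something like `osc_message("/drone/3/throttle", 0.5)` yields a complete datagram-ready packet – no registry, no spec body, just whatever namespace the drone (or flame thrower) defined for itself.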

      (Some background on where I’m coming from: I’m in the process of developing an application for all these ‘other’ uses –

      • Dubby Labby

        But then there’s something like Ableton Link: doable with OSC long ago, but it never took off either, while Link is spreading like fire…

        • onar3d

          I’d answer the same way regarding Ableton Link, as for MIDI/MPE/etc – OSC and Ableton Link are not there to solve the same problem.

          Ableton Link defines a standard set of commands.
          OSC is for where there is no such standard set of commands, and it doesn’t make sense to have one.

          With OSC, any set of devices/programs/etc. can be interconnected and made to talk to each other, even though they were never meant to – they never adhered to any common standard, other than that they would at least send and/or receive OSC.

          It is always more convenient to use a standard (set of commands), when that makes sense (Link, MIDI…).

          But what do you do when you want a set of keyboard instruments, alongside full-body motion-capture suits to control a set of flying drones, a robotic church-organ, and flame throwers? OSC 🙂

          What do you do when you want the playing of a group of musicians to control complex real-time generated graphics, and the lights in the ceiling, and the smoke machines? Again OSC.

          You see where this is going!

          • We do get an idea of where these things are going. And they’re pretty awesome.

            OSC is really neat for all sorts of cutting edge projects as well as for one-offs. It’s kind of like Pure Data. Or Perl. Some might use these things in production. And they might maintain their usefulness long after some of the more “mainstream” approaches disappear. But there’s something fascinating about the adoption pattern. And the hype cycles.

            It might not come from OSC enthusiasts but there’s been a notion among some people that OSC could be a way to replace MIDI with something more forward-looking. In some conversations, people may casually dismiss MIDI and praise OSC in such a way as to make it sound like “MIDI is the past, OSC is the future”. Given MIDI’s limitations, there’s been a lot of hope building up as to what could replace it.

            Been guilty of something like this myself, at some points in time. Before getting into MPE through Eigenharp Pico and ROLI Lightpad, was placing way too much weight on OSC’s shoulders.
            For instance, when a Club framboise member presented a Raspberry Pi-based project which was sending MIDI through a laptop while using UDP for the project’s Pi-based sensors, couldn’t help but ask why OSC wasn’t used instead of MIDI (and mumbled something fairly uninformed about MIDI’s transmission rate; feeling quite guilty about this one). Also been investigating OSC-based tools for my own musicking projects, which are actually well-served by MPE. The fact that it was so easy to deal with OSC in things like Sonic Pi, ChucK, Supercollider, and Processing made me think that it should also become part of more mainstream offerings.

            Now that MPE and MIDI-CI are official, it might clear up some confusion. And offer some case studies for research in innovation.

          • Dubby Labby

            Sure, and I know the potential of OSC. I was pointing out that OSC multicast and timestamped messaging is probably far superior to Ableton Link, but it was never adopted as a sync protocol by regular users, probably due to its complexity for them.
            Apps like midi sync link and similar made it easy to bring Link into regular setups, while apps like TouchOSC or Lemur made regular users avoid OSC more than adopt it. OSC could have been a standard, but it’s too nerdy. That’s its best pro and con.

          • Now I get you!

            I’ve never really used Ableton Link since I just use OSC with my own stuff; I should read up on the nitty gritty more and support Link too eventually.

            I’m a bit worried about OSC’s nerdiness too, even more so given my time investment in programming The Wizard of OSC software.

            My hope is, once there is good namespace discovery, and good dynamic software that doesn’t force you to repeatedly type in address patterns all over the place to do anything even remotely useful (my philosophy with ‘TWO’), OSC might take off. But that’s far from a certainty…

  • Ashley Scott

    …so how soon before a manufacturer wants an extension to push a bitmap of their product down a serial cable?

  • Read this over lunch and it pretty much made my day. Since JUCE’s conference last year, and the roundtable with ROLI’s Ben Supper, got pretty excited about the possibilities, gaining the impression that some ratification might be imminent at least for MPE. Agreed that MIDI-CI is probably more important in the long run, but MPE is a gamechanger right now. Can only hope that the official backing will make it more so.
    In fact, my assumption is that JUCE can have a really important role, here. After all, it’s used by a lot of cool devs (and, as everybody knows, it’s owned by ROLI). If, say, Korg were to add MPE support to Gadget on both macOS and iOS, it’d suddenly open things quite a bit. And while Cycling ’74 already has some MPE support, this might give them a boost to add deeper integration in Max 8. As for the existing MPE-savvy DAWs (from Bitwig, Tracktion, Steinberg, and Apple), it suddenly becomes a “line in the sand” separating them from the others (Ableton, Cockos, Avid, Presonus, Propellerhead, Gibs… oh, right!).
    It’s also good news for mobile musicking. From Jordan Rudess to Moog, there has been cool work done on iOS apps with MPE support. That includes things like ThumbJam and AC Sabre along with the iFretless series and a few other synths. If Kymatica and Audiobus start supporting MPE, that could open up new worlds for people.

    Been wondering if CDM could do a kind of explainer about MPE. In my experience, people tend to misunderstand it. They think it’s about doing vibrato on a keyboard or that you can emulate it by having a few knobs. They also tend to dismiss the Lightpad Blocks which, to this day, remain the least expensive MPE controllers around. Once you get the hang of them, it does make for fun and expressive musicking.
    Also hope we’ll get more MPE hardware, including synths. ROLI could come out with a hardware version of Equator. Kai Drange recently showed what could be done with an Axoloti Core as an MPE synth. We just need some manufacturer to run with a similar idea.

  • PaulDavisTheFirst

    there’s a lot of idle chit-chat about applications “just picking up MPE”. i want to clarify this … uh … misconception …

    traditional MIDI features two note-specific messages – on and off – along with various channel messages. there is also polyphonic aftertouch, which is per-note, but is (a) rarely used (b) still conceptualized as closer to a channel message than a per-note one (despite the reality).

    Basically, with traditional MIDI, a note starts (with some given velocity) and it lasts and then it ends (with some velocity, not often provided).

    MPE upends all of this. a note starts, has potentially more than 120 parameters that may evolve over time, and then it ends.

    most/many of you reading this are not programmers, but i can assure you that the data structures and GUI displays you would design for traditional MIDI are VERY different from what you’d do for MPE.

    so … “pick up MPE” isn’t some minor tweak. to do it right means a complete reconceptualizing of how a program stores, manipulates, plays, records and displays MIDI data. in a few cases, the developers were already thinking along those lines and so the adaptation to MPE specifically is not so hard. but in many cases, it really is a very deep change (at least to give discerning users what they will want, or will want soon enough)

    • max

      so let’s say you are a DAW:
      I have 10 fingers and 5 MPE parameters per note = 50 automation lanes (uh, I probably want a more combined view of that)
      let’s say you are a synth:
      5 MPE parameters = 5 modulation slots

      • max

        I don’t see any difference in the way of recording & playing back the midi data?
        just record and play back what I am playing …

        • PaulDavisTheFirst

          you have to store the MPE (CC actually) data “with” the note data in such a way that editing will move the MPE stuff around, delete it, etc. along with the relevant notes. Conventionally, CC data has no particular connection to notes, and a DAW can couple it tightly or not so tightly, depending on user preferences.

          • Tekknovator

            True, check out the VST 3 SDK’s note expression. Controller data can be directly attached to notes. So if you create a VST 3 host, you might profit from that, easing MPE implementation on Mac, Windows, and even Linux.

      • PaulDavisTheFirst

        let’s say you’re a bug report.

        “I am trying to automate SynthFoo 2000 using FullSeq3000, and when I try to add more than 5 parameters per note, it doesn’t seem to work.”

        it is true that the emergence of synths with > 5 parameters per note is some way off, but the likelihood of that has just gone up a bit.

        the issue isn’t so much that there are more automation lanes. the issue is that the data in the automation lanes is semantically (not just by convention, but by definition) bound to specific notes. it’s not like current automation data which has a variable but generally weak connection to the notes. If you delete a note in an MPE-handling editor, then you should also delete any relevant MPE data. If you move a note, you need to move the MPE data. if you copy a note, you need to copy the relevant MPE data.
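        One way to picture that binding: store per-note expression inside the note object itself, with note-relative timestamps, so moving, copying, or deleting the note carries the data along for free. A hypothetical sketch (these data structures are my own illustration, not any actual DAW’s internals):

```python
# Sketch of "MPE data travels with the note": per-note expression
# events live inside the note object, so edit operations on the
# note automatically carry them along.

from dataclasses import dataclass, field, replace

@dataclass
class Note:
    start: float     # position in beats
    length: float
    pitch: int
    velocity: int
    # (time offset from note start, parameter name, value)
    expression: list = field(default_factory=list)

def move_note(note, delta_beats):
    # Offsets in `expression` are note-relative, so nothing else
    # needs to change: the bend/slide/pressure data moves too.
    return replace(note, start=note.start + delta_beats)
```

        Contrast that with free-floating CC lanes, where the editor has to guess which controller events “belong” to a note when you drag it somewhere else.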

        • max

          IC, thx for the insights.

        • rimwolf

          This is not a new DAW problem with MPE, as wind controller players (and other “expressive” monosynth players) have been aware for years. It’s another case of the traditional keyboard orientation influencing the technical/marketing decisions, viewing a synth pretty much as a piano with some tone controls. (I guess what’s novel is that there’s apt to be more demand from MPE-controller players than from windies.)

          The issue for soft synths is much smaller, particularly if they’re already handling poly pressure or multitimbrality. MPE is a “player” rather than “producer” oriented facility.

          • max

            the odd thing for a synth player now is that you’re used to being able to comfortably edit the **** out of everything. Well, this doesn’t work anymore; it seems like a bad joke considering it’s just some CC data and a little bit of channel splitting.

        • Tekknovator

          There are always chronological list views of MIDI input/output. Nothing changes for troubleshooting and debugging. I own a Linnstrument and the whole existing MIDI ecosystem is working flawlessly! MIDI-OX on Windows, MIDI Monitor on Mac, RTP-MIDI for networking, etc. all work great with MPE. Something strange? Log the data as usual, done. EDIT: What I want to say is, it still stays a serial protocol, so as long as you have time context, troubleshooting stays the same.

    • Tekknovator

      “…pick up MPE” is already done. MPE just glues it together into a standard. Check out how Bitwig and Cubase note expression allow you to edit the per-note data directly inside the piano roll. Both are years-old implementations. Logic has been able to deal with it for a while as well (just not as beautifully). More interesting is how MIDI-CI will make sequencers, synths, and controllers automatically configure themselves upon detection. This will, hopefully, remove the chore of setting MPE up before being able to play.