It’s a way of going truly off the grid – MIDI Tape Recorder captures and recalls performances as sample-accurate data, with a workflow like tape. It’s free for Mac and iOS now, from developer Geert Bevin / Uwyn.

Even with quantization turned off, most MIDI sequencers (hardware or software) align events to some kind of larger grid for editing purposes. That's honestly fine for a lot of our keyboard noodling, but it's another story once you start to record and play back performances from input devices capable of more detailed intonation and expression.

Enter MIDI Tape Recorder. For end users, it’s an AUv3 you can use with macOS and iOS (a MIDI FX plug-in), and it’s free and open source. But for the larger scene of musical instrument creation, it’s also an important proof-of-concept of a different way of doing things.

It works like this: you record a performance exactly as you play it. You record as you would a digital audio recording, but with actual stored events that can be edited or reproduced later. (MTR itself doesn't really edit directly yet, as it focuses on the looped recording part, but filtering or editing features are possible in the future.) Recorded data's timestamps are accurate down to the level of individual audio samples – at a 44,100 Hz sample rate, for instance, a far finer grid than the usual parts-per-quarter-note resolution of conventional sequencers. And that includes not just the note data itself, but crucially all the other expression and intonation you might play. Accuracy is limited only by the transport and by the device or application sending the data, and many USB devices and apps now offer greater precision than conventional cabled MIDI connections.
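To make the resolution difference concrete, here's a minimal sketch (the struct and function names are illustrative, not taken from MTR's actual code) comparing a sample-based timestamp grid to a tick-based PPQ grid:

```cpp
#include <cstdint>

// Hypothetical sample-accurate event record: position is stored as an
// absolute sample offset from the start of the recording, not as a tick.
struct MidiEvent {
    int64_t sampleOffset;            // e.g. 44,100 steps per second at 44.1 kHz
    uint8_t status, data1, data2;    // raw MIDI voice message bytes
};

// One sample at 44,100 Hz is ~0.023 ms of resolution.
double sampleResolutionMs(double sampleRate) {
    return 1000.0 / sampleRate;
}

// A 480 PPQ sequencer at 120 BPM has a tick of ~1.04 ms:
// one quarter note lasts 60000/bpm ms, divided into ppq ticks.
double tickResolutionMs(double ppq, double bpm) {
    return (60000.0 / bpm) / ppq;
}
```

Under those (typical) settings, the sample grid is roughly 46 times finer than the tick grid, which is what makes dense expression streams survive recording intact.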

That's a really dense stream of data, so the idea is to capture four independent tracks of voice messages, display what's happening in real time, and play them back.

For now, this captures voice messages, not system messages – but even with tuning schemes like MTS-ESP, that still means you can capture subtle bends and pitch as you play. (You'll need something else to handle system tuning messages.)

Geert has long been an advocate of MPE (MIDI Polyphonic Expression), even helping drive its wider adoption, as well as doing great stuff with apps and expressive controllers (including for Moog). So what's great about this – as with MPE itself – is that it turns a much-desired idea into a real tool folks can actually play.

And playing is really what you'll do here. Since that data is so dense, you won't micro-edit it the way you might conventional MIDI notes and CC. You'll instead do what you do with audio – play another take. Punch in and out. Overdub. Play until you get it right. It's a very human way of working, really, whatever your chops.

You get three dimensions of per-note expression – pitch bend, pressure, and timbre – across up to 15 member channels. That's a lot of power beyond audio, in that you can exactly reproduce a performance. I actually hope this helps drive MIDI effects processing on desktop hosts, since right now iOS hosts have far better support.
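For readers newer to MPE, a brief sketch of how those three dimensions map onto channels in the common "lower zone" layout (names here are illustrative; only the layout itself comes from the MPE spec):

```cpp
#include <cstdint>

// In an MPE lower zone, channel 1 is the master channel and channels 2-16
// are the 15 member channels; each sounding note gets its own member
// channel, carrying its own three expression dimensions independently.
struct MpeNote {
    uint8_t memberChannel;  // 2..16 in the lower zone
    uint8_t note;           // MIDI note number
    uint16_t pitchBend;     // 14-bit, center = 8192 (dimension 1: pitch)
    uint8_t pressure;       // channel aftertouch (dimension 2: pressure)
    uint8_t timbre;         // CC74 value (dimension 3: timbre / "slide")
};

// Pitch bend arrives as two 7-bit data bytes; combining them yields the
// 14-bit value, so each note's pitch can move in 16,384 steps.
uint16_t bend14(uint8_t lsb, uint8_t msb) {
    return static_cast<uint16_t>((msb << 7) | lsb);
}
```

Because each of those values can change continuously per note, the resulting stream is exactly the kind of dense data that benefits from sample-accurate capture.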

I'll just reproduce the full feature list. It's brand-new, so your mileage may vary – especially since MIDI FX AUv3 and Apple's Catalyst tech are still in a sort of teething phase. (I see there's an issue with Logic on Intel machines as I write this – but go grab it, test it, and see how it's working.)

It’s also all open source, and there’s some very readable C++ code in there. So I expect other developers may poke around and consider making functionality like this in their own apps or even trying their hand at making something similar for a platform of choice.

Check out everything on the project site, and then if you’ve got compatible Apple hardware, grab the free app for Mac / iPhone / iPad from the App Store:

Donations welcome (via PayPal).

And check out the code on GitHub:

Full features:

• Four independent tracks for recording MIDI channel voice messages

• Sample accurate MIDI recording and playback

• Real-time display of active recorded notes and other received messages

• MPE support

• Multi-level undo and redo

• Overdub recording

• Punch in and punch out recording for automated regional overdubbing

• Automated storage and recall of all recordings inside the AUv3 host project

• MIDI file import and export for the project or each individual track

• Repeated playback with start and stop locators

• AUv3 parameters for all controls

• Snap to beat option for positioning playhead and start/stop locators

• Detection of MPE configuration message (MCM) reception for each track

• Sending of the MCM at start of play or when pressing the track's MPE button

• Host transport and host tempo sync

• Clear all recordings or clear a single track

• Crop session to new duration

• Fully resizable UI

• Activity indicators for MIDI input and output on each track

• Optional tool tips for every operation

• Optional per-track record enable, input monitoring, and mute

• Four virtual MIDI cable inputs if the AUv3 host supports it

• Support for AUv3 user presets if the host supports it

• Optional routing of first virtual MIDI cable to all tracks

• Fully open-sourced under Creative Commons Attribution 4.0 International, an approved Free Culture License

Updated: now with video tutorial!

There’s a reason to mention intonation as a use case in addition to “expression”: