The first law of musical robotics: rock hard.

We’ve seen plenty of robotic musical experiments, but finding a robot that can seriously shred is another matter altogether. Meet the robotic string instrument, Poly-tangent, Automatic (multi-) Monochord – let’s just call her PAM. Built by Expressive Machines Musical Instruments, a group of University of Virginia PhD students and composers, PAM is capable of creating raucous musical performances like the one above, by composer and EMMI member Steven Kemper.

Musical robotics is cool, but it also hasn’t evolved much technologically in fifty years. It’s gotten cheaper and more accessible, but the fundamental design hasn’t changed – and that accessibility hasn’t translated into widespread use.

Now, the EMMI crew, in anticipation of a residency at Amsterdam’s famed STEIM research center, are hoping to take robotic music to the next level. MARIE is a project to put robotic music in a form that you can easily take on the road. They want to make the project open, so others can benefit, complete with schematics and code.

There are several aspects that make the MARIE project special beyond just road-ready design. The new instruments are intended to be more modular and controllable, to make the robotics as flexible as classic MIDI and analog modular gear has been. They also benefit from acoustic sound creation, controlling columns of air and physical strings instead of just digital or electrical models as on synths.

Acoustic design is at the heart of the EMMI robotic instruments – part of what makes robotics a compelling medium for new, digitally-controlled soundmakers. All photos courtesy EMMI.

To fund their vision, the EMMI crew have started a Kickstarter project. You get something in return for your investment, including training in robotics and in good, old-fashioned instruments like the sax and bassoon. (That should put to rest any fears that these guys want a robot-only musical future.) Here’s how they describe their work:

MARIE are a set of virtuosic and expressive music robots that are portable, reliable, user-friendly, and fit within the dimension/weight limits for international checked baggage. In other words, these are music robots for touring musicians. The hope of EMMI and the EAR Duo is that the usability and portability of MARIE and similar music robots will finally push this powerful technology out of research labs and onto stages around the world. Within this aim, the entire project will be publicly documented online and the source code and hardware diagrams all provided as public knowledge for other enterprising musicians and technicians to construct similar robots.

EMMI-ers, I hope you keep CDM posted as you go. It looks like a very worthy project indeed.

Fundraiser for MARIE, open music robots for touring musicians [STEIMblog]

Expressive Machines

MARIE: a virtuosic band of robots made by and for musicians [Kickstarter]

  • oiaohm

    This has the same problem synthesizers have:
    the human vs. machine problem. A human will not hit the same location on the guitar 100 percent of the time, no matter how well trained they are.
    Yes, human playing has a percentage of fuzzy error. Not enough error to ruin it, but enough that each time they play it's different in different places.
    The real challenge in making a machine play a guitar should be making the performance more human-like. For tuning, no fuzzy error would be good – but that would require a rig change to take an unaltered guitar.
    Sorry to be critical. I just see this as an item halfway in the middle, not really suiting either goal well.

  • Cheech

    Maybe with a violin, yes, but with a guitar you have what are called 'frets', which means that if you press the string behind a fret line, the same sound is produced no matter where your finger is behind that line. As long as it doesn't go behind the next fret line, that is.

  • Matt Picone

    What HORRIBLE tone! Someone get these people an Axe-Fx already. Also, I think Rusty Cooley is already faster 🙂

  • richard

    cheech, a guitar has frets but there are micro-differences in tone and pitch resulting from where a string is fingered (immediately behind the fret compared to all the way back immediately in front of the previous fret). in addition, there are more differences introduced by intentional or unconscious micro-bending of strings. these could be reproduced by robots but would need additional programming and possibly some ai routines.

  • Frets do normalize the note that's played if you don't bend it, but there is also the timing with which each note is struck. What people think of as funky or soulful often has to do with hitting the note not quite on the beat. That being said, I don't think any of this is totally beyond being taught to a robotic performer; it's just going to take a while to get it done well. Adding a little randomness to the performance is easy, adding soul will take a little more work – but remember, this field is in its infancy.

    Now all that being said you should also consider that the purpose of robotic performers might not be to simulate human performance but rather to do things that humans could never do. Consider the work of Conlon Nancarrow who wrote pieces for the player piano that are pretty much impossible for a human to play. This is just the tip of the iceberg.
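The "adding a little randomness" idea from the comment above is easy to sketch in a few lines of Python. This is a minimal illustration, not anything from the EMMI code; the function name and jitter magnitudes are invented for the example:

```python
import random

def humanize(events, timing_jitter=0.015, velocity_jitter=8, seed=None):
    """Nudge note onsets and velocities by small random amounts.

    events: list of (onset_seconds, velocity) pairs, velocity in 0-127.
    The jitter magnitudes are illustrative guesses, not measured values
    from real performances.
    """
    rng = random.Random(seed)
    humanized = []
    for onset, velocity in events:
        onset += rng.uniform(-timing_jitter, timing_jitter)
        velocity += rng.randint(-velocity_jitter, velocity_jitter)
        # Clamp to sane ranges: no negative time, velocity stays 1-127.
        humanized.append((max(0.0, onset), min(127, max(1, velocity))))
    return humanized
```

Pure jitter like this is exactly the easy part; the slower, cyclical variation that reads as "soul" would need structured modulation layered on top of it.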

  • bob

    richard, having played guitar for 23 years, i find your comment to be completely ridiculous

  • Mike & Elizabeth

    Sorry, PAM has no soul (yet)…  If you watch what a classic blues performer can do, you'll notice there is quite a bit more that can go into playing.  How far to pull the string, wiggling fingers, moving the instrument around to achieve sound contours, sliding fingers and so on.

  • What's the purpose of having robots play acoustic instruments? Is it just because? I'm not trying to play devil's advocate (maybe a teeny tiny bit 😉 ), I'm actually curious as to what the objective is in getting robots to be musicians

  • J. Phoenix

    I have very much the same feeling I had when seeing Pat Metheny's robots…that this opens up fascinating possibilities for fusing acoustic and digital music together.

    The commentary so far reminds me of all the arguments I've read about player pianos and drum machines over time. Everything and nothing change.

    Congrats on being slashdotted!

  • I'm not sure it's the best use of robotics to imitate a shredding electric guitar player. It's ear-catching, no doubt, but it seems to me the point of using robotics ought to be so that we can do things human players CAN'T do.

  • Ethan

    Jeremy Abel: why didn't you make friends with an engineering student? At least show one your schematics and I bet you could have gotten somebody interested enough to take them in and help make it happen.

  • Whoa. nice black metal guitar tone.

    The second video degraded weirdly into a sales pitch, though I have to admit that I lost interest before that.

  • stk

    I have a passing interest in robotic players, will be interesting to see how the future pans out. Robotics + Japanese "virtual" pop stars?..

    Still, man or machine, (musical) crap in == crap out.

  • jaklumen

    Okay, so why don’t we have some comments from electronic/synth artists yet? I mean, c’mon, there’s always been a problem with replicating acoustic instruments on analog and digital hardware, especially stringed ones.

    “The new instruments… more modular and controllable… They also benefit from acoustic sound creation…”

    Those should be KEY phrases for anyone who’s worked with electronic instruments over the past five decades or so (yes, including the theremin, and of course, Moog instruments). This is a bridge of sorts between analog and digital, so analog enthusiasts SHOULD be getting excited (have Vince Clarke and Wendy Carlos read this yet?).

    As before, enterprising musicians will probably use these projects to create backing tracks, and still rely on their human capabilities (and fully acoustic instruments) for the solos. It should augment what they’ve been doing for years already, not replace it.

  • why not make it to just play guitar hero instead, that would be sick

  • Back in college, it was my goal for a while to create a robotic guitarist that could play any regular electric guitar at inhuman speeds. Each string had its own picking mechanism, as seen here:

    I had it all mocked up with about 50 pages of documentation, designs, and CAD drawings, but unfortunately they wouldn’t let me use the CNC machines because I was a design student, not an engineer. And while I’m decent at machining, the tolerances for error were ridiculously small, so a CNC would have been necessary to finish it in any reasonable amount of time. Hopefully someone with more resources can do something like this. I’ve always wanted to see a guitar play at speeds similar to Conlon Nancarrow’s player piano pieces.

  • Alas, Pam sounds mechanical. There is something warm and lively that is missing. I am a cellist and my son is a flesh and blood shredder…he plays death metal, which is driven by a passion for precision and machine-like delivery. Nevertheless, it is rich with a human element: the little personal tweaks driven by the emotion of the moment – vibrato, string bending, sustains, and improvisations arising out of the passion of playing and interacting with an audience. Of course this is a great achievement and I applaud the forward-thinking efforts of PAM's creators.

  • Ethan: believe me, I was friends with everyone in the robotics club at school, but I wasn’t given much budgetary support because this thing didn’t have wheels and my goal wasn’t to have it work autonomously, but rather for it to be a tool – like a player piano, only a player guitar. My own definition of robotics includes that in order for something to be robotic, it needs to have some kind of artificial intelligence. We didn’t have much of a budget, and eventually my design studies took precedence. If anyone is interested, I can try to dig the docs up.

  • Andres

    I think the most important thing in a musical performance is what happens between the notes, between the sounds.
    The tiny changes that are not measurable – and not only in frequency (not to continue the guitar discussion), but also in time, timbre, and dynamics – are what make a performance and music make sense to me. And this is just a human phenomenon.
    Therefore I believe the remaining options are realtime processing, CV, gesture recognition, etcetera.
    (sorry for my poor English)

  • Russ

    Very impressive first effort but it sounds more like a keyboard sampler being played than a real guitar. Plus it needs a few other dimensions of articulation: changing pick angle & location along the string, bridge & finger vibrato, and bending. Also, the actuators are very binary (on/off) when they need to be continuous. It would be helpful if it had sexy long hair too…

  • Mie

    bob, on acoustic guitar this pitch change is even more noticeable. Especially on bad ones – better guitars seem to have a better response to pressure.

    When you’re pressing a string against a fret, you’re actually bending it. Pressing harder, you get a higher pitch than pressing lightly. Good players have more consistent pressure and get a more in-tune sound.

    Take a flexible tape measure, stretch it between two chairs, and measure how much longer it must be to touch the ground. Very basic geometry. Also, your fingertip isn’t solid – its size varies a lot depending on pressure.

    As an engineer and guitar player, I find it very interesting how and why even the smallest differences change the pitch and tonality of the sound.
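The tape-measure geometry in the comment above can be put into rough numbers. A back-of-the-envelope sketch in Python – the scale length, tension, modulus, and string gauge are all assumed, typical-ish values for a plain steel string, not measurements of any real instrument:

```python
import math

def cents_sharp(scale_m, press_depth_m, tension_n, youngs_pa, diameter_m):
    """Estimate how many cents a string goes sharp when pressed down
    by press_depth_m at its midpoint (crude two-segment model)."""
    half = scale_m / 2.0
    # Pressing bends the string into two straight segments,
    # stretching it slightly beyond its resting length.
    stretched = 2.0 * math.sqrt(half ** 2 + press_depth_m ** 2)
    strain = (stretched - scale_m) / scale_m
    area = math.pi * (diameter_m / 2.0) ** 2  # string cross-section
    # Extra tension from the stretch (Hooke's law for the string).
    new_tension = tension_n + youngs_pa * area * strain
    # Frequency scales with sqrt(tension); 1200 cents per octave.
    return 1200.0 * math.log2(math.sqrt(new_tension / tension_n))

# Assumed values: 648 mm scale, 2 mm press depth, 70 N tension,
# 200 GPa steel, 0.25 mm string diameter.
print(round(cents_sharp(0.648, 0.002, 70.0, 200e9, 0.00025), 1))
```

With these assumed numbers the model comes out at a couple of cents for a 2 mm press – small but nonzero, which is the commenters' point about fretting pressure audibly shifting pitch.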

  • Ben L

    I think it would be quite interesting for the work exhibited here to be combined with some work being done at Edinburgh University on having computers use machine learning to compose music:

  • I agree with Carolyn here!

    One thing though: please hook this thing up to a decent amplifier to give it a nice tone (Engl, Mesa Boogie, Peavey 5150 etc.)

  • Christine

    This is really cool! The robot may not be perfect, but it will be. People didn't want to believe that one day, a computer would outperform humans in chess. They do now. One day, a robot will outperform humans in playing the guitar.

  • And O: I didn't hear anything a human cannot do…

  • fel0nious

    I think playing more 'humanized' music, or 'tougher' styles like soulful or funky, could merely be thought of in a different way. It's interesting that we have yet to really quantify, musically, the style in which something is played, even though it would certainly be possible. (Granted, perhaps that adds to the mystery and excitement of a musician's 'individual' take.) To me, the meta-rhythms which make up those styles and contain "not enough error to ruin it but enough that each time they play it's different in different places" really don't sound out of place, because the 'errors' still result in a cyclical pattern – but over a longer time signature than that of the explicit notes on the sheet music. I think the process of 'adding randomness' is possibly much simpler than we make it out to be, as we tend to attempt to solve those kinds of problems in a 'brute force' way – through explicit mimicking of human behavior, which always tends to sound flat and boring.

  • Tim

    Is that supposed to be entertaining? How about a robot that will eat food for me or a robot that will dress me in the morning?

  • I'm excited by this, but I'd like to see totally new instruments using real acoustic sound generation and robotics, not solenoids on a centuries-old instrument. A human will always be superior to a robot there, because the tool was originally made for a human player. Some kind of box with all kinds of interesting things to be plucked or hit by a robotic part, totally controllable and configurable, would be killer.

  • Sorry to spoil anyone's picture, but the biggest variation in a human's result when fretting a note on guitar is the amount of pressure the string is pushed down with. Try it (if you have a guitar) – you can change a note by over a semitone just by pushing the string down harder or lighter between frets.

    Aside from that, strumming and picking velocity, string-bending/vibrato, as well as rhythmic elements to all "dimensions" of playing are also significant sources of human "feel/style" that can't be reproduced by machines – well, except via recordings 🙂

  • I've no beef with explorations in music technology, but every time I see a 'musical robot' the urge to rant about people needing to play music together and the value of privileging live performance becomes irresistible. I'd say, admittedly with doubt in my heart, that the pleasure of playing with another person is music's driving heart – a spirit which can infuse great composition, performance, and recording. In that sense, musical robots should be an instrument, or even a physical avatar for some virtual, human networked presence. /rant

  • Charles Baker

    I would hope people keep in mind the early stage in the development of robotics that we are still at:
    the equivalent in digital music synthesis would be back in the early 60s at Max Mathews' Bell lab, with the "wonderful" sounds of "Daisy" and the early Risset sounds. (hmmmm)
    Of course these early experiments are weak, musically; BUT they have developed into a quite rich and expressive musical field of digital synthesis. Please give robotics a similar time to develop:
    robots might be 'old school sci-fi', but their reality is *just now* beginning to appear in the 21st century. I hope I will live to see some of the wonders to come.

  • FreakWithoutACause

    Considering how difficult it is for many live musicians to get a reliable live drum, the drum section of the Kickstarter video looks more promising than the guitar stuff that's sparked the debate detailed above. But, in an effort to get the conversation BACK on track, consider this: Frank Zappa once wrote "many musicians are convinced that in order to get The Blow Job after the show, they have to play LEAD GUITAR." Could PAM be designed to get the programmer laid (for once 😉)

  • Peter

    I realise I'm late in the game with this comment.

    It seems to me that machines playing acoustic instruments are interesting mostly in the context of generative music; for example, a human performer may be controlling a compositional process in realtime while the results are realised on an actual acoustic instrument; the human element is making high level decisions about the musical language, live, free from the mechanical burden of articulation.  This could include precise algorithmic control of parameters like – in the case of guitar – pick position / angle / material (and potentially countless other things), allowing the composer / performer to work with an intricate play on the varieties of timbre that the instrument can produce.

    It doesn't mean that human performers become redundant… Anyway, I think it is also interesting the way projects like this encourage us to challenge ourselves, rather than being complacent about how 'special' we are as humans…