Composing music is not unlike programming – and either, at their best, can be expressive. In the early days of IT (before “IT” was even a term), many computer programmers came from a musical background. (And even early in the computer age, there was more call for software than symphonies – and more pay.)

But what if you could program music easily, using musical syntax in a programming language? That’s the question asked by languages like Velato. The commands actually aren’t as esoteric as you might expect; they include references to standard pitch and commands like “Change root note.” The language expresses notes, mapped to the alphabet, a bit like teaching the computer solfege. Using additional expressions, you can transform notes and generate musical materials.
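As a rough illustration of the concept only, a toy interpreter might map each note's interval from a root note to a command. The `COMMANDS` table below is invented for the example; it is not the real Velato command set.

```python
# Toy sketch of a Velato-style scheme: each command is chosen by the
# interval (in semitones) between a note and the current root note.
# The COMMANDS table is invented for illustration; it is NOT the real
# Velato command set.

COMMANDS = {
    0: "push",          # unison
    4: "add",           # major third
    7: "print",         # perfect fifth
    12: "change_root",  # octave up
}

def interpret(notes):
    """Read MIDI note numbers and emit the corresponding command names."""
    root = notes[0]
    program = []
    for note in notes[1:]:
        interval = (note - root) % 12
        if interval == 0 and note != root:
            interval = 12  # treat the octave as its own command
        command = COMMANDS.get(interval, "noop")
        program.append(command)
        if command == "change_root":
            root = note  # subsequent intervals are measured from here
    return program

# Middle C as the root, then E (major third) and G (perfect fifth):
print(interpret([60, 64, 67]))  # ['add', 'print']
```

The appeal is that the "source code" doubles as a melody: any sequence of notes parses to some program, however nonsensical.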

The results sound a bit like academic ragtime. And yes, they do sound as though they were generated by a computer. (Have a listen to a .MID file.)

For more on Velato:
Velato wiki page @ Esoteric Languages
A compiler built in .NET (Windows-only, though if you really wanted to, I imagine you could quickly port it to Mono or other environments)
An introduction [Rottytooth blog]

Creator Rottytooth is Daniel Temkin of New York. Along the same lines is Fugue, which specifies notes as intervals (oddly, the same way I learned atonal sightsinging, but that’s another story).

So, what use is all of this? Creating languages for music could be a first step to being able to write compositionally useful generative music algorithms. That could allow composers writing for games, installations, performance, or software to create interactive music that generates itself without sounding like a bunch of random notes. And having an elegant, musical language to do so could allow you to sketch ideas with just a few keystrokes.

In fact, I’d argue that sitting with a big, monolithic music editor, you might actually spend more time and effort than you would with a reduced language, once you learn it. I’m not sure these languages are mature enough to use yet, but the idea is fascinating. And who knows: maybe you’ll someday see this as a scripting option in the sequencer you already use.

Code Your Own Sequencer? Archaeopteryx Generates MIDI with Ruby

Thanks to Grant Michaels, via Twitter, for the tip. (Grant’s Twitter feed includes lots of other goodies, too.)

  • Waffle

    It looks like Velato is a language for telling your computer to do simple things, like arithmetic and printing text to the screen. It just so happens that the source code the language takes is in MIDI. So it converts a stream of notes into text. Kind of a bummer, as I was hoping it was the other way around.

  • Max

    I don't know if these fall into the same category as Velato – but what about "supersets" of existing languages? I'm thinking of athenaCL (based on Python) and Symbolic Composer (based on Lisp), for example – both rather complex "music-language" environments for high-level composing.

  • Yeah, 'Symbolic Composer' was the first thing that came to my mind too (I own/use it). Not only is the musical 'spec'ing' extensive, but it allows a multitude of 'transform mapping' possibilities, i.e. mapping one kind of data to another, in this case deriving musical notes and durations from more esoteric sources (see the DNA transform in the demo as one example).

  • Most composers in the "contemporary classical" tradition use Lisp-based languages for computer-assisted composition tasks. Environments like Ircam's Open Music, Mikael Laurson and others' Patchwork, and its successor PWGL aim at presenting a musical interface to Lisp-based algorithms. How does Velato compare to this kind of program?

  • I've played a bit with a few languages out there. Though this is the first I've heard of Velato. Definitely looks like something I'd like to play with.

    I've also experimented with writing my own custom musical micro languages. For example, I came up with a simple text-based drum sequencer that converts a string into a drum pattern. I've implemented it in Csound and as a MIDI file generator in Perl.


    One thing I love about the micro-language approach is that it is fairly easy to prototype a custom syntax for a specific task. Since the syntax for these micro languages is usually limited, modifying it to work with other systems is straightforward.

    Great post, btw.
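A drum micro language like the one described above can be sketched in a few lines. This is a hypothetical Python reconstruction, not the commenter's actual Csound or Perl code; it assumes an "x"/"."-style pattern string and the General MIDI drum map (36 = kick, 38 = snare, 42 = closed hi-hat).

```python
# Hypothetical sketch of a text-based drum micro language: each character
# in the pattern string is one sixteenth-note step, "x" is a hit and
# anything else is a rest. Note numbers follow the General MIDI drum map.

DRUMS = {"kick": 36, "snare": 38, "hat": 42}  # GM percussion key numbers

def parse_pattern(instrument, pattern, step=0.25):
    """Turn an 'x...x...' string into a list of (beat, midi_note) events."""
    note = DRUMS[instrument]
    return [(i * step, note) for i, ch in enumerate(pattern) if ch == "x"]

# Kick on beats 1 and 2, hats on every eighth note:
events = sorted(parse_pattern("kick", "x...x...") + parse_pattern("hat", "x.x.x.x."))
print(events)
```

The event list could then be fed to any backend, such as a MIDI file writer or a Csound score generator.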

  • Wait… what about the concept of the music box? A pre-printed form was used to feed a machine that spat out notes; the logic involved in creating those notes is programming. The reverse, assigning notes to the lines and spaces of a treble or bass staff and letting the sequence create code, would have to follow a logic. The actual music may not be logical, but I would imagine that Mozart's piece where you read the notes one way and then read them in the opposite direction is similar, or at least a starting point.

  • Wow, okay, lots to think about just in these few comments.

    I can give a short answer on the "supersets" question — YES. In fact, I expect that will really be the future, partly because people may want to get beyond MIDI as the way of describing everything (useful as it is in some ways). And the choice of specific language (LISP, Lua, Perl, Java, Groovy, what have you) is probably not as important as how people use it musically — that is, what are you going to do compositionally with that power? 🙂 The specific dialect is more a matter of taste, but we do all speak the same language musically, so lots of potential for cross-pollination.

    I'm personally getting really interested in the combination of Groovy with Java, so I'll be investigating that route. (See also, JFugue, built in Java.)

    But yeah, the big question is, okay, how do I write generative music?

  • js

    Well, taking it the other way: if you turn all/most musical parameters into numbers [MIDI], you can use pretty much any programming language.

    The way people think about music is different too, so instead of making a universal programming language for music, it might be more productive to turn musical data into integers so that it can be manipulated more easily by the programming language of choice.

    Although this means that composers might need to think a bit differently about sound and notation than Beethoven did [which is not a bad thing in my book].
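To make that suggestion concrete with a small sketch: once pitches are plain MIDI note numbers (middle C is 60 in the MIDI spec), common compositional transforms reduce to integer arithmetic in any general-purpose language.

```python
# With pitches as MIDI note numbers (middle C = 60), common transforms
# are plain integer arithmetic, in whatever language you prefer.

c_major = [60, 64, 67]                    # C, E, G

transposed = [n + 2 for n in c_major]     # up a whole step
inverted = [2 * 60 - n for n in c_major]  # mirrored around middle C

print(transposed)  # [62, 66, 69]
print(inverted)    # [60, 56, 53]
```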

  • poorsod

    It's a really interesting idea, though from the MIDI example, it really doesn't work at all: there are no real themes, little sense of metre or tonality, and it sounds almost unplayable.

    Humans will probably use technology to help them create for the rest of time, but it has a long way to go before a computer can actually take over the part of the creator.

  • @Waffle and co.: If you want to go from a music syntax to MIDI, you might be interested in Cosy.

    I haven't had much time to work on this in recent weeks, but sooner or later I'm going to try to pull it together for a real release. I'm planning to have it run as a standalone app and an add-on for Max/MSP (and of course Max For Live when that becomes available). There are some nifty features not shown in the web demo, like sequencing OSC messages and embedding arbitrary code inside the sequence.

    It's all open source Ruby code if anyone wants to hack on it.

  • stk

    <blockquote cite="Peter Kirn">Composing music is not unlike programming – and either, at their best, can be expressive.</blockquote>

    I kind of only half agree with this, being a musician/composer and programmer myself.

    The major difference being, of course, that there are such things as bugs in regular programming – nine times out of ten, a buggy piece of code will just not work.

    A "buggy" piece of music, however – well, that's far less cut and dry.

  • Mark

    Here at my school a lot of teachers are fond of LilyPond, although that's more like a very accurate notation program.

  • mzo

    Depends on what you are programming. If you are programming as art, bugs can be good. For example, something like Processing in some respects welcomes buggy code as a means to experiment, much as music does. A programming language for music that takes the same mentality as Processing might be a nice step forward.

  • WhiteNoise

    Cosy is pretty nice. Very easy to understand.

  • stk said: 'The major difference being, of course, that there are such things as bugs in regular programming – nine times out of ten, a buggy piece of code will just not work.

    A “buggy” piece of music, however – well, that’s far less cut and dry.'

    I like that comparison 🙂

  • Max

    For me the interesting point of all these examples is the question of maturity and the question of "what use is all of this?". Some of these systems have existed for years and are rather mature; they're used regularly by (some) composers. So there's already a history of this approach to take into account when speculating about its usefulness.

    My impression is that it has evolved into a tool used by (some) composers to assist in composition. It's like "computer-aided design": the tools don't paint/compose themselves but are used by the artists in the process of creation. I'm not sure we'll see really interesting, completely "self-generated" music.

  • Not to ignore the point of the thread, but the stacked 4ths in the photo got me going on a nice Herbie Hancock-ish modal thang as I was reading the post.

    Who do I send the royalties to, you or Quinn? Actually, as it's a CC license, I know the answer already. 😉

  • Alex

    Peter, just for your reference, the headline says Verlato.

  • divbyzero

    Waffle: You want a language that converts a stream of text into notes? There are actually plenty of options, providing different levels of control and complexity. Things like Csound, ChucK, or Common Music, with assembler-, Java-, and Lisp-like syntaxes respectively, are powerful enough to do sound design, but can be used to lay out whole songs if you have the patience. Things like ABC or my own Mish are focused on the notes, more like textual sequencers, so they make it easier to see the big picture.

  • You just got my mind spinning about the live composition possibilities of musical algorithms in video games. I can't think of any reason it couldn't be done now. Does any game right now have anything like that?

  • Michael

    I hadn't heard of Cosy and Mish before – thanks for the references! They look like interesting variants on the ABC style of musical text entry, but with different twists that serve different expressive purposes.

    If a program is generating music more for human than machine performance, it will probably find that MusicXML is a better format than MIDI. MusicXML is a language for representing common Western music notation, and is widely supported by the leading notation programs. The more that machine performance is the goal, the better MIDI fits.

    Several composition toolkits support MusicXML export now, including JMSL, OpenMusic, and Synfire Pro. JFugue also supports it, and several programs offer some level of MusicXML support for ABC.
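To illustrate the contrast with a small sketch: the same quarter-note middle C is essentially a bare integer in MIDI terms, but explicit notation (step, octave, note type) in MusicXML. The fragment below builds a minimal MusicXML `<note>` element with Python's standard library.

```python
# The same quarter-note middle C two ways: as MIDI-style data it is just
# an integer and a duration; as MusicXML it is explicit notation that a
# score editor can engrave. Built with Python's standard library.

import xml.etree.ElementTree as ET

midi_event = (60, 1.0)  # MIDI view: (note number, duration in beats)

note = ET.Element("note")
pitch = ET.SubElement(note, "pitch")
ET.SubElement(pitch, "step").text = "C"
ET.SubElement(pitch, "octave").text = "4"
ET.SubElement(note, "duration").text = "1"
ET.SubElement(note, "type").text = "quarter"

print(ET.tostring(note, encoding="unicode"))
```

A full MusicXML document needs the surrounding part and measure structure, but the note element alone shows how much more the notation view says about how the music should look on the page.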
