It’s one of the best-known electronic sounds ever – perhaps the best electronic sound branding in history. It debuted in 1983 – right before Star Wars: Episode VI – Return of the Jedi, no less.

But it seems the THX “Deep Note” was due for an upgrade. And that’s what it got last week. THX called upon the original creator of Deep Note, Dr. James ‘Andy’ Moorer, to remake his legendary sound design for modern theater audio technology.

Here’s a look at that history, and how far it’s come.

In the meantime, you can watch the trailer here – though I think you’ll really want a THX-certified theater for this, obviously (through stereo headphones, and whatever they’ve used to encode it here, it isn’t really distinguishable from the original):
http://www.thx.com/consumer/movies/120832135

This is actually the third major version of the Deep Note trailer. “Wings” was the first, heralding the arrival of Lucasfilm’s theater sound certification process. The one that probably springs to mind is “Broadway”, which features a blue frame on the screen. Less is more: that elegant rectangle plays against the holy $#*(& my face is about to melt off!!$#($*& effect of the sound. See the brilliant authorized Simpsons parody:

Tiny Toons had fun with it, too (in the domain of parody, rather than an exact copy):

The sound itself is trademarked – and using it unaltered, without permission, can land you in hot water. (Dr. Dre lost a suit brought by Lucasfilm when he used it without permission on his album 2001.)

But the history of the sound, and of Dr. Moorer, says a lot about the massive pace of creative technology in the past decades.

Dr. Moorer has four patents to his name and a series of lives in technology.

In the 70s, he co-founded and co-directed Stanford’s CCRMA research center, which continues to give birth to leading figures in music technology. (Today, superstars like doctoral student Holly Herndon study there, with teachers like Ge Wang, who managed both to invent the ChucK programming language and to reimagine the phone as an instrument with the hugely successful Smule.)

Dr. Moorer was also an advisor to Paris’ IRCAM, where he worked on speech analysis and synthesis – for a ballet company.

And he worked in research and development at Lucasfilm’s The Droid Works. There, he designed something called the Audio Signal Processor, the mainframe on which the Deep Note sound would be created – alongside pioneering sound design production techniques for Jedi, Temple of Doom, and more. That machine would eventually be sold for scrap, but its legacy lives on.

In fact, the ASP and the larger “SoundDroid” system around it read like a template for everything that would happen in audio production tools since. Listen to how it’s described on Wikipedia: “Complete with a trackball, touch-sensitive displays, moving faders, and a jog-shuttle wheel, the SoundDroid included programs for sound synthesis, digital reverberation, recording, editing and mixing.” Yes, touch displays, like the iPad. Hardware controls, like advanced studio controllers – years before they would become available for computers. Digital processing. Sure, we take this stuff for granted, but in the 80s, it had to be built from scratch.

And he worked with Steve Jobs at NeXT (which also would pioneer sound tech that would reach the masses later – the forerunner of today’s Max/MSP, for instance, ran exclusively on a NeXT machine).

Accordingly, Dr. Moorer has an Emmy Award and an Oscar.

And now he’s Principal Scientist at Adobe Systems.

And he repairs old tube radios and plays banjo, says Music thing.

He is the most interesting digital audio engineer in the world.

He told Music thing the full story of the THX sound, built on a massive mainframe – no DSP chips could be had at the time.

As he tells it:

“I was asked by the producer of the logo piece to do the sound. He said he wanted ‘something that comes out of nowhere and gets really, really big!’ I allowed as to how I figured I could do something like that.

“I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the ‘score’ for the piece.

“The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form ‘set frequency of oscillator X to Y Hertz’.

“The oscillators were not simple – they had 1-pole smoothers on both amplitude and frequency. At the beginning, they form a cluster from 200 to 400 Hz. I randomly assigned and poked the frequencies so they drifted up and down in that range. At a certain time (where the producer assured me that the THX logo would start to come into view), I jammed the frequencies of the final chord into the smoothers and set the smoothing time for the time that I was told it would take for the logo to completely materialize on the screen. At the time the logo was supposed to be in full view, I set the smoothing times down to very low values so the frequencies would converge to the frequencies of the big chord (which had been typed in by hand – based on a 150-Hz root), but not converge so precisely that I would lose all the beats between oscillators. All followed by the fade-out. It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP.”
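
Moorer’s own code has never been published, so treat this as an illustration only: a minimal sketch of the recipe he describes – a cluster of about 30 oscillators drifting between 200 and 400 Hz, 1-pole smoothers on frequency, and target pitches “jammed” to a chord built on a 150-Hz root, detuned just enough to keep the beats. It’s written in Python with NumPy rather than 20,000 lines of C driving an ASP, and the durations, chord voicing, harmonic weights, and smoothing coefficients are my guesses, not Moorer’s values:

import numpy as np
import wave

SR = 44100                      # sample rate
DUR = 9.0                       # total length in seconds (my choice)
N_OSC = 30                      # "about 30 oscillators", per the quote
rng = np.random.default_rng()   # unseeded: each run is a different "performance"
n = int(SR * DUR)
i_jam = int(3.0 * SR)           # when the chord is "jammed" into the smoothers
i_lock = int(6.0 * SR)          # when smoothing times drop so pitches converge
# Chord on a 150 Hz root, spread over several octaves (the voicing is a guess)
chord = 150.0 * np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])
targets = chord[rng.integers(0, len(chord), N_OSC)]
targets *= 1.0 + rng.uniform(-0.004, 0.004, N_OSC)  # detune: don't lose the beats
out = np.zeros(n)
for k in range(N_OSC):
    freq = np.empty(n)
    f = rng.uniform(200.0, 400.0)       # start inside the 200-400 Hz cluster
    target, smooth = f, 2e-4            # 1-pole smoothing coefficient (drift phase)
    for i in range(n):
        if i < i_jam and i % int(0.05 * SR) == 0:
            target = rng.uniform(200.0, 400.0)   # random drift within the cluster
        elif i == i_jam:
            target, smooth = targets[k], 1.5e-5  # slow glide toward the chord
        elif i == i_lock:
            smooth = 5e-4                        # tighten: converge (almost) fully
        f += smooth * (target - f)               # the 1-pole smoother on frequency
        freq[i] = f
    phase = 2.0 * np.pi * np.cumsum(freq) / SR
    # Stand-in for the digitized cello tone: 12 harmonics at 1/h amplitudes
    out += sum(np.sin(h * phase) / h for h in range(1, 13)) / N_OSC
fade = np.minimum(np.arange(n) / (0.5 * SR), (n - np.arange(n)) / (2.0 * SR))
out *= np.minimum(1.0, fade)            # fade in, fade out
out /= np.max(np.abs(out))
with wave.open("deep_note_sketch.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes((out * 32767.0).astype(np.int16).tobytes())

Because the random generator isn’t seeded, every run renders a slightly different take – which, as it turns out, is true to the original, too.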

For more, check out the 2005 Music thing story:
TINY MUSIC MAKERS: Pt 3: The THX Sound

The other interesting thing about the story told to Music thing is that the piece is essentially a generative performance. Random numbers mean each time the code is run, it “performs” a different version. So some of the recognizable features of the THX recording are very much the outcome of a particular performance – so much so that, when the original recording was temporarily lost and the program re-run, listeners complained that it didn’t sound the same.

I’m going to try to get hold of Dr. Moorer to find out how the new piece was created, as press materials (naturally) fail to go into detail. But part of the reason you’ll want to hear it in a theater is the mix: there are three different lengths (30, 45, and 60 seconds), each of them made with stereo, 5.1, 7.1, and Atmos mixes.

And yes, I definitely hear a similarity to Xenakis’ Metastasis. In fact, the technique described above in code is similar to the overlaid glissandi in the Xenakis score – and perception will do the rest.

Perception itself is interesting – particularly the fact that the design of the sound, not its actual amplitude, is what gives it its power. (Lesson to learn for all of us, there.) Even with the sound turned down, it sounds loud; sound designer Gary Rydstrom has said that this spectral saturation means it “just feels loud.”

It’s also been a model for recreation – a kind of perfect homework assignment for sound design coders. For instance:

Recreating THX’s Deep Note in JavaScript with the Web Audio API

On his blog Earslap, Batuhan Bozkurt has written a masterful recreation of the Deep Note sound in the coding environment SuperCollider. Whether it sounds exactly like the original recording isn’t so important, I think – working through the basic technique of reproducing it opens up approaches you could expand into other, more personal expressions.

This is a great article and well worth reading:

Recreating the THX Deep Note [Earslap]

It has Dr. Moorer’s seal of approval; he writes in comments: “Thanks for the trip down memory lane, and congratulations for a job well done. I really wish I could share the details with everyone. Maybe someday! Let 1024 blossoms bloom . . .”

And it’s also notable that the SuperCollider language can run comfortably on a $25 Raspberry Pi – no Lucas mainframes in sight. Coding, meanwhile, is opening up to countless young men and women around the world, even in typical music classes.

Think about that: what was once the domain of a tiny handful of people in Hollywood is now something you can run on a $25 piece of hardware, something you can learn with more ease than finding a violin teacher. Indeed, only education and literacy remain as the final, if significant, barriers. With that knowledge and basic technology access, the most advanced and unique computer music technique of my own childhood is now nearly as accessible worldwide as opening your mouth and singing. This says a lot about the power of access to ideas in the modern world – and it makes the gaps that remain – in gender, in economic status, and in geography – even less excusable.