Not available in stores: the custom touchscreen solution, running an original sampler, that turns Hans Zimmer’s musical ideas into reality. Mark Wherry is the person who made it all possible.

Computer innovator Alan Kay famously said, “The best way to predict the future is to invent it.” Mark Wherry is doing as good a job as anyone of inventing that technology. Working closely with Hollywood’s leading maestro Hans Zimmer, and powering scores from the latest Batman films to Inception, Wherry really does invent instruments in order to invent sounds. New samplers, new touchscreens, and new rigs all have to come together just to keep up with the feverish sound design demands of film and game titles. And with sophisticated surround delivery, at a time when studio veterans complain about the loss of “fidelity,” these sounds get heard more clearly than anything in the history of recording.

And yes, he does all of this with his own code, and using big Windows touchscreens – no iPads in sight.

Our own Marsha Vdovin talks to Mark about his work and career, in a way I think will be inspiring to budding technologists and musical dreamers alike, whether you’re trying to break into the industry or to find a breakthrough new instrument for your own music. -Ed.

CDM: What exactly is your position there at Remote Control Productions?

Mark: I have a rather grandiose job title: Director of Music Technology. That’s meant many things over the years, but what it means at the moment is developing our own sampler, touch screen software, networked audio and MIDI systems, and all these kinds of toys in the technological realm to assist in the creative workflow.

Wow, that’s a great position to be in. How did you get into this job?

Well, it was funny: most people are interviewed by their prospective boss for the job, but I sort of did the opposite. I was working for Sound on Sound [magazine] in England and I did an interview with Hans [Zimmer] back in 2002. I was also working on a Cubase book at the time and just thought, since he was probably the world’s most prolific Cubase user, I’d try e-mailing him to see if he’d be interested in writing the foreword. That was just around the time when Cubase SX had come out, and he said he hadn’t really had a chance to play with it that much, but it sounded like I knew what I was doing, so maybe I could come out and show it to him. So I did, and I guess we must have got on okay. A few months later, I ended up moving over full-time to work with him, and, of course, once I was here I never had time to actually finish the Cubase book.

Can you describe the systems there and how you’ve worked in the custom software?

The main sequencer that Hans uses is Cubase and has been for the last twenty years or so. We’re mostly Windows-based now, which I think people often find surprising. All the samplers are Windows. The only Macs we really use are for running Pro Tools, and that’s more of a legacy thing. I think it’d be interesting to see if we could go to Windows for Pro Tools as well, because it gives you a bit more freedom in the kind of hardware you can use, especially since it’s sort of unknown what Apple’s long-term plans are for the Mac Pro.

Each rig usually consists of a sequencer, and then we have about fourteen computers that run our custom sampler. These are all Dell servers with between 24 and 64 gigs of RAM, dual processors, and 8 to 12 cores — fully decked-out systems. Then we have a couple of mixer computers that basically collect all the audio from the samplers, mix it together over the network, and then that goes into a big Pro Tools system via a normal audio card. We always have as many interfaces as it’s possible to have. In fact, we’ve been running 160-input systems for the last few years, and now we’re looking to move to Pro Tools HDX because 160 inputs are just not enough!
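[To give a rough sense of what that collection stage involves, here’s a minimal C++ sketch of a NetMix-style computer summing audio blocks arriving from several sampler machines before handing one mixed block to the audio interface. The block layout, channel counts, and names are our own assumptions for illustration, not Remote Control’s actual code. -Ed.]

```cpp
// Hypothetical sketch: each sampler machine delivers a block of interleaved
// float samples over the network; the mix computer sums them into a single
// output block bound for the MADI card feeding Pro Tools.
#include <array>
#include <cstddef>
#include <vector>

constexpr std::size_t kChannels = 64;   // channels per sampler stream (assumed)
constexpr std::size_t kFrames   = 256;  // sample frames per network block (assumed)

using Block = std::array<float, kChannels * kFrames>;

// Sum the blocks received from every sampler node into one mix block.
Block mixSamplerBlocks(const std::vector<Block>& nodeBlocks)
{
    Block mix{};                         // zero-initialised output block
    for (const Block& node : nodeBlocks)
        for (std::size_t i = 0; i < mix.size(); ++i)
            mix[i] += node[i];
    return mix;
}

int main()
{
    std::vector<Block> incoming(14);     // e.g. one block per sampler computer
    Block toProTools = mixSamplerBlocks(incoming);
    (void)toProTools;                    // hand off to the audio card here
}
```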

That’s quite a system!

Well, almost all of the custom sounds run in quad, which eats up resources very quickly. That suddenly divides your input count by four, so we really do need lots and lots of inputs. There’s a great deal of sub-mixing that goes on before we even get into Pro Tools, which means that printing synth tracks just takes ages now, since we can only record so many tracks at a time and we need the separation.
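[For context: quad sounds occupy four Pro Tools inputs each, so a 160-input rig tops out at 160 ÷ 4 = 40 simultaneous quad stems, which is where the pressure for sub-mixing, and for HDX, comes from. -Ed.]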

How are you moving all that audio around? What type of audio interfacing?

We tend to use the RME stuff as much as possible and have done so for years, mainly because they’ve always had the most reliable drivers. These days we’re mostly using the MADI cards, so it’s MADI from the sequencer into Pro Tools, and MADI out of what we call the NetMix computers that run the sampler audio output.

A view of the original software that keeps the sounds coming.

You also mentioned that you developed some touch screen technology?

That’s another element of the way we work. We’ve been using touch screens since 2004, starting off with a little Windows CE panel that had buttons to do shortcuts for Cubase. We gravitated to an XP-based system in ’06, and then, recently, for The Dark Knight Rises we’ve just put in a really nice 22-inch 3M multi-touch screen that runs with Windows 7. You can create all sorts of faders, shortcut keys, and little sequence oriented things. Originally, some people said, “Why don’t you just use the iPad for this?” And although the iPad’s really nice, it’s quite a small display if you want to have a lot of controls visible at once.

What program is running the touch-screens?

That’s another program I wrote. It’s written completely native for Windows 7, supporting multi-touch and Direct2D for the graphics, so it looks quite pretty. It was written from scratch, and while this new version is a little rough around the edges, one of the advantages of doing this in-house is that it doesn’t have to be as polished as it might be for commercial release. We don’t have to focus on every feature that might be needed by users. We can just focus on the one user — who does tend to be rather demanding anyway, but…

It seems that would really add to productivity.

Oh, yeah, Hans just loves having it. Part of it has shortcut keys for Cubase, and some of the controls are for the samplers. So rather than doing key-switches on the keyboard for changing articulations, like short strings or long strings, it’s all on the touch screen, which makes things a little clearer and easier to see what’s going on.

It’s also used for the different fader controls that we have for the various instruments, because one of the other things about the sample library is that it was recorded as a multi-mic library from the very beginning when we started on the new one in 2004. When I say multi-mic, I mean it was 16 microphones wide. The point being that we could run the sample library exactly as it would be if it were a real recording. Of course, as time has gone on, we’ve added more and more mic positions to the whole thing. I think now we’re recording with something like 33 microphones.

If we had enough computer power, we could actually run the whole library 33 channels wide, though that would be a bit of a nightmare. But what we can do, which is sort of fun, is to take our 33-channel instruments and do bounce-downs within the sampler. We usually bounce to around seven or eight channels, so that each sampler voice is seven or eight channels mixed into quad.

Because of the complexity of the mic positions and the way that the instruments are handled, there are a lot of controls, so it’s nice to have a touch-screen in front of you rather than having to click around with the mouse and try to remember which MIDI controller does what. Sometimes Hans spends a long time moving things around on the screen, trying to come up with the most ergonomic workflow.
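[As a rough illustration of the bounce-down idea Mark describes, here’s a minimal C++ sketch of folding one voice’s source channels, whether all 33 mic positions or the seven or eight bounced-down channels, into quad through a gain matrix. The structure and names are assumptions for the purpose of illustration, not the actual sampler code. -Ed.]

```cpp
// Hypothetical sketch: fold one voice's source channels into a quad output
// using a user-adjustable gain matrix (the kind of control the touch-screen
// faders could drive). Numbers and names are illustrative assumptions.
#include <array>

constexpr int kSourceChannels = 8;  // e.g. the bounced-down channels per voice
constexpr int kQuadChannels   = 4;  // L, R, Ls, Rs

// gains[src][quad]: how much of each source channel feeds each quad speaker.
using GainMatrix = std::array<std::array<float, kQuadChannels>, kSourceChannels>;

std::array<float, kQuadChannels>
mixToQuad(const std::array<float, kSourceChannels>& src, const GainMatrix& gains)
{
    std::array<float, kQuadChannels> quad{};    // zero-initialised output frame
    for (int s = 0; s < kSourceChannels; ++s)
        for (int q = 0; q < kQuadChannels; ++q)
            quad[q] += src[s] * gains[s][q];
    return quad;
}
```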

I know Hans previously used GigaStudio. Is the new library based around that?

No. We used to rely on GigaStudio, but when we got to the end of ‘06 and were just starting on Pirates of the Caribbean 3, Hans wanted to use some of our new sounds. Some of them had been programmed as Giga instruments, but it would take a really expensive computer just to play back the short violins, because, at the time, Giga was 32-bit and didn’t support multiple cores. In fact, at that time, there were no 64-bit, multi-core samplers available.

We tried a whole bunch of things, like using GVI within a multi-core host, but because it couldn’t see the memory of the other instance, there was no way of doing what we needed to do without making the instruments significantly simpler, or just using stereo and not using quad. But we thought, “Well, what’s the point of that, after spending all this time and money to create these incredible-sounding instruments?” So, in one of those moments that you live to regret, I thought, “Well, maybe I can try to cobble something together that just does what we need.” You know, “How hard can it be to write something that’s 64-bit, multi-core to work with strings?”

[Laughs all around] I didn’t know you were a programmer as well.

I wasn’t really a programmer, and I’m still not, but I kind of like fiddling around with this stuff. That was Christmas ‘06, and I played around for a couple of weeks. After Christmas we had something that could, on one computer, play back what we’d previously needed four computers to do. So that was good. Then we did the same for the long strings. At that point, it was just a very specific system to play back certain patches and palettes.

Did you develop that in C++?

I did.

You have your own mixing stage there as well, don’t you?

We have three mix rooms here now, and each one is based around a Euphonix System 5. Usually, the music score is mixed here, and then it goes to the dub stage. We have worked abroad sometimes, and we’ve gone to the production studio on occasion. For Batman Begins, we spent three months at AIR Studios, pretty much taking over the whole building. Then for Pirates [of the Caribbean] 3, Hans moved his writing rig up to Disney, just to be close to the editing room. But on the whole, we mostly stay here. We’re pretty much self-contained, which is really nice. There are many people that work here now — engineers, mixers, composers, technicians — so there are quite a lot of people around if something needs to get done.

Do the other composers have access to the same master system?

Anyone who works here can use the samples if they want. Which means, of course, they have to spend a ton of money on some very powerful computers. Some composers do it, and some don’t. It’s up to them. In a way, I prefer as few people to use it as possible, because it means fewer headaches for me! [Laughs] But it’s quite nice to see the stuff get used, and some composers do use the bouncing features to remix the whole library to their own particular taste.

You must have the most stressful job!

It can be. I remember when we did The Dark Knight in ‘08, it was the first time I had a go at doing this network audio stuff. I remember thinking at the time, “God, I really hope this works!” Because we would have been kind of screwed if it hadn’t.

There were times in the early days, since it was just so unproven, that I was really nervous about things crashing or dying, but it actually has turned out to be okay. I think part of that is, again, there’s a simplicity in having a limited set of users. I know there’s stuff that does go on that I don’t always hear about, but people are quite good at just working around the bumps and getting on with it. Unless it’s something fatal, I tend not to get the midnight phone call.

But, having your own customized system must give you a lot of freedom.

I think it really does give everyone a creative advantage, especially Hans. On Dark Knight Rises, for example, he said one night, “Would it be possible to have a fader that converges all the notes of a held chord into one pitch — kind of like a polyphonic pitch bend?” Within an hour or so, I’d written a little plug-in into the sampler that could basically do that. So I think there’s something to be said for not being completely reliant on other companies, having to call them and say, “Hey, we’d really like this feature!” or “Is it possible to script this?” Because we’re doing our own stuff, it gives us a little more flexibility, and it’s a hell of a luxury. We have six people that just do sample content and instruments for us, three people in Germany and three people here. Claudius Bruese is in charge of recording and developing the main orchestral palette in Germany, and he’s been a great collaborator in getting to where we are now in terms of the quality and playability of the library.
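[For the curious, here’s a tiny C++ sketch of what a “converge a held chord to one pitch” fader could do. Only the behaviour was described, not the plug-in itself, so the choice of target pitch (the chord’s mean) and all the names here are assumptions. -Ed.]

```cpp
// Hypothetical sketch of a polyphonic pitch-convergence fader: given the held
// MIDI notes and a fader value in [0, 1], interpolate each voice from its own
// pitch toward a single target pitch (here, the mean of the chord).
#include <cmath>
#include <vector>

// Returns each voice's playback pitch ratio relative to its original note,
// suitable for driving a sampler's per-voice pitch-shift stage.
std::vector<double> convergenceRatios(const std::vector<int>& heldNotes,
                                      double fader)   // 0 = chord, 1 = unison
{
    std::vector<double> ratios;
    if (heldNotes.empty())
        return ratios;

    double target = 0.0;                               // mean MIDI note of the chord
    for (int n : heldNotes)
        target += n;
    target /= heldNotes.size();

    for (int n : heldNotes)
    {
        double bent = n + fader * (target - n);        // interpolated pitch in semitones
        ratios.push_back(std::pow(2.0, (bent - n) / 12.0));  // semitones -> ratio
    }
    return ratios;
}
```

At fader = 0 every ratio is 1.0 and the chord plays as written; at fader = 1 every voice lands on the same pitch.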

That’s quite a team.

It’s unusual for a film composer to have this level of development in-house. But I think my job basically exists because Hans is really obsessed with how technology can help in what he wants to achieve as a composer.

Unofficial site: Mark Wherry @ hans-zimmer.com