Unreal Engine may be built for games, but under the hood it has a powerful audio, music, and modular synthesis engine, as its lead audio programmer explained this afternoon in a livestream from Epic HQ.
Now a little history: back when I first met Aaron McLeran, he was at EA and working with Brian Eno and company on Spore. Generative music in games and dreams of real interactive audio engines to drive it have some history. As it happens, those conversations indirectly led us to create libpd. But that’s another story.
Aaron has led an effort to build real synthesis capabilities into Unreal. That could open a new generation of music and sound for games, enabling scores that respond more closely to action and scale better to immersive environments (including VR and AR). And it could mean that Unreal itself becomes a tool for art, even without a game per se, by giving creators affordable access to a range of 3D visual capabilities plus live, responsive sound and music structures. (Getting started with Unreal is free.)
I’ll write about this more soon, but here’s what they cover in the video:
- Submix graph and source rendering (that’s how your audio bits get mixed together)
- Effects processing
- Realtime synthesis (which is itself a modular environment)
- Plugin extensions
Aaron is joined by Community Managers Tim Slager and Amanda Bott.
I’m just going to put this out there and let you ask CDM some questions. (Or let us know if you’re using Unreal in your own work, as an artist, or as a sound designer or composer for games!)
Forum topic with the stream:
Unreal Engine Livestream – Unreal Audio: Features and Architecture – May 24 – Live from Epic HQ