Dan Phillips of Korg Research & Development is one of the designers behind Korg's ultra-premium OASYS keyboard, among many other projects. In addition to designing electronic musical instruments for a living, Dan is an electronic singer-songwriter, writer, composer (Fox TV, As the World Turns), consultant (Santana, Emily Bazar), producer, remixer, and even amateur photographer. Dan shares with CDM how the OASYS was born, some of his favorite (non-Korg) synths and software, and how he got what for many of us would be a dream job.

PK: How did you wind up doing what you do? What’s your musical background?


DP: My parents were both scientists (PhDs in Chemistry and Physics, respectively), so it's probably natural that I would mix my interest in music with music technology. I started writing songs way back in elementary school. I was studying drums (later, I studied piano as well), and also played keyboards a little. None of my friends had bands at that point, so I didn't even think of that as an option; instead, I recorded onto two cassette machines, bouncing tracks between them, one by one, to build up a layered “multitrack” recording. That led naturally to a four-track (luxury!), and some early synthesizers, which (by the time I was in college) led to MIDI sequencing. So, I had a home-grown, organic introduction to synths, sequencing, and the interaction of music and gear.


I studied music at the University of California at Berkeley, and towards the very end of that started writing freelance articles for Electronic Musician. After graduation, I started cold-calling all of the music technology companies in the area; Korg R&D just happened to need someone with a combination of writing skills and MIDI/synth knowledge, and I was lucky enough to show up at exactly the right moment. Since then, I’ve been fortunate to be able to learn and grow here, in the company of an unusual concentration of highly intelligent folks.


Can you give me a brief sense of the timeline of OASYS, as it relates to important moments in Korg R&D’s history? What events led toward this design?


When I joined Korg in 1990, Steve O’Connell was already with the company, working on software-based physical modeling. His work turned into the algorithm design tool SynthKit, which we still use today. Because of the variable nature of physical models, it was already clear that the best solution for a commercial synthesizer would be based on software (running on DSPs) rather than fixed-purpose hardware (such as ASICs). At this point in time, all commercial synths used proprietary, fixed-purpose hardware, either analog or digital; this was before there were “virtual analog” DSP synths, [Digidesign] Pro Tools TDM, VST, etc. So, this was a fairly radical idea, and it turned into the OASYS project.


The first OASYS used an array of custom DSPs (since the commercial DSPs which were available at the time didn’t have all of the features we needed). We worked on it for about four years. It made some really impressive sounds, and our algorithm designers developed some really interesting techniques for physical modeling, DSP-based analog synthesis, Hammond modeling, etc. The OASYS got great responses in some preview showings, but it was turning out to be too expensive to be practical, and the project was canceled before it was completed. Later, parts of the research and technology showed up in a number of Korg products, including the Prophecy, Z1 (and Trinity/Triton MOSS), Wavedrum, ElecTribes, and the MS series.


Our group had also, fortuitously, started work on a small PCI audio card, the 1212 I/O. It was a simple thing, although it was also the first affordable multi-channel I/O. With the original OASYS keyboard canceled, we started to think about other ways of bringing the concept to reality, and using all that we had learned about DSP synthesis. So, we combined the 1212 I/O features (albeit using different hardware) with a small array of commercial DSPs, continued our work on DSP algorithm design – and that became the OASYS PCI.

As we started thinking about a follow-on product for OASYS PCI, we worked through various possible scenarios. The one that seemed like the best idea was a marriage of our software synthesis ideas and Korg’s traditional workstation concept, and that, over the next four or five years, became the new OASYS keyboard. In the course of that development, we continued to refine and expand our core algorithms, went through several year-long proof-of-concept cycles, completely rewrote the algorithms from the ground up to take advantage of the Pentium architecture, added dynamic voice allocation between synthesis algorithms . . . many, many steps, some of them very large in and of themselves. And, that brings us to today!


I was struck by how expressive the OASYS could be as an instrument, but what do you see as making OASYS musical, or expressive? How does this figure into the design process?


One way of breaking this down is to think of four basic components:

1. The fundamental algorithm designs – oscillators, filters, envelopes, etc. Put simply, the basic sounds need to be strong, fulfilling, and pleasing to the ear.


2. The flexibility provided to the sound designers. They need to be able to make their concepts into reality, to have tools which both solve problems and open up new possibilities. This covers most of what you’ll see in the user interface: the specific programming parameters, modulation sources (LFOs, envelopes, etc.) and destinations, exotic features such as Vector and Wave Sequencing, complex control structures such as KARMA, etc.


3. The real-time control provided for the player. The architecture (both software and hardware) needs to allow sounds to respond well to real-time controllers, such as velocity, aftertouch, joysticks, ribbons, knobs, sliders, switches, etc. Moreover, the sounds need to use these features consistently. This step is critical, since it forms the bridge between the player and the instrument.


4. The sounds themselves. This is the culmination of 1-3 above; at this point, it’s a combination of careful management, ears, and talent.


With OASYS, we really sweated over all four of these basic issues. Almost all of the people who work at Korg are musicians, so we want this to be good for us to play.

For instance, the PCM and VA [Virtual Analog] oscillators, each using completely different technology, deliver very low aliasing and clear high-frequency response. Similarly, the very high update rates for the envelopes, LFOs, step sequencers, Wave Sequences, etc. mean smooth, punchy modulation, without “steppiness.” I wouldn’t underestimate the importance of this simple but elusive sonic purity; just as with other areas of audio technology (mixers, mic pres, speakers, amps, etc.), audio artifacts in synthesizers can be distracting and disturbing on multiple levels, both conscious and unconscious.
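[Ed.: The "steppiness" Dan describes can be illustrated with a small NumPy sketch. This is purely a hypothetical illustration, not Korg code: it applies the same gain ramp to a sine tone two ways, updated once per 64-sample block (control-rate, producing staircase "zipper" steps) versus once per sample, and compares the size of the resulting envelope discontinuities. -PK]

```python
import numpy as np

SR = 48_000          # sample rate, Hz
dur = 0.05           # 50 ms test signal
n = int(SR * dur)
t = np.arange(n) / SR

# A fast linear gain ramp (0 -> 1 over 50 ms), applied two ways.
target = np.linspace(0.0, 1.0, n)

# 1) Control-rate modulation: the gain is updated once per 64-sample
#    block, producing a staircase envelope with audible "zipper" steps.
block = 64
stepped = np.repeat(target[::block], block)[:n]

# 2) Audio-rate modulation: the gain is updated every sample.
smooth = target

tone = np.sin(2 * np.pi * 440 * t)
out_stepped = tone * stepped
out_smooth = tone * smooth

# The staircase jumps abruptly at every block boundary; measure the
# largest sample-to-sample envelope discontinuity in each version.
jump_stepped = np.max(np.abs(np.diff(stepped)))
jump_smooth = np.max(np.abs(np.diff(smooth)))
print(jump_stepped / jump_smooth)  # roughly `block` times larger
```

Each of those abrupt gain jumps injects broadband energy into the output, which is why higher modulation update rates translate directly into cleaner-sounding envelopes and LFOs.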


The flexibility of the system – well, let’s just say that Programs have well over a thousand parameters, and that the manual’s the size of a phone book. 🙂 It would probably surprise users to find out just how much discussion we’ve had over individual parameters and parameter ranges, and how much back-and-forth there is between the engineers and the sound designers.


In terms of real-time control: OASYS uses our top-of-the-line keyboard actions, which is a good starting point. We also made sure to provide more physical controllers than Korg has ever done before, so that the player has lots of things to grab, turn, push, and tweak. The sound designers then spend a great deal of time just on this aspect of the sounds, making them respond well to the controllers, and often using the controllers to change the sound dramatically. If you just walk up to the keyboard and play some notes without trying out all of the controllers, you’ll only experience a very small part of what the sound can do.


KARMA also adds another layer of real-time control. Many of the improvements in the second-generation KARMA are aimed directly at this area, with both more control of individual parameters and improved ease-of-use via control standardization.


The sounds are the real key. At this point, we’re probably on tens of man-years for this aspect of the project alone. An enormous amount of talent and sheer nose-to-the-grindstone effort went into the OASYS Programs and Combis. Programmers come up with an initial sound, bounce it off of other programmers who comment and make edits, which are then shared and edited some more . . . it’s kind of like a big, collaborative band, all working together on the songwriting, arrangements, and mixes. It’s only at this point that we start to hear the results of everything that came before, that we really hear the instrument sing.


What (non-Korg) instruments have you found to be most compelling expressively/musically? Did any inspire the OASYS?


I really enjoy effects processors, and have a few in my home studio that certainly serve as inspiration. I love the Eventide DSP7000’s flexible algorithms and general concentration on quality over quantity, and I love the modulation routings of the tc Fireworx, along with the general wackiness of its sound designers. I love the sound of Lexicon reverbs, which certainly inspired aspects of the OASYS O-Verb. I like Audio Damage’s approach to plug-ins as easily accessible, tweakable, funky stomp-boxes. I appreciate the cleanness of E-mu samplers. I also really like [Spectrasonics founder] Eric Persing’s work, in general; especially with his sampler libraries, his passion and attention to detail really shine.


Thanks, Dan! It’s a pleasure to gain insight into some of these things. Readers, don’t miss Dan’s personal website, which features both stunning photography and deep resources for electronic music gear from Korg’s back catalog and beyond. -PK


Related:
Korg’s OASYS Synth: How it Was Built, Why it Runs Linux, Why It’s $8,000
Korg Adds Physical Modeling, Software Upgrade to OASYS Synth