Ben Minto: Audio Director - Battlefield, Medal of Honor, Mirror's Edge
About Ben Minto:
Ben Minto is an Audio Director/Sound Designer working out of EA DICE’s Stockholm studio. Over his 12 years working in the games audio industry, he has accumulated vast insight and knowledge into all aspects of field recording, sound design, and production. Ben’s credits include audio direction and sound design for video games such as BLACK, Burnout (1,2,3 and Revenge), Battlefield (1943, Heroes, Bad Company), Mirror’s Edge and Medal of Honor. With his commitment to quality audio design which pushes the boundaries of sound, it is no wonder that Waves processors are the “backbone of his rig.” In this interview, Ben gives us an in-depth look at the world of sound for games.
How did you get into sound design for games?
Unlike a lot of my peers, I have no background in music. The fact that I have never been in a band of any sort and also my inability to play any instruments comes as a shock to some people: "So how did you end up where you are then?"
My university education finished at the Masters level, with a final year course in Computational Fluid Dynamics, which is a combination of mathematics (my first degree), engineering, and computing. During my time at university, I developed a sideline hobby, which was also a source of additional funds, namely buying, sometimes fixing and, if I could bear to part with them, selling old analogue synthesizers.
At the end of my education, I had a few options open to me. Staying in education wasn't too appealing, so I reviewed the career opportunities out there: Engine and wing design for the aerospace industry, CFD modeling for F1 Teams, fluid flow analysis for a hydroelectric company, or a low paid job in London repairing and selling synthesizers full-time. I chose the latter.
The synthesizer job lasted for less than a year, but during that time I relocated to London, learned how to operate and install Pro Tools systems, and got to know my way around most of the large London studios. I subsequently took a position configuring and selling Pro Tools rigs, which quickly moved on to an installation and support role. This role introduced me to a lot of people and also the workings of the many different media industries: music, film, TV and games.
What was your big break?
While working as a Pro Tools tech, I had many opportunities to try out small freelance roles in all the different industries, and while each had its own merits, I was drawn to the games industry. I think it was the fact that it was still a fairly immature industry, with a lot of rapid change and progress being made, combined with the way it wove together both my audio passion and my university education, that initially piqued my interest. The themes for a lot of the titles were 'fantastic', for example, large guns, monsters, and fast cars instead of the more mundane themes found in the more traditional sound-to-picture roles. All these exciting and challenging possibilities drew me to it.
I almost missed my big break, as at first I thought the offer was made in jest. One of my clients was head of audio at Acclaim Studios in London, and while fixing one of his machines, he inquired about my background. I reeled off stories of fixing old equipment, recording sounds to DAT, and then manipulating them. He pointed out I was a sound designer, a term I wasn't so familiar with, and suggested that I should go work with him. At the time, it seemed just like a friendly passing comment, but thankfully, 3 months later, he returned for a system upgrade, and also with a formal job offer. And that was my first job as a sound designer in the games industry.
What projects have you been working on recently?
Battlefield 1943, Medal of Honor, and Battlefield Bad Company 2. Currently I'm fully involved directing one new project and additionally working on two others.
Tell us about the role of a sound designer on a game project.
Part of the appeal of working in this industry is the very loose definition of which areas each role can encompass. A games sound designer can handle sound recording, Foley recording, sound editing, sound design, implementation, linear track laying for cinematics, mixing, VO (voiceover) recording, dialogue editing, music editing, mastering, debugging, and so on. We can also step outside of the purely audio role and get involved with design and implementation of animation or particle effects, or even camera shake and pad vibration, for example, thereby delivering a tighter synergy between the audio and visual components.
An area or feature is given over to the sound designer by an audio director, for example the ambiences for a given game level. The sound designer then starts to make the content for this, using a combination of commercial library material, previously recorded material, as well as material recorded specifically for that project. The process for making the content is generally completed inside a multi-tracking DAW, with lots of editing and plugin usage.
These assets are then prepared for the game: naming convention, looping regions if needed, mastering etc. Then comes the part of the role that differs from most other industries—implementing it all into the game.
All games are built using an engine, and typically this engine has a dedicated audio component. These audio components come in many different flavors, just as there are many different DAWs available. In simplified terms, the sound designer creates the equivalent of an interactive synthesizer in software, along the lines of a Max/MSP or a Reaktor patch. These patches take the relevant parameters from the game, e.g. user input, the environment and positional information, parameters for variables like speed, distance, and angle, then use these to modify the sounds playing in that patch. The patch can detail, for example, when to start, when to stop, when it should loop, how loud it should play, its position, how it should change relative to the listener, runtime DSP or obstruction/occlusion effects, etc.
For a single title, many of these patches have to be written, with each one modeling the sound for different sections of the game: each weapon type, each environment, each type of footstep on each type of surface, etc. Once we reach a certain mass of these patches, we can then start to mix the game, by creating additional logic over the top of these patches which then controls the relationship between patches. Simple rules can include "If X is playing, don't play Y"; "If we already have 5 instances of X, do not start any more instances of X"; "When X happens, modify all these Y patches, so that X takes center stage" and so on.
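To make the kind of mix rules described above concrete, here is a minimal, purely illustrative sketch (not DICE's actual engine code; all names are hypothetical) of a runtime mixer that enforces a per-patch instance cap and simple "X ducks Y" relationships:

```python
# Illustrative sketch of runtime mix rules: per-patch instance limits
# and ducking relationships between sound patches. Hypothetical names,
# not an actual game-engine API.

class Mixer:
    def __init__(self, max_instances=5):
        self.max_instances = max_instances   # per-patch voice cap
        self.playing = {}                    # patch name -> active instance count
        self.ducks = {}                      # loud patch -> set of patches it ducks

    def add_duck_rule(self, loud, quiet):
        """When `loud` is playing, push `quiet` down in the mix."""
        self.ducks.setdefault(loud, set()).add(quiet)

    def request_play(self, patch):
        """Start a new instance unless the per-patch cap is reached."""
        count = self.playing.get(patch, 0)
        if count >= self.max_instances:
            return False                     # "do not start any more instances of X"
        self.playing[patch] = count + 1
        return True

    def gain_for(self, patch):
        """Return an attenuated gain if any active patch ducks this one."""
        for loud, quiets in self.ducks.items():
            if self.playing.get(loud, 0) > 0 and patch in quiets:
                return 0.3                   # duck so the loud patch takes center stage
        return 1.0
```

For instance, after `add_duck_rule("explosion", "ambience")` and a successful `request_play("explosion")`, `gain_for("ambience")` returns the ducked gain while unrelated patches stay at full level.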
How long does it take to sound design a game project?
This depends on a lot of factors. A “triple A” title usually has between 12 and 18 months of development time. Smaller titles can be squeezed into 3 months; sequels are generally released on a yearly cycle, so they can have around a 10-month development cycle. Another variable is the size of the audio teams working on the project, and these can be formed by a combination of in-house staff and external contractors.
A team can consist of a single individual, whereas a typical “triple A” team will consist of one audio director, two additional sound designers, one VO producer/dialogue editor, and one dedicated audio programmer. Additional sound design, music composition and recording, and dialogue recording can be handled by external talent.
How do you go about collecting the source material for your sound design work?
Right at the start of a project I collect a wide range of reference material that initially feels like it may fit the project. Then by working through this material and trying different combinations, I filter through it all, usually disregarding quite a lot in the process, and eventually arrive at a collection of materials that sets the audio direction for the title.
Where you gather reference from is down to the individual. It's very obvious to cite examples from other film, television or game productions, but try going beyond this; read about the topic or talk to individuals who have experienced what it is that you are attempting to portray. Or, if you are lucky enough, go out and experience for yourself the event that you are trying to recreate the audio for, making sure you also have a recorder with you.
Once you have a solid vision, then you build up your source library from which you will complete your sound design. The first step is usually to prioritize which sound features are the most important/impactful to your product. As a general rule, I try to keep this to three key areas, to really keep the focus tight.
Then you build a plan to acquire the source material you need, first by reviewing your existing material and then by obtaining new, relevant source material. The new material can be obtained by buying additional library material, employing others to go out and record it specifically for you, or the most satisfying approach—going out and recording the material yourself. As well as being the most fun way of obtaining new material, this gives you the experience of hearing the sounds for real: seeing how they are created, how people react to them, how they develop in the environment, and very often how loud they really are. That is priceless knowledge once you are back in the studio, trying not just to replicate the sound through a pair of monitors, but also to encapsulate the emotions and experience associated with that sound.
How do you know which processor to pick for which sound?
You start by learning your tools, whether this is by being taught, watching and listening to others, by reading the manual, or by teaching yourself through trial and error. From this you build up your own mental guidelines for which processors work in a given situation. This forms the basis of your core knowledge, which you can always refer back to, but which should always be extended, improved, and built upon.
When time is short, knowing how to achieve a given result is priceless; however, when deadlines aren't so tight, try something else. Don't use your go-to plugin, try something new. Sometimes it works and you find a new angle, many times it doesn't and you stick with what you know, but by doing so you learn something new that may be applicable in the future.
What was the most inspiring moment in your career?
I can't say there was one particular moment that sticks out; there have been many. Most of the highly inspirational moments occur when working with other talented individuals; even just discussing an area with another person is such a fertile environment for great ideas to form and amazing things to happen.
Do you think digital tools help make the process faster?
If you are comparing the same task from say 10 years ago to today—definitely. The thing is, the process rarely stays the same, and what you end up doing is utilizing this extra power and efficiency to do even more.
When did you first discover Waves?
When I first started to learn Pro Tools systems. At the time it almost felt like cheating; even then you could tell that this would become a serious contender to all the racks of hardware found in every studio.
How does Waves fit into your workflow?
It's just there. I don't really see it as separate from whichever program I'm working in. Once it’s installed, it just becomes part of the workflow. If I ever have to work on a system and it's not there—that's when I notice.
What are your favorite Waves tools?
Waves Mercury is a key part of my arsenal, and for me, being able to deliver quality audio time and time again makes it as essential as having a quality monitoring system. Whether it’s repairing audio, adding effects, using plugins in a creative manner or mixing, Waves has always been there for me.
The Restoration plugins have been a lifesaver on numerous occasions, and not only in their traditional roles, but also for isolating additional artefacts and harmonics as another source element.
The WNS Noise Suppressor has changed the way I view fixing audio. Previously I used to have to stop mid-flow, fix the audio, and then pick up where I had left off. Adding an instance of WNS to a channel is now just another part of my workflow, just like adding an EQ.
The Waves processors’ ease of use is the key to their power for me. The ability to use them in a very modular way and quickly chain 3 or 4 of them together to carve and weave a sound as needed is a very instinctive part of how I work.
Having TransX run into Doubler run into MondoMod into an instance of S1 is a go-to chain for me when working with a track of very defined gun Foley during a burst-fire firefight. Here there can be a lot of sound going on, but we must preserve the closeness of our own weapon firing during this. Having delicate and distinctive Foley for the gun mechanism is one way of ensuring that your gun sound is kept close to you.
Beyond traditional effects, I’m a huge fan of using non-literal sounds as impulses inside IR1, be they impulses recorded in non-typical environments or any random sound that might just be interesting and has the right feel. IR1 is a great impulse response reverb, and equally as powerful as an experimental effects unit. Firework and gun recordings can, with a bit of editing and manipulation, be the basis for building your own great outdoor impulses.
LoAir has also become a new favorite. As well as employing it for its legendary subharmonic processing, I’ve taken to also using LoAir as a distortion unit, usually followed by Linear Phase EQ to re-tame the bass, whereby I can achieve a controllable bubbly organic layer within a session without over-saturating the low end.
Finally, L3-16 has to be my most widely used and abused plugin, and is as commonly found on my inserts acting as an effect when I need to get that extra punch out of a single element, as it is found on my master.
Do you tend to mix hotter these days so the result is competitive in terms of loudness, or do you try to leave dynamic range?
Final mixing of interactive elements, which is the majority of a game, is completed at runtime as the game is played. We are fortunate here at DICE to have created a system which employs an end-stage mixing solution called HDR (High Dynamic Range) mixing. This allows us to define a series of dynamic ranges that match up to a series of target playback systems. The target playback system is defined by the consumer, be it headphones, TV speakers, hi-fi system, or home cinema. We also have a special effect state called "War Tapes" which mimics the heavily compressed and distorted feel of video footage recorded out in the field. For each of these settings, we define a window of dynamic range in dB (which is smaller for TV speakers than home cinema), a compression setting (TV is mildly compressed, home cinema is not), and a master EQ setting (TV has a bass cut and a high boost, home cinema has no master EQ applied).
When we deliver a title, we can in effect deliver a whole range of mixes from which the consumer can then choose which one best suits their system and their own preference. We ensure that at the two ends of the scale we offer a very compressed, extremely loud, YouTube-esque experience (“War Tapes”) and then a mix with filmic amounts of dynamic range to be played back over a home cinema setup. As with most games, the user can also determine the mix between music, sound effects and dialogue, which for our titles is then fed into the HDR system to deliver a mix that satisfies the consumer's expectations.
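The per-target profiles described above can be pictured as a small configuration table plus an audibility test. This is a hypothetical sketch only; the field names and dB values are illustrative and not taken from DICE's actual HDR implementation:

```python
# Hypothetical per-target mix profiles, in the spirit of the HDR
# system described: each playback system gets its own dynamic-range
# window, compression amount, and master EQ. Values are illustrative.

MIX_PROFILES = {
    "tv_speakers": {
        "dynamic_range_db": 18,          # narrow window for small speakers
        "compression": "mild",
        "master_eq": {"bass_cut": True, "high_boost": True},
    },
    "home_cinema": {
        "dynamic_range_db": 40,          # filmic dynamic range
        "compression": None,
        "master_eq": None,               # no master EQ applied
    },
    "war_tapes": {
        "dynamic_range_db": 8,           # heavily compressed, distorted feel
        "compression": "heavy",
        "master_eq": {"distortion": True},
    },
}

def audible(level_db, loudest_db, profile):
    """A sound stays in the mix only if it falls inside the profile's
    dynamic-range window below the loudest currently playing event."""
    return level_db >= loudest_db - profile["dynamic_range_db"]
```

With a 110 dB explosion as the loudest event, a 100 dB sound survives the wide home-cinema window but a 90 dB sound is culled on the narrow TV-speaker setting.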
Do you use delay or reverb on voice?
Both. On the original assets, I generally burn in a very light reverb just to smooth the recording out a bit, especially if the dialogue was recorded in a room and needs to be outdoors, for example. Generally, most of our dialogue takes place outdoors, so instead of recording in the studio, we record out in the field to get an authentic sound, along with all the 'dirt' and 'life' associated with recording outdoors. To this we add a very, very small amount of IR1 reverb, usually from an impulse recorded at the session, just to add a level of consistency to all the recorded files.
At runtime, we employ DSP to add delay based on the dimensions of the environment and the distance between the listener and the source, and then also apply, on a separate aux, a convolution reverb chain depending on the location of the listener and the source.
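The distance-based part of that runtime delay follows directly from the speed of sound. A minimal sketch, with a hypothetical function name (the source does not describe DICE's actual formula, which may also account for room dimensions):

```python
# Sound travels at roughly 343 m/s in air at ~20 degrees C, so a
# distant source is heard with a proportional propagation delay.

SPEED_OF_SOUND = 343.0  # metres per second

def propagation_delay(distance_m):
    """Seconds of delay before a sound at `distance_m` reaches the listener."""
    return distance_m / SPEED_OF_SOUND
```

So a source 343 m away arrives one second after it is triggered, which is why distant gunfire in a game reads as a visible muzzle flash followed by a late report.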
What makes a great sound designer?
A person who, as an individual or as part of a team, can consistently deliver a compelling and engaging experience, which supports and enhances the title.
Do you have any advice to offer an aspiring sound designer?
It's daunting, the number of candidates that apply for new sound designer roles these days. For a single position, we recently had almost 200 applicants. However, it was easy to narrow that down to around 20 individuals by reviewing their show-reels, education, and past experience.
To get that lucky break these days, you have to be good; in fact, you have to be very good. You must know all the basics inside out and be extremely proficient in using your tools. There are a few core programs to learn your way around, and everyone will expect you to know your way around the Waves plugins—everybody uses them daily—so make sure you use them regularly and feel confident in using them. You will be expected to hit the ground running, and your core learning and abilities should support this.
And that is just the expected level of competence; to get your foot in the door, next you have to impress. The best examples I have seen of this are individuals who go beyond the basics, developing their own unique methods and style, and those that have in some way developed specialties, be this in their use of their tools, recording their own source material, or by being breathtakingly good at working on a specific genre (for games, this could be a horror game, a shooter, a racing game etc).
The best advice I can give is probably to keep learning, keep pushing yourself and developing your skills and your ears. The best way of learning is by doing, and if you find the opportunity to work for or alongside someone in the industry—grab it, and then listen and learn.