Sound Design: Haley Shaw


Intro from Jay Allison: Fans of intricate sound design are living at the right moment. More and more podcasts list "sound designer" in their credits: audio wizards who are part composer, part Foley artist, part master mixer, part mood maker.

We started a series of features on sound design to look at what exactly it is and how it's done. This one is by Haley Shaw, who works on Gimlet podcasts including The Habitat. Haley says, "At its best, sound design bridges understanding, enhances the impact of the story, immerses the audience, and explores new ways of communicating. But even when going for goosebumps, the intended feeling should emerge from the story, not the design." Haley offers everything from questions to contemplate at the beginning of a project to practical tips for sound designing specific moments. She covers world building, stacking sounds, effects tricks, and the software she uses. All very useful.

After you read Haley's feature, check out the others in the series. You'll find them here: https://transom.org/tag/sound-design/

An Intended Feeling

There are many processes for manipulating sound, but they all converge on a single effect: the way they make a listener feel. So I like to define “sound design” as the shaping of an intended feeling through sound and music.

At its best, sound design bridges understanding, enhances the impact of the story, immerses the audience, and explores new ways of communicating. But even when going for goosebumps, the intended feeling should emerge from the story, not the design — a lot of the work lies in setting aside ego and muting that super gnarly thunder sound effect or incredible (read: incredibly distracting) drum pattern I love so much that just doesn’t belong.

Most of my work consists of post-production sound design and creating music, so I’ll be addressing those skills here. However, my process for any project starts long before getting my hands on the audio or automating some parameter just right. Here I’ll explore how I get from vague conceptual notions to concrete decisions about what gets included in the final product — all in service of an intended feeling.

Macro To Micro

For each new project, I start with the big picture before targeting smaller details. I create a macro concept based on the overall tone of the project and then use it to inform my decision-making process right down to the smallest choices. Along the way, I inevitably deviate from initial ideas that aren’t working, but starting with a plan focuses me and leads to a more cohesive sound. Plus, who doesn’t love to answer a bunch of hard questions before they even start?

What is the sonic identity of the project? Who is it? What is its overall character? Is it earnest? Adorable? Nightmarish? Contemporary? Heavy? Is it listener-friendly, or is it trying to catch the audience off guard? If it is branded content, what is the brand?

How should the intended audience feel about the project as a whole (not from moment to moment)? Who is the intended audience? How do they feel coming away from the project? How would they describe the tone of the project to someone else? How does the project sit with them?

What is the intended effect of my music and sound design within the project? Subtle and nuanced addition to the narrative? Splashy showstopper? Billboarding or propping up the form of the piece? Invisible smoothing and cleanup work? What is the point of my work?

These questions are best answered in close communication with whoever holds the artistic vision for the project (showrunner, producers, director, host, writers, etc.). Whether I pitch my concept to this person or we create one together, getting their input up front keeps me from having to scrap work later.

Specify the Macro Concept

Once I jot down some answers to those questions, I brainstorm with more focus and apply possible action items to the concept. I also start asking relevant logistical questions: What will I need studio time for? Will I need to hire instrumentalists or contractors? How much Foley or location sound is necessary? How much time will each stage of this project take me?

Example: Say I’m working on a project and the sonic identity is heavy in tone, but we want the audience to feel excited after listening. So, I see an opportunity for sonics to re-contextualize heavy feelings into energy, like action music does. I’ll do a spin on action music, but the project is set underwater, so I’ll work in a choppy submarine engine or bubbling, which could pair well with deep drones, which could come in whenever the main character is feeling conflicted, which could increase more often throughout the series . . . so . . . we need to get some bubbles.

Create Rules

At this point, I like to devise loose rules by asking questions about which elements define the world of the project.

  • What world is the narrative taking place in, and what are the rules of that world?
  • What music or sound design is used where? Where is scoring? How is it used?
  • How do we transition from one scene or section to another?
  • Are the music and sound design dense or sparse throughout? When and why?
  • What is the general pace of the sonic content?
  • Which sounds are diegetic (meant to be heard as within the story), and which are non-diegetic? This is especially useful in pieces that have a host or narration.
  • Are there any large set pieces or musical moments to plan for?

Music: Left of Center

Before composing any music, I’ll acknowledge the most obvious idea that arises. Usually the most straightforward direction to take is already a trope, and tropes are easy for audiences to sniff out as tools of emotional manipulation, rendering them less effective. I like to acknowledge the trope so that I don’t actually do it; rather, I prefer to make decisions that orbit around it. To take advantage of a trope’s connotations without losing a more discerning listener, I’ve found success in two strategies: (1) go left of center (e.g., use the fringe ideas surrounding the trope, not the trope itself); and (2) use the trope but make it weird (change its context, use unconventional effects on it, shorten or lengthen it).

Take, for example, The Habitat, a series chronicling a science experiment in service of traveling to Mars: six humans live together in a secluded dome for a year.

Rather than using the most obvious idea, space — the sonic trope being beep boops, digital music, and synthesizers — the score leans harder on the other aspect of the narrative: humans.

Here is a screenshot of some very early notes I took in a meeting with Lynn Levy, host/producer of The Habitat, who was a huge part of creating the concept for the score.

Early score notes for The Habitat.

I set out to make the score feel warm, human, close, and real. However, I tried to include some ideas inherent in space travel that weren’t so readily cosmic (unreal, strange, otherworldly) in hopes that the end result would be a balance of both, matching the narrative. Here is an example from the soundtrack that represents these intentions in practice:

From those early notes through to the soundtrack, a lot stayed the same on the human side of the sound world; I used close mic’d strings through effects that rendered them unusual, included human body sounds (e.g. hand movements as drum samples, whistling), and tried to keep the tone warm, even through heavy digital manipulation. “Music moves in unexpected ways” was achieved by using odd time signatures and mixed meter. However, you’ll notice I did not use “arpeggiators” at all in the final product; I’d introduced them in early drafts, and they ended up sounding far too sci-fi, too close to the trope.

World Building

When choosing specific sounds, techniques, instrumentation, or music cues, my approach depends on the narrative intent of the writers, editors, and producers. Sound design should guide listeners through the narrative: highlighting the important things in the right ways, conveying the point of view each moment is experienced through, and contextualizing that point of view. Here are some of my go-to techniques:

Stacking Many Sounds

I like to stack sounds even if I’m trying to build the sound of one object (like one bottle breaking). Sounds in real life are complicated and are never exclusively made up of the words we use to describe them. So, a bottle breaking could also include shards hitting surfaces, possibly liquid spilling, the sound of what it feels like to be surprised and ashamed that you broke a bottle, or a million other details that would depend on the context.

If I’m gathering standard sounds from sound effects libraries, I start with Soundsnap. It has an easy search function, and if I need a ton of something more specific (birds in the jungle! or whatever) I’ll want to use a more focused library anyway. I almost never use a sound effect alone, as downloaded, so the sound effect itself doesn’t have to be perfect. If it isn’t horrible, I download and try it. Most of the time, I’ll stack 2-15 of the same sound over each other, depending on the desired outcome. I like to create Foley for anything I want more control over, especially if I want something super close mic’d. If I’m doing naturalistic sound design in a reported piece, I want the raw tape / long handles, room tone, and ambient sound from that time and place, and lots of it, so that I can sculpt smoothly around the interview tape used in the piece.
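
To make the stacking idea concrete, here is a minimal sketch in Python of what summing staggered, gain-varied takes amounts to (assuming numpy and soundfile are installed; the file names are hypothetical stand-ins for a handful of mono recordings of the same event):

```python
import numpy as np
import soundfile as sf

rng = np.random.default_rng(0)
takes = [f"bottle_break_{i}.wav" for i in range(1, 6)]  # hypothetical mono WAVs

layers = []
for path in takes:
    audio, sr = sf.read(path)           # assumes mono files at a shared sample rate
    offset = rng.integers(0, sr // 20)  # stagger each take by up to ~50 ms
    gain = rng.uniform(0.5, 1.0)        # vary levels so no single take dominates
    layers.append((offset, gain * audio))

length = max(off + len(a) for off, a in layers)
mix = np.zeros(length)
for off, audio in layers:
    mix[off:off + len(audio)] += audio  # sum the staggered, gain-varied takes

mix /= np.max(np.abs(mix))              # normalize the stack to avoid clipping
sf.write("bottle_break_stacked.wav", mix, sr)
```

The staggered offsets and varied gains are what keep the stack from sounding like one sample simply played louder.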

Putting a Bunch of Effects on Them

I’ll almost never simply put a sound effect into a session and move on. After stacking sounds, I’ll mess with them to create my desired impact, or as much as my macro concept calls for. I’ll add delays, reverbs, panning, dynamics, and additive effects (individually, and/or bussed together so that effects apply across all of them at once). Automating parameters on plugins keeps the sounds feeling unique, pushing energy and tension in compelling ways. My favorite plugins for additive and creative effects are the entire Soundtoys suite, Goodhertz (Lossy, Trem Control, and VulfComp), Valhalla (Shimmer and Room), and FabFilter everything on everything. For cleanup, I use iZotope RX 6. And, frankly, the Avid AudioSuite Reverse gets a ton of use in both my music and sound design.
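
None of those commercial plugins can be reproduced here, but as a rough sketch of the kind of processing being layered on, here is a bare-bones feedback delay in Python (a generic illustration using numpy, not any particular plugin; every parameter value is arbitrary):

```python
import numpy as np

def feedback_delay(x, sr, delay_ms=250, feedback=0.4, mix=0.3, repeats=5):
    """Sum progressively quieter, progressively later copies of a mono signal x."""
    d = int(sr * delay_ms / 1000)                 # delay time in samples
    out = np.zeros(len(x) + d * repeats)
    out[:len(x)] += x * (1 - mix)                 # dry signal
    for n in range(1, repeats + 1):
        start = d * n                             # each echo lands one delay later
        out[start:start + len(x)] += x * mix * feedback ** (n - 1)
    return out / max(1.0, np.max(np.abs(out)))    # guard against clipping
```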

Moving Them Around the Stereo Field

I like to pan things everywhere. To create ambience, I’ll often layer 4 to 6 constant sounds over each other and autopan 2 to 4 of them very slowly on either side of the stereo field so that they subtly move within a small range; the gentle motion subconsciously gives the listener more sonic information with which to place the space. In the foreground, I like to use the stereo field as a way to grab attention. In music, moving instruments around in relation to where the dialogue sits (frequently center) can help keep music and dialogue from competing.
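
Here is a hedged sketch of that slow ambient autopan, assuming a mono numpy array and an equal-power pan law (the center, width, and period values are illustrative, not a recipe):

```python
import numpy as np

def slow_autopan(mono, sr, center=-0.5, width=0.2, period_s=12.0):
    """Return stereo audio whose pan position drifts slowly within a narrow band.

    center runs from -1 (hard left) to +1 (hard right); a small width keeps
    the motion subtle, as described above.
    """
    t = np.arange(len(mono)) / sr
    pan = center + width * np.sin(2 * np.pi * t / period_s)  # slow drift in [-1, 1]
    theta = (pan + 1) * np.pi / 4        # map pan position onto a quarter circle
    left = mono * np.cos(theta)          # equal-power gains: cos^2 + sin^2 = 1
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)
```

Automating `center` or `period_s` over time is the same kind of parameter movement that keeps a bed from feeling static.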

An example of these three techniques at work: if I want to create an exaggerated bottle breaking, I might find or record not 1 but 10 suitable bottles, stack them and apply any other contextual sounds, use plugins to exaggerate certain frequency ranges that feel the most “bottle” and “breaking” to me, and automate their panning outward to make it feel like a bigger moment.

Here is a sound effect of one bottle breaking:

Listen to “One bottle breaking”

Here — all the stacked bottles breaking, without effects:
(Notice how it already sounds less like a straight-from-the-package sound effect.)

Listen to “Stacked bottles breaking, no effects”

Here — all the stacked bottles breaking, plus some context (swing, thump, and high tone added since the subject gets hit in the head):

Listen to “Stacked bottles breaking, plus some context”

And here — all of the stacked bottles breaking, plus context, with effects and panning:

Listen to “Stacked bottles breaking with everything”

Signposting

I use certain sounds as signposts to hint at where we are, what is happening, and how to feel about it. This is especially helpful in audio-only mediums, where the audience has no visual cues. I tend to use an identifiable signifier sound. Great places to insert signifiers are between words, at the end of a paragraph or section, or at the start of a scene. In music, a signifier sound can be more of a tonal shift — perhaps the start, end, or post of a music cue, a swell, or an audio sting with a particular tone.

This example, with dialogue stripped, starts a scene at a pool party: a splash suddenly transitions the listener from pure narration into full sound design and places them in the scene of the party. It also ends with a more surreal splash, used to move the listener to a different location.

Listen to “Pool splashes”

Realistic Sound Design

Though eccentric sonic ideas can be hard to create, it can be just as difficult to build “normal” or “realistic” sound design. To build realistic sound beds, I stratify sounds in layers.

Background

First, there is background sound that is simply everywhere. In the real world, we rarely hear actual silence, which is why silence is so powerful in digital mediums. To source background sound, you have to imagine the contours of your location and fill in all of the sensorial elements. This is easier in film, since you have a visual; in narrative audio, it helps to do this layer first so that you do the work of imagining and understanding the room. What do you see in the foreground? In the periphery? What materials make up the space? A shag rug, a tiled floor, a densely populated area — all of these will define how sound moves through the space. How big is it? Is it cold? Wet? Dirty? Might someone in this space hear wind brushing through tree branches, an air conditioner whirring, or (my favorite mixing engineer’s nightmare) the hum of a refrigerator?

Middle Distance

Middle distance spans everything from the immediate area of focus to the background. Here you’ll find dogs barking, bicycles whizzing by, people conversing, forks clinking on plates in a restaurant. I like to layer as many of these sounds as possible, placing them at different distances and using the same style of reverb in different amounts (or just using them as they were recorded). So, if there is a dog (read: joyful corgi) barking, I’ll pan the dog as he runs around the middle distance, because that’s likely what he’d be doing (he is playing joyfully, after all).

Close or Scene-Specific

Finally, there are close sounds (your own fork on your own plate) and scene-specific sounds (one character smacks another in the face). I like to add these sounds last; if I need to create Foley for them, working from the other layers gives me that chance. Plus, these are the sounds most likely to be changed by a writer or producer throughout the production process. For scene-specific sounds, mixing, timing, and pacing are almost more important than the sound itself, as these moments carry the most risk of sounding canned.
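
As a toy illustration of the three layers described above, here is a sketch in Python that simply sums background, middle-distance, and close layers at different gains (numpy and soundfile assumed; the file names and levels are hypothetical):

```python
import numpy as np
import soundfile as sf

def db_to_gain(db):
    """Convert a level in decibels to a linear gain factor."""
    return 10 ** (db / 20)

layers = [("room_tone.wav", -24),       # background: quiet but ever-present
          ("dog_barking.wav", -14),     # middle distance
          ("fork_on_plate.wav", -6)]    # close / scene-specific, added last

tracks, sr = [], None
for path, level_db in layers:
    audio, sr = sf.read(path)           # assumes mono files at a shared rate
    tracks.append(audio * db_to_gain(level_db))

bed = np.zeros(max(len(t) for t in tracks))
for t in tracks:
    bed[:len(t)] += t                   # closer layers simply sit louder in the bed

sf.write("scene_bed.wav", bed, sr)
```

In practice, each layer would also get its own reverb, panning, and EQ as described above; the point here is only the layered structure.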

Energy

These are sounds that are not necessarily identifiable things, but are added to any type of sound design, realistic or otherwise, in order to build and quell energy. I like to use swells for this, usually something I custom-make per project, but it can also be accomplished with mixing practices, music editing, and strategic sound editing, among other techniques. One example: slowly ramping up the mix/wetness on bussed effects to build surreality.
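
Here is a minimal sketch of that wetness ramp, assuming you already have a dry signal and a fully processed (“wet”) version of it as equal-length mono numpy arrays:

```python
import numpy as np

def wet_ramp(dry, wet, start_mix=0.0, end_mix=0.8):
    """Linearly automate the dry/wet balance across the clip.

    The mix crossfades from mostly dry toward mostly wet, so the
    processing creeps in gradually rather than switching on.
    """
    mix = np.linspace(start_mix, end_mix, len(dry))
    return (1 - mix) * dry + mix * wet
```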

Every Skill Informs Another

We use a ton of different skills to design sound (mixing, mastering, composing and producing music, scoring, sound editing, Foley, ADR, and so on). Since these aspects all work in tandem to create an emotional effect, they can be especially compelling when the lines between them are smudged. They also inform each other: mixing is often musical in practice, sound-effects editing can be rhythmic, and so forth. Putting these aspects in conversation with one another leads to different ways of supporting, contextualizing, and adding perspective to the narrative at hand. These skills give you tools for communicating your sonic imaginings to your fellow storytellers and to your audience. With so many possibilities for designing sound, there are infinite ways of bringing yourself to a project to enact a unique and memorable vision. All that is left is to try.

Haley Shaw

About Haley Shaw

Haley Shaw is a sound designer, music composer, and audio engineer at Gimlet Media in New York. She sound designed, mixed, and wrote the original score for The Habitat. She has created sound design and music for a number of Gimlet series, including Mogul and the inaugural season of Heavyweight, created original scoring for the podcast StartUp, and worked on the Peabody-winning series Uncivil. She has made custom music for brands including Reebok, Adobe, Audible, Prudential, and Squarespace. She also writes music for films; most recently, she scored the feature When We Grow Up (awarded "Best Feature" by Indy Film Fest). Her sound design has been called "sterling" by The Atlantic and "on the money" by The Guardian. You can find her at www.haleysounds.com.

Comments

  • Martin Haswell

    8.23.18

    One of the best, clearest, and most interesting features that I’ve seen (and heard), beautifully thought through and presented. I have been through the whole piece, and the audio examples, several times, and it has made me put much more thought into my own work.
