Using Music: Brendan Baker

February 25th, 2014 | by Brendan Baker | Series: Using Music

Editor’s Note: If you like sophisticated sound design with your stories, you’re living at a good moment. Digital tools and obsessive artists have converged. To reveal more secrets of this world, Transom presents another in our series on using music in radio stories. This one is by Brendan Baker of the strange and inventive podcast, Love + Radio— “a collage of non-narrated interview, electronic music, and sound effects mixed with creative production techniques. I think of our episodes both as ‘augmented’ radio stories and as music compositions that happen to contain a story.” Brendan delves deeply into the ‘chemical’ process of sound mixing, including a really deep breakdown of one of his audio sessions that will satisfy the serious students in the front row. Jay A

Scoring Love + Radio

Love + Radio is a pretty weird species of radio: a collage of non-narrated interview, electronic music, and sound effects mixed with creative production techniques. I think of our episodes both as “augmented” radio stories and as music compositions that happen to contain a story.

When I first stumbled into producing radio, I was struck by how chemical the process seemed. You take a surprising or insightful interview. You mix it with some music that conveys a certain mood or sense of energy. But if these ingredients resonate with each other — even in remarkably subtle ways — you’ve created something far more potent than the sum of its parts. I can’t say how this works, exactly. For me, scoring radio is often intuitive and based on trial and error. But I have a hunch about why it works.

People say storytelling is an act of co-creation between you and your audience. You describe. The audience interprets. And through that process of active interpretation, they turn your words into vivid scenes in their minds. The final experience is something you create together. Scoring works because it taps into that same process.

When we mix radio on a computer, it’s easy to see scoring (literally) as simply laying music waveforms beneath the “dialogue” waveforms of your piece. But of course your audience never sees these waveforms or the computer screen. They can only listen.

Influences and Inspirations

Like a lot of radiomakers interested in sound collage, I’ve been influenced by the band The Books. I love the way they compose music from disparate samples and voices, giving you glimpses of some underlying narrative. Each song is like a subconscious radio story. (This is probably why radio artist Gregory Whitehead calls them “remarkable psychic engineers” in his Transom essay.)

On The Books’ final album, there’s a moment toward the end of the song “Group Autogenics 1” where a voice sampled from a new-age meditation tape urges:

If possible, in our modern world / listen for your eyes in your ears…


Sure, it’s a silly cut-up, but it’s also great advice for scoring radio: when you hear an interview and start to see a series of images playing in your mind, it’s usually a good place to add music. That’s probably where the listener is seeing images, too.

I suspect that the power of scoring doesn’t come from matching music to the words in your story, but matching music to your listener’s translation of those words into images. In other words, you’re scoring for a film playing in someone else’s head. (Of course, you can only guess at what this film looks like, so… I suppose you’re scoring to your interpretation of their interpretation of your story. Whatever.) It’s also a different experience for every listener; however large your audience may be, on some level you’re always collaborating with an audience of one.


But even if you’re scoring for someone else’s film, you still have major editorial influence. With the right music or sound design, you can frame these images for the listener so that they resemble those in your own mind. In this way, your score is also a meta documentary; your impressions and reactions to the story are recorded in your musical choices.

Challenging Conventions


I usually keep the music playing along with my tape until just before the scene ends, or the action completes, or we transition into a new idea. If you want your listener to focus on a certain phrase or idea, take the music out.  If you want to give them a moment to reflect on an idea, play some music “in the clear” for a few seconds. This is the conventional wisdom, at least.


But one of the conventions Nick and I want to challenge is this idea that radio is a didactic medium — that producers should guide the listener through every turn of the story, and explain what it all means. I think this didacticism has spread to scoring, too, when the music or sound design is a bit too “on the nose.” I understand why this didacticism is considered best practice for broadcast radio. But as a podcast, we can ask for more patience from our listeners. They can pause and rewind.  They can also listen to the piece multiple times. This opens up new creative and editorial possibilities for us. We can take some artistic risks.


So when I score for Love + Radio, I’m trying to push toward something more intuitive and avant-garde, more like The Books. (L + R sort of takes the opposite approach to The Books, though: we start with the voices from our radio stories and then add disparate sounds to create a kind of music.) And what I see in my head when I work on Love + Radio “looks” something like an Errol Morris film layered over a green-screen music video from the ’80s. (Errol Morris is another reference point Nick and I keep coming back to.) Whatever that looks like, that’s pretty much how I want Love + Radio to sound.

An Example


The best way for me to explain how I score is to walk you through one of my edit sessions. We usually break our episodes down into individual chapters, and each chapter has its own Logic session. (I’ve been using Apple’s editing software, Logic Pro, for most of my L + R work, but I also use Pro Tools, Ableton Live, and Reaper for various things, and they all have their respective advantages.) While we don’t score every chapter in the episode, each chapter can become its own kind of “song,” and themes from these song/chapters sometimes recur and develop as the story unfolds. Here’s a screenshot from the opening chapter of Jack and Ellen:

Screenshot of "Jack and Ellen" editing session

Screenshot of “Jack and Ellen” editing session

Here’s the audio for this opening chapter, so you can follow with the screenshot.



At the top of the session, just below the timeline and ruler, you’ll see some colored word blocks outlining different sections within the chapter. These sections can function like paragraphs and also like verses and choruses. (A “verse” might be music that goes along with a story’s development, while “choruses” could be a way of reflecting on ideas or repeating a theme/motif.) Sometimes I use the same musical themes, or leitmotifs, for a specific character or idea. For instance, the navy blue regions labeled “blackmail” throughout the chapter all use similar music and correspond to moments where Ellen explains her get-rich-quick scheme.


Tracks 1 and 2 are the primary interview or “dialogue” tracks. All the tape comes from a single interviewee, Ellen, but I modified her voice with a pitch-shift plugin effect on the first track to create “Jack,” Ellen’s catfishing alter ego. We’re introduced to Jack first, but Ellen (in pink—apologies for the gendered colors) interrupts his interview until the moment where both voices speak the same line together. The sound design tells us that Jack and Ellen are the same person. From then on, Ellen takes over. Tracks 3-7 are also dialogue tracks, but with various audio plugins to mimic the sound of phone tape or to move the voices in binaural surround sound.
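To make those voice treatments concrete outside of a DAW, here is a minimal Python sketch of the two effects this paragraph mentions: a pitch shift to turn the “Ellen” tape into “Jack,” and a band-pass filter in the rough telephone range to fake phone tape. This is not the actual Logic plugin chain, and the file names and parameter values are invented for illustration.

    # Sketch only: approximates the two voice treatments described above.
    # File names and parameter values are illustrative, not from the episode.
    import librosa
    import soundfile as sf
    from scipy.signal import butter, lfilter

    y, sr = librosa.load("ellen_interview.wav", sr=None, mono=True)

    # "Jack": shift the voice down a few semitones (amount chosen arbitrarily here).
    jack = librosa.effects.pitch_shift(y, sr=sr, n_steps=-4)
    sf.write("jack_voice.wav", jack, sr)

    # "Phone tape": keep roughly the telephone band (about 300 Hz to 3.4 kHz).
    b, a = butter(4, [300, 3400], btype="bandpass", fs=sr)
    phone = lfilter(b, a, y)
    sf.write("ellen_phone.wav", phone, sr)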


Tracks 8-24 are all music and sound effects, and this is where things get a bit more complex. Rather than scoring with a given piece of music, as in a typical radio story, I create loops from moments in several different songs (tracks 9-11). I’ll pitch-shift and stretch these loops so they fit together in the same tempo and key signature. Then I assemble these loops in different combinations. This lets me mix and match them modularly, making new arrangements for key moments in the story. Tracks 12-19 are various synthesizers, bass lines, and other virtual instruments that flesh out the arrangement, and I compose these sounds directly into the session with a MIDI keyboard. Tracks 21-23 are drumbeats.
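Conceptually, fitting a sampled loop to the session comes down to two numbers: a time-stretch ratio to hit the session tempo and a pitch shift, in semitones, to land in the session key. A rough sketch of that idea, with the source loop’s tempo and the key offset invented for the example:

    # Sketch: bring a sampled loop to the session's tempo and key.
    # The source loop's tempo and the key offset are hypothetical values.
    import librosa
    import soundfile as sf

    SESSION_BPM = 134        # the chapter's tempo
    KEY_OFFSET = 2           # semitones from the loop's key to the session key (made up)

    loop, sr = librosa.load("sampled_loop.wav", sr=None)
    loop_bpm = 120.0         # assumed tempo of the source loop

    # Speed the loop up by 134/120 so its beat lands on the session grid...
    stretched = librosa.effects.time_stretch(loop, rate=SESSION_BPM / loop_bpm)
    # ...then pitch-shift it into the session's key.
    in_key = librosa.effects.pitch_shift(stretched, sr=sr, n_steps=KEY_OFFSET)

    sf.write("loop_134bpm_in_key.wav", in_key, sr)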


I’ll also set the tempo of my editing software to synchronize with the beat of the music (134 BPM in this chapter). This lets me compose to a metronome and snap music regions to the tempo grid. Once I have a rough draft of the music, I can also use the grid to help re-edit the dialogue. I pay attention to how the dialogue edit sits in relation to the beat of the music, and I’ll nudge words and phrases so certain syllables are emphasized when they fall on strong beats or gaps in the music. This creates a musicality in the dialogue. Our interviews “sing” along with the score.
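The arithmetic behind that grid is simple: at 134 BPM one beat lasts 60/134 ≈ 0.45 seconds, and nudging a word onto the grid means rounding its start time to the nearest grid line. A toy Python sketch of the snapping logic (the syllable timings are invented):

    # Toy sketch of snapping dialogue edits to a tempo grid.
    # The timestamps below are invented; in practice they come from the edit.
    BPM = 134
    BEAT = 60.0 / BPM        # one beat at 134 BPM, about 0.45 s
    GRID = BEAT / 2          # snap to eighth notes, say

    def snap(t, grid=GRID):
        """Return time t (seconds) nudged to the nearest grid line."""
        return round(t / grid) * grid

    syllable_onsets = [3.01, 3.94, 4.62]
    for before in syllable_onsets:
        after = snap(before)
        print(f"{before:.3f}s -> {after:.3f}s (nudge {after - before:+.3f}s)")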

Finally, tracks 25 and 26 are “bus” tracks—or two submixes of all the dialogue and music elements above. That way I can export this whole chapter as two discrete audio files, called “stems.” Here’s what the music stem sounds like by itself. I’d like to think you can make out the various turning points in the story even without the dialogue stem:

Then I’ll import these stem files into a final “assembly” session. Here is a screenshot of the assembly session from a different episode, our season opener, Fix:

"Fix" editing session

“Fix” editing session


Here you’ll see all the stems from each chapter in this episode stitched together. Each chapter has its own color. There are 17 chapters in this story, so to get an idea of the whole episode, imagine 17 versions of that first screenshot I shared, lined up side by side. It’s ridiculously time-consuming, and that’s part of why it takes us so long to produce the show. But I hope you can hear its value in our stories.
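For anyone curious what the stem-and-assembly mechanics boil down to outside a DAW, here is a rough Python sketch: each chapter’s tracks are summed into a dialogue stem and a music stem, and the chapter stems are then laid end to end, kept aligned at the chapter boundaries. It ignores the plugin processing, automation, and re-editing that actually happens at this stage, it assumes mono WAV files at a shared sample rate, and every file name is a placeholder.

    # Rough sketch of the stem/assembly workflow, stripped of all DAW processing.
    # Assumes mono WAV files at a shared sample rate; file names are placeholders.
    import numpy as np
    import soundfile as sf

    def bounce_stem(track_paths):
        """Sum a group of aligned tracks into one stem (what a bus bounce amounts to)."""
        data = [sf.read(p) for p in track_paths]      # (samples, samplerate) pairs
        sr = data[0][1]
        stem = np.zeros(max(len(d[0]) for d in data))
        for samples, rate in data:
            assert rate == sr, "tracks in a stem should share a sample rate"
            stem[: len(samples)] += samples
        peak = np.max(np.abs(stem))
        return (stem / peak if peak > 1.0 else stem), sr   # avoid clipping in the sum

    def assemble(chapter_stems):
        """Lay chapter stems end to end, padding so dialogue and music stay aligned."""
        dialogue_parts, music_parts = [], []
        for d, m in chapter_stems:
            n = max(len(d), len(m))
            dialogue_parts.append(np.pad(d, (0, n - len(d))))
            music_parts.append(np.pad(m, (0, n - len(m))))
        return np.concatenate(dialogue_parts), np.concatenate(music_parts)

    # One hypothetical chapter: bounce two stems, then assemble the episode.
    dialogue_stem, sr = bounce_stem(["ch01_jack.wav", "ch01_ellen.wav"])
    music_stem, _ = bounce_stem(["ch01_loop.wav", "ch01_synth.wav"])
    episode_dialogue, episode_music = assemble([(dialogue_stem, music_stem)])

    sf.write("episode_dialogue.wav", episode_dialogue, sr)
    sf.write("episode_music.wav", episode_music, sr)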


This kind of production is closer to what you might find in a music album and isn’t practical or even appropriate for every radio story. But I like to think that we’re all still in the wild west of radiomaking, and all production tricks are fair game — this is just how I’ve been creating L + R’s own brand of radio weirdness. So whether you produce sound art or news features, I think the most important thing about scoring is to give yourself some room to experiment. Listen for your eyes in your ears.

About Brendan Baker

Brendan Baker is an independent radio producer, editor, and audio artist living in Brooklyn, NY. He experiments with the craft of public media using sound and music as tools for creative storytelling. As a part of Love + Radio, he received the Third Coast Gold Award for Best Documentary in 2011, and an Honorable Mention for Best Documentary in 2013.


12 Comments on “Using Music: Brendan Baker”

  • Very cool — thanks for bringing us inside your remarkable sonic+story lab, and for daring to get down into the raw, small details. Sometimes engaging with the microscopic reveals the much larger patterns. Like now. Thanks!

  • Hey Brendan!

    This is awesome. Thanks for going into such great detail here. I had no idea that logic had a binaural plug-in. Is everything you use built-in or do you have favorite 3rd-party stuff? Also could you talk a little more about how you go about selecting music and drum loops?

    Also what kind of patch do you have going on in that synth pad? Do you put those together from scratch or start working off of presets?

    Thanks again!
    Mickey

    • Hey! Thanks for the questions and apologies it’s taken me so long to respond.

      Yeah, Logic’s built-in binaural plugin is awesome! After I wrote this piece, however, Nick and I began to produce everything in Reaper, start to finish. (The reasons why we switched could be a whole other discussion, but the short version is Reaper is incredibly flexible and customizable once you get under the hood with it. I still love Logic, but Reaper makes it easier for Nick and me to collaborate remotely.) I’m still looking for a good Reaper-friendly alternative to Logic’s built-in binaural panner, though. Gimme a holler if you know about any good alternatives!

      I use a mixture of built-in and third party plugins. Logic has some very good built-in effect plugins (at least compared to Pro Tools) with useful presets. (Reaper has a ton of free plugins, but the graphic interfaces are so bare-bones that I don’t find them as user-friendly. I don’t think I would have been able to figure them out if I hadn’t learned on other programs.) I’m reluctant to recommend particular 3rd party plugins, but I like how Izotope’s software helps visualize what’s happening to the sound.

      Honestly I’m not sure how this particular synth patch started out because I rendered it as audio and then kept messing around with it. But Logic also has a ton of great built-in software synths and presets, and that’s probably how it started out. Lately I haven’t had enough time to work completely from scratch, so I usually start from presets and then tweak them or add more FX to make them feel more like my own.

      It’s hard to talk about selecting music, samples, or drum loops because that’s something I do mostly intuitively and frankly it feels like a different process every time. Oftentimes Nick or one of our collaborators (e.g. Mooj Zadie on Jack and Ellen) will include some suggested music tracks, and I’ll either use them outright or chop them up and remix them into something new. If there’s any logic behind it, I’d say it maybe goes back to thinking about music as a proxy for some kind of mental “image.” I’ll usually make a playlist of different pieces of music that seem to conjure up similar images to what I’m “seeing” in the story edit (in terms of pace, energy, emotion, sonic texture, etc.). I test a lot of different musical ideas against the edit–sometimes they totally work, oftentimes they don’t. But even having a piece of “scratch” music can help lead to new, serendipitous ideas. As with the synths, I usually try to edit, process, or mash-up different pieces of music in order to tailor the score to the story in an original way. Sometimes I spend a couple days building out a section and wind up deconstructing or scrapping a lot of it at the last minute. It’s definitely not efficient, but (right now at least) this prototype/draft process just feels like part of the work I need to do in order to get to the final mix.

  • Super interesting!

    So if I understand all this correctly, Brendan makes an export of all the chapters using 1.dialog, 2.music, 3.fx exports (stems). What I see in the second screenshot is blocks which are recut and sometimes repeated. So I presume this is another editing process and not just a matter of lining up all the chapters and mixing them.

    Not sure though why they need that extra step with the stems. Is that because maybe in the first step, the scoring part, it’s mostly done by Brendan? And the next step is when Nick and Brendan work on the piece?

    In general: who cuts the dialog? And how do Nick and Brendan work together? They sit together in Brendan’s studio? They have a chat online?

    Must say I love Love + Radio. For me it’s super inspirational stuff because it changes the rules of storytelling with words for me. Lucky for me, I live in The Netherlands, so I only use Dutch words. So nobody will understand a flying F when I steal all those tricks from them guys…

    :p

  • Forgot one thing: how much time does it take for you and Nick to make an episode of L+R?

    • And to Marco:

      Yes, exactly. Part of the reason I’ve exported the chapters like that is so they can be easily re-edited and adjusted at the final “assembly” stage. So in the case of “Fix,” you can see how we cut up chapters C (in blue) and D (in green) to alternate between the “mafia” story and the “love” story. So yeah, we export the stems for mix, editorial, and collaborative reasons.

      Since we’ve started to work exclusively in Reaper, I’ve actually tried to do the entire show in a single Reaper session–but with mixed results. (Here’s an image: https://pbs.twimg.com/media/BbOY5mnCQAE0xki.jpg:large) On the one hand, I like having all the tape, music and SFX regions “out on the same page,” and being able to easily move/repeat ideas across the whole story. On the other hand, having one session per chapter encourages me to focus more deeply on individual chapters. Also, Reaper makes it really easy to import/copy things between two different chapter sessions–not just edited tape but entire tracks, grouped folders of multiple tracks, plugin settings, almost anything, really. I love that.

      Our process changes a little bit for every piece. If we’re working with a collaborator, they’ll usually do the first few passes of dialogue edits with feedback from Nick and me, but at some point we’ll take over. Here again, Reaper lets us pass edit sessions back and forth pretty easily. On the past few episodes Nick has done more of the big picture/structural edits and I do more fine edits along with the sound design, but we both get to have our hands on the tape and talk through our ideas throughout the process.

      Towards the end of production, the scoring/sound design takes over and it’s usually easier for me to make all the changes on my end. But Nick and I have a system for feeding the mix through video chat, which allows him to see my screen, make tweaks, and do a final “listen through” together before releasing the episode. Here’s a pic of our final mix on Jack and Ellen–you can see the Logic “stem” session, a script in Google docs, and a Google hangout session with Nick and Mooj: https://pbs.twimg.com/media/BDj84FQCEAAHiad.jpg:large

      Unfortunately I don’t have a great answer for how long it takes to produce an episode. Jack and Ellen was in production off and on for about 10 months, and I think the bulk of the scoring and sound design took about a month. But that’s not a typical episode, either. I was talking with Kaitlin Prest of Audio Smut (who has a similar approach to scoring and production) and we came to the conclusion that–in the perfect world–we’d have at least a day for every two minutes of scoring. We can dream, right?

      • Thanks a lot Brendan. The way you work with Nick online with Google hangout and docs is really interesting.

        The binaural effect sounds interesting too. I’m not a Logic user, I only use Ableton Live (+ Max for Live) and Propellerhead Reason. I’m not sure what exactly it is doing but I presume it puts an audio source in an exact position in the stereo image (front and back). I hear you pan a lot with the voices which sound different than “standard” pans.

        Maybe you can build some sort of binaural effect in Reaper as well. I guess the binaural effect in Logic uses EQ and delay to create that 3D sound. EQ is a phase shift (using a digital delay, see http://ethanwiner.com/EQPhase.html). In Ableton it’s rather simple to build your own binaural panning effect. See for example http://seanny.net/renzutools/ That patch uses the trick to use low-pass filtering on panning. And (optional) a very short extra delay. I guess with EQ, delay and panning you can do all these kinds of binaural effects but maybe the Logic plugin makes things very easy for the end user. Next time I’m able to use Logic I might try to figure out what it’s doing exactly.

        I will try some of these panning ideas myself. I mostly use a Sony D-50 for interviews using wide stereo-panning on the mics while recording (so it can capture 2 voices at the same time without moving the recorder all the time) and later on I narrow the stereo field which makes it more mono sounding. Maybe binaural tricks (using low-pass on the signals plus panning) might be even better. Thanks for bringing that up!

  • Jeff Emtman says:

    Two thoughts and a question:

    While I’m not aware of anything as simple as Logic’s binaural panner in Reaper, it is very easy to set up Moore-style panning.

    To create a natural sounding right pan:
    1. Create an instance of ReaDelay
    2. Add two taps.
    3. Hard-pan one of the taps right, leave the delay at 0ms
    4. Hard-pan the other one left, give it a delay between 1 and 9ms depending on your preference.
    5. Make sure feedback is set to -inf dB for both
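    The same idea can be sketched in a few lines of Python outside any DAW (this is just an illustration of the delay-based pan in the steps above, not ReaDelay itself; the file name and delay value are arbitrary, and it assumes a mono source):

        # Delay-based ("precedence effect") pan: keep the near channel dry and
        # delay the far channel a few milliseconds. Assumes a mono input file.
        import numpy as np
        import soundfile as sf

        y, sr = sf.read("voice_mono.wav")
        delay_ms = 5.0
        delay_samples = int(sr * delay_ms / 1000.0)

        right = np.concatenate([y, np.zeros(delay_samples)])   # dry, "near" channel
        left = np.concatenate([np.zeros(delay_samples), y])    # delayed, "far" channel

        sf.write("voice_panned_right.wav", np.column_stack([left, right]), sr)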

    Second, I certainly agree with you about the lack of plugin UI being an issue in Reaper. I also wish that they would remove or combine the mediocre plugins. However, there are some gems in there.

    I’ve found that LOSER/Stereo Field is a surprisingly helpful native plugin. Further, if you go ahead and download the free MDA plugin pack, there are some good effects there, too, including an auto-oscillating round panner. Last, Voxengo has a very simple and effective Mid/Side encoder/decoder called MSED that is pretty wonderful.

    Okay, here’s my question(s). When you’re mixing the show, how are you metering? Are you aiming for -6, or something else? Do you have thoughts on K-style metering? How much dynamic range do you aim for, and in what listening environments do you test the shows before they’re released?

    Thanks Brendan.

  • Mary Jane says:

    Thank you.
