Up to this point, everything we’ve discussed has been about music in the abstract. The pure “music” of music.
We defined the four previous dimensions as “the dimensions of the music”. So why are we adding a fifth?
In reality we generally experience music through some kind of real-world rendition. Although it is possible to look at a piece of sheet music and use Audiation to hear in your mind how it would sound, the music you’ll generally be practising Active Listening with will be either:
- A live performance, or
- A recording of a live or studio performance which we are playing back.
In these cases there is a whole other realm of listening to consider, relating to the real-world characteristics of the sounds being produced.
This can loosely be referred to as the “Audio” side of music. We’re talking about the equipment, the speakers, the room environment, the electronic processing that might be happening. All the factors that affect the way the sound is produced or perceived, beyond the fundamental musical intentions we’ve discussed so far.
You could take the same piece of music, played with the same instruments, pitches, rhythms, form, and so on, but play it:
- Live in a huge, empty concert hall.
- Live in a cramped basement bar full of people.
- Live in a professional recording studio.
- From a CD on a nice home stereo system.
- From a low-quality MP3 on cheap in-ear earphones.
Can you imagine how different the music might sound in each case? Now imagine being able to hear, understand and explain in great detail how and why each of these scenarios sounds different. That’s what the Audio side of Active Listening can empower you to do.
The practice of Audio Active Listening is no different than that of the “musical” side: we are listening with a question in mind. In the case of Audio, these questions can roughly be divided into two groups:
- Questions about audio frequencies
- Questions about audio effects
Although the process is the same, these Audio questions wake up quite a different part of your brain and ears, so even a little practice in these areas can open up a whole new level of music appreciation for you.
As you heard about in the Introduction chapter, audio Ear Training and what I called “critical listening” back then (but would call Active Listening these days) was the gateway to all my exploration of musicality. It showed me how much my own awareness, appreciation and enjoyment could be increased, with even a small amount of training. Even today, I would credit the Audio Ear Training and Active Listening I did back then as a huge part of what lets me hear and play the way I do.
Now I realise that for a lot of musicians, many of these audio concepts will be totally new. Don’t be put off. You don’t need to understand these things fully to start listening for them and building up your awareness and appreciation of what’s going on in the Audio side of the music you hear. And I think you’ll thoroughly enjoy poking your head through this door, just like I did.
We’ll tackle each of the two types of Audio question in turn: audio frequencies, and then audio effects.
Audio Frequencies
We aren’t going to go deep into the science of sound here—and fortunately we don’t need to! Here are the two things you need to understand to make sense of audio frequencies:
1. Every sound we hear exists as a pressure wave in the air which reaches our ear.
The air molecules become more densely packed and less densely packed over time, in a wave that oscillates back and forth. Our ear detects that changing pressure, and that’s what our brain interprets as sound.
This change in pressure can happen at different speeds. We call this the frequency of the wave.
The pressure could be changing very quickly, which we call a “high frequency”, or very slowly, which we call a “low frequency”. We measure frequency in a unit called Hertz (abbreviated “Hz”). A low frequency sound might be at “20 Hertz” while a high frequency sound could be as high as 20,000 Hertz. One thousand Hertz is called one “kilohertz”, so you might also hear that referred to as “twenty kilohertz” or see it written as 20kHz. This is the human hearing range for frequency: from 20 Hertz right up to one thousand times that, 20,000 Hertz.
2. Every sound actually consists of a combination of multiple frequencies.
It is possible to generate a single-frequency sound (called a “sinusoid” or “sine wave”), but in nature and with musical instruments there will always be a number of frequencies present. The exact mix of frequencies determines both the perceived pitch of the sound and a large part of the timbre of the instrument.
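If you’re comfortable with a little code, you can generate a sine wave and hear one for yourself. Here’s a minimal sketch in Python (using numpy and the built-in wave module); the file name and settings are just illustrative choices:

```python
# Generate one second of a pure 440 Hz sine wave and save it as a WAV file.
# A sine wave contains exactly one frequency, so this is what a sound with
# no harmonics (and almost no timbre) sounds like.
import wave
import numpy as np

SAMPLE_RATE = 44100              # samples per second (CD quality)
FREQUENCY_HZ = 440.0             # well inside the 20 Hz - 20,000 Hz hearing range
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * FREQUENCY_HZ * t)    # the pressure wave over time

with wave.open("sine_440hz.wav", "wb") as f:         # illustrative file name
    f.setnchannels(1)                                # mono
    f.setsampwidth(2)                                # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes((tone * 32767).astype(np.int16).tobytes())
```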
For example, the note an orchestra tunes to, called “A440”, is the note A, in the octave (or “register”) where that A is at 440 Hertz. Every instrument in the orchestra will produce a sound where the most prominent frequency is 440 Hertz and that’s the pitch we will perceive the note as.
In reality though, every instrument will also be producing a number of other frequencies, most notably harmonics, which are whole-number multiples of that “fundamental” frequency. The balance of these other frequencies is a big part of what makes each instrument in the orchestra sound different, i.e. have a different timbre.
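To make that concrete: for A440, the harmonics sit at 880 Hz, 1320 Hz, 1760 Hz and so on. Here’s a small Python sketch along the same lines, where the level balance is an arbitrary choice for illustration. Changing those levels changes the timbre, while the perceived pitch stays at A440:

```python
# Mix a 440 Hz fundamental with its first few harmonics (whole-number
# multiples: 880 Hz, 1320 Hz, 1760 Hz). The relative levels below are
# made up for illustration: different balances give different timbres,
# but the same perceived pitch.
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE        # one second of time points

FUNDAMENTAL_HZ = 440.0
levels = [1.0, 0.5, 0.25, 0.125]                # fundamental + three harmonics

tone = sum(level * np.sin(2 * np.pi * FUNDAMENTAL_HZ * (n + 1) * t)
           for n, level in enumerate(levels))
tone /= np.max(np.abs(tone))                    # normalise to avoid clipping
```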
So clearly pitch and frequency are related: both describe how high or low a sound is. We can think of “pitch” as meaning “the single measure of how high or low we perceive a note to be” while “frequency” is all the rich detail which both determines that single perceived pitch and has a great influence on timbre.
So why would we be interested in listening for frequency, if pitch already tells us about the highness or lowness of a note?
Well, there are some highly practical reasons to develop your ear to appreciate frequencies. For example, live sound engineers and studio engineers learn to recognise the balance of frequencies in great detail, which allows them to adjust the dials on their mixing desk to improve the overall blend of the sound, a process called “equalisation” or “EQ”. This is a crucial part of what lets a musical group produce a clear and coherent sound, whether performing live through a sound system or on a recording.
However, for our purposes as active listeners, the major benefit is that it opens up a completely different window for us to listen to the same music. We learn to hear what’s happening in the various frequency ranges of music.
This is a very different question to that of pitch. With pitch we might ask “Which instruments are playing high notes right now?” but thinking in terms of frequency we would ask “Which instruments have a presence in the high frequency range?”. And these two questions would produce different answers!
For example, it might only be the singer who is actually producing high notes at a given moment. But up in that high frequency range you realise you’re also hearing the drummer’s hi-hat cymbal, some of the echo-y reverb of the guitar player’s riff from a moment before, and even the top part of the timbre of the bass guitar, even though the notes it’s playing are way down low.
How to Start Listening for Audio Frequencies
Learning to recognise different frequency ranges in detail is a long process of Ear Training. Studio engineers begin by learning to divide sound into 10 different frequency ranges (often called “frequency bands”) and then might extend to roughly 30 bands, which means each band covers only about a third of an octave (just four semitones): quite precise!
However you certainly don’t need to go that far to get the benefit of audio active listening, and tuning your ear in to this way of listening is actually very fast.
A great starting point is to develop your internal “3-Band EQ”: just dividing sound into bass, mid-range, and high frequencies. You’ve probably seen stereo equipment that provides exactly these three dials. In some cases there are just two dials, “bass” and “treble”, but this normally still provides the same three-band control (since cranking both the bass and treble right down will leave you with just the “mid” left over).
This is actually the best way to start tuning your ear in: play around with EQ settings on your music device.
Listen to the same section of the same track with the dials set to extremes. Adjust the dials for “all bass”, “all mid”, “all treble”, and listen to the effect on what you hear.
Which instruments do you hear in each band? How is their timbre affected?
This is an interesting exercise in itself, but here’s what’s really cool: after playing around like this on a variety of tracks, and paying attention to how the sound is affected by tweaking those dials, you will begin to hear these separate bands even when listening to the full, normal sound. That is, you can ask your ears “What is happening in the bass range right now?” and mentally it will be as if you had just tweaked those dials to isolate the bass. Your ear will zone in on just that frequency range and pay attention to what’s going on there.
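If you’re curious what those bass, mid and treble dials are doing under the hood, here’s a rough Python sketch using scipy. The crossover points of 250 Hz and 4 kHz are a common but arbitrary choice; real EQs vary:

```python
# Split a signal into rough "bass", "mid" and "treble" bands, like the
# three dials on a stereo. The crossover points are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100
LOW_CROSSOVER_HZ = 250        # below this: "bass"
HIGH_CROSSOVER_HZ = 4000      # above this: "treble"

def three_band_split(audio, sr=SAMPLE_RATE):
    """Return (bass, mid, treble) versions of a mono signal."""
    low = butter(4, LOW_CROSSOVER_HZ, btype="lowpass", fs=sr, output="sos")
    mid = butter(4, [LOW_CROSSOVER_HZ, HIGH_CROSSOVER_HZ],
                 btype="bandpass", fs=sr, output="sos")
    high = butter(4, HIGH_CROSSOVER_HZ, btype="highpass", fs=sr, output="sos")
    return sosfilt(low, audio), sosfilt(mid, audio), sosfilt(high, audio)

# "All bass" on the stereo is (roughly) just the first of these three.
bass, mids, treble = three_band_split(np.random.randn(SAMPLE_RATE))  # noise demo
```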
Naturally this skill is particularly useful if you’re involved in live sound, recording or mixing music. But it also gives you a deeper awareness and appreciation of everything that’s happening in those processes to produce the nice (or not-so-nice!) sound you hear at live gigs or on a recording. It’s a whole new window onto all the music you hear.
Audio Effects
The other major area of audio Active Listening is called audio effects. This encapsulates all the other things which can affect a sound, apart from its component frequencies.
It is important to know that this idea of “effects” is just a helpful abstraction. In literal terms, a sound is defined completely by the mix of frequencies it contains over time. There is nothing “more” to it than that.
So any audio “effect” we discuss will, in some way or another, be some kind of modification of that sound’s frequencies. But for the sake of simplicity it’s useful to think about a sound’s “frequencies” as being all about pitch and EQ (as presented in the last section), and group the more complicated frequency behaviours under this idea of “effects”.
You will already be familiar with many common audio effects by name or by sound.
One prominent example would be “reverb”, meaning how much a sound echoes in an environment. We all know from our natural real-world experiences that singing in a tiny empty room would sound quite different from singing on a stage in a grand concert hall. Or that we can tell just by listening to a recording whether the microphone was right next to the person’s mouth or across the room from them, even if the overall volume is the same.
We’ll use reverb as our example here and then cover a few other common audio effects below to help you start listening for them.
The description of reverb above should help you see right away that audio effects occur naturally, even though the term is most commonly associated with artificial electronic processes that are applied to sound. So an audio effect can be a natural phenomenon, or it can be an artificial process applied to a sound (live or on a recording).
In the case of reverb, for example, there is a staggering range of guitar pedals, software plugins and other electronic devices to artificially add various kinds of reverb to an instrument or overall musical mix. This can be done to simulate a different environment (e.g. so the listener believes the notes were produced in a vast concert hall versus a cramped recording studio) or for artistic effect (e.g. just a small amount of reverb applied so as to “thicken” the sound and help the mix blend nicely into a coherent whole).
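As a taste of how simple the core idea can be, here’s a toy Python reverb: convolve the signal with a burst of exponentially-decaying noise standing in for a room’s reflections. Real reverbs are far more sophisticated, and the decay time here is an arbitrary illustration:

```python
# A toy reverb: convolve the signal with exponentially-decaying noise,
# a crude stand-in for the thousands of reflections in a real room.
# A longer decay sounds like a bigger space. All values are illustrative.
import numpy as np

def toy_reverb(audio, sr=44100, decay_s=1.5, wet=0.3):
    t = np.arange(int(sr * decay_s)) / sr
    impulse = np.random.randn(len(t)) * np.exp(-5 * t / decay_s)
    tail = np.convolve(audio, impulse)[: len(audio)]
    tail /= np.max(np.abs(tail)) + 1e-9          # normalise the reverb tail
    return (1 - wet) * audio + wet * tail        # blend dry and wet signals
```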
So what is the benefit of Active Listening for audio effects?
Well, the first thing to say is that there is a strong connection between audio effects and Timbre. So all the benefits of appreciating timbre already discussed apply here too.
One way to think about it is that “timbre” defines how an instrument sounds, and “audio effects” describe how an instrument’s sound is modified from its default, natural timbre. The timbre of an electric guitar, for example, is (depending on where you draw the line) a very simple and thin-sounding one. If you have heard an electric guitar string played without the guitar being plugged into an amplifier, or indeed plugged into an amplifier with no effects applied, you will know: it sounds like a simple metal wire being twanged! Once audio effects are introduced, however, the sound of an electric guitar can become wildly different and complex. It can sound big, heavy and angry; it can sound pure, sweet and elegant; it can sound remarkably like a human voice; it can sound electronic and robotic. Almost anything is possible through effects!
These days an instrument can even become simply a controller for completely synthesised sounds, where the resulting sound is completely independent of the natural timbre of the instrument. So in a way we’re talking about a spectrum, with natural timbre on one end, completely synthetic timbre on the other end, and in between there is a realm of audio effects where the natural timbre is adjusted more and more in artificial ways.
As well as the general “Timbre understanding” that Active Listening for audio effects provides, there is a practical angle: understanding audio effects lets you know what’s possible. This can be useful for your own performing and recording, but it also makes your mental model of the music you hear more vivid.
For example, when you listen to certain recordings you probably instinctively have a feel for the environment the musicians were playing in. An intimate recording of a singer-songwriter can make you feel like you’re right there with them, alone in a small room. A live recording of a packed stadium rock show can make you feel like you’re there in the crowd, surrounded by thousands of other fans, experiencing it in an exciting way.
This mental image of the music you are hearing, and the powerful visceral impact it can have on your experience and enjoyment, gains richness and sophistication as you become more and more aware of the audio effects involved.
That’s not to say that you would literally be thinking through “Oh, I can hear a long-delay reverb therefore I must be in a stadium, and there is some distortion on the guitar therefore I know it’s an electric guitar plugged into an amp” etc.! But once you wake up your ear to all those factors you will instinctively and automatically conjure up more accurate and more powerful mental experiences as you listen to music.
We’ve mentioned reverb as one example. Let’s now look at a few other audio effects you might like to explore. Keep in mind that each audio effect can be applied to the overall sound, to individual instruments, or both.
Chorus
If we think purely in musical terms, there is no difference between a solo voice singing a melody and a choir of 20 people singing that same melody in unison. It might literally be a single word on the sheet music that indicates which of the two it should be! But of course the two sound completely different to us. Why is that?
The answer is that no two humans can perform the same music exactly the same, simultaneously. Even the same human in a recording studio trying to repeat exactly the same thing will end up producing something slightly different.
The obvious differences will be in the Rhythm. Notes will start and end at slightly different times, even if they are all played with the “correct” rhythms. And the Timbre may be different, especially in the case of human voices, where everyone’s voice is unique. It’s possible that the Pitch will differ too, especially with singing, even if they are technically playing or singing “the right notes”.
All these minute differences combine to create an effect called “chorus”. This happens in the real world when multiple musicians or singers perform together in unison, so you can think of it as “the choir effect”. This is where it gets its name from, in the sense of a choir being called a “chorus” (not from the “verse and chorus” sense of the word).
In its simplest synthetic form, a chorus effect will create multiple copies of the same sound, each starting at a slightly different delay from the original. This creates a slightly artificial-sounding effect, so it must be used in moderation (or taken to its extreme for intentional artistic effect). It adds a really distinctive colouring to the sound. It’s hard to put into words, but some people describe it as turning a single colour into a rainbow.
More sophisticated artificial choruses will vary the delay of each copy over time or adjust the EQ of each to better simulate actually having multiple performances.
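For the curious, here’s a bare-bones sketch of that idea in Python, assuming audio is a mono numpy array. The delay times, wobble rates and voice count are illustrative values, not canonical ones:

```python
# A bare-bones chorus: mix in delayed copies of the signal, with each
# copy's delay drifting slowly over time, mimicking performers who never
# line up exactly. All the numbers here are illustrative.
import numpy as np

def simple_chorus(audio, sr=44100, voices=3, base_delay_ms=20.0, depth_ms=5.0):
    n = len(audio)
    t = np.arange(n) / sr
    out = audio.astype(float)
    for v in range(1, voices + 1):
        # Each voice's delay wobbles at a slightly different rate
        # (0.3 Hz, 0.6 Hz, ...), so the copies never stay in lockstep.
        delay_samples = (base_delay_ms +
                         depth_ms * np.sin(2 * np.pi * 0.3 * v * t)) / 1000.0 * sr
        src = np.clip(np.arange(n) - delay_samples, 0, n - 1)
        out += 0.5 * np.interp(src, np.arange(n), audio)   # add the delayed copy
    return out / (1 + 0.5 * voices)                        # rough normalisation

# Demo: apply the chorus to one second of a plain 440 Hz tone.
wet = simple_chorus(np.sin(2 * np.pi * 440 * np.arange(44100) / 44100))
```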
Chorus effects can also be created artificially but manually, by recording multiple takes of the same section and layering them on top of each other in the studio. This is done a lot in rap music, where a line or just a few key words will be recorded multiple times and laid on top of each other to emphasise that line or word, as if multiple versions of the same rapper are saying it together. This has an interesting artistic effect on us as listeners: we perceive multiple voices, but somehow we still understand that it is just one person communicating with us.
Distortion
Alongside reverb, distortion is probably the most commonly-used and most familiar audio effect in the modern age. The evolution of the electric guitar’s role in music would be very different without its close companion, distortion effects!
As you might guess from the name, distortion originates with an unwanted effect: when we crank the volume too high on audio equipment we overload the amplifier or speakers, and the sound gets distorted from what it’s meant to be. It begins to sound “noisy” or “crunchy”. When distortion happens to the whole of a musical track featuring multiple instruments, it sounds pretty terrible.
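In signal terms, that overloading is called “clipping”: the peaks of the sound wave get chopped off or squashed because the equipment can’t go any louder. Here’s a minimal Python sketch of the two classic flavours, with an arbitrary drive value for illustration:

```python
# The two classic flavours of clipping, applied to a numpy signal in the
# range -1..1. Hard clipping chops the peaks flat (harsh, "crunchy");
# soft clipping squashes them smoothly (warmer, more "tube-like").
import numpy as np

def hard_clip(audio, drive=5.0):
    return np.clip(audio * drive, -1.0, 1.0)

def soft_clip(audio, drive=5.0):
    return np.tanh(audio * drive)

# Demo: distort one second of a plain 440 Hz tone.
crunchy = hard_clip(np.sin(2 * np.pi * 440 * np.arange(44100) / 44100))
```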
Apart from this overall “crunchiness”, another way distortion manifests by accident is with feedback. When a microphone or an instrument’s pickup is placed close to its own speaker, the sound goes in a loop from mic to speaker, round and round, gathering volume each time and modifying its frequency mix, resulting in a loud high-pitched whine. You’ve no doubt heard this once or twice if you’ve ever been to a live rock gig or heard a sound-check before a performance.
However, used in certain ways and for only particular instruments in the mix, distortion becomes a powerful and flexible audio effect. Even feedback can be used intentionally, as in the case of rock guitarists intentionally approaching their amp on stage to produce feedback in a controlled way.
Applied intentionally, distortion can produce a surprising range of timbres. Sticking with the example of electric guitar (though distortion can be applied to any instrument), distortion can be created via the amplifier and/or dedicated effects pedals, and can produce:
- A really beautiful, sweet sound, such as Carlos Santana’s signature guitar tone on “Smooth” or “Maria Maria”.
- A noisier tone, such as the classic opening riffs to Nirvana’s “Smells Like Teen Spirit” or Deep Purple’s “Smoke on the Water”.
- A harsh, heavy, crunchy tone, such as Black Sabbath’s “Iron Man”.
Auto-Tune
A prominent audio effect in recent years has been “auto-tune”. This was originally created as a tool to help studio engineers neaten up the pitching of notes on recorded tracks. By nudging pitches closer to the “correct” pitch of each note, slightly sloppy performances could be transformed into clean, accurate renditions.
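At its heart, that “nudging” is simply snapping a sung frequency to the nearest note of the scale. Here’s a Python sketch of that core calculation; real pitch correction wraps far more sophisticated detection and resynthesis around it:

```python
# Snap a frequency to the nearest equal-tempered note, relative to A440.
import math

A4_HZ = 440.0    # reference tuning

def snap_to_nearest_note(freq_hz):
    """Return the note frequency nearest to freq_hz."""
    semitones_from_a4 = 12 * math.log2(freq_hz / A4_HZ)
    return A4_HZ * 2 ** (round(semitones_from_a4) / 12)

print(snap_to_nearest_note(452.0))   # slightly sharp singing -> 440.0 (an A)
```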
However, like many audio effects, it emerged that taking the effect to its extreme could have artistic potential, and so auto-tune became an audio effect used to dramatically change the perceived timbre of human voices, even to the extent of being used for comedic effect!
This is a good example of a general principle: when used in a tasteful way, audio effects are normally intended to be unnoticeable to the listener. They are used to enhance and adjust the sound, or to create new timbres which our ears happily accept as an instrument sound.
Auto-tune is a rare case where the average music listener is often consciously aware that a particular audio effect has been used. However, you as an Active Listener will soon have this awareness of all audio effects being applied—not just those being taken to an extreme!
Questions in Mind
Here are some questions you can use to explore Audio Frequencies and Effects.
Keep in mind that almost all questions can be asked for the whole piece of music overall, as well as for a particular instrument, a particular section, etc.
- What’s the overall frequency presence of this track e.g. does it sound broad and bassy or light and airy?
- What can I hear is present in the low-, mid-, and high-frequency bands?
- How does the frequency presence of each instrument combine to make a balanced overall sound? Are there any instruments which would normally occupy the same range and might have been adjusted with EQ to fit together better? e.g. sometimes a bass guitar will have its mids and highs reduced to keep it in its own space at the bottom of the mix, other times it might be trimmed to just the highs, to keep the overall track sounding light and airy.
- Can I discover anything new about how the music is put together by listening to just one particular frequency band?
- What audio effects can I hear? What does the overall environment sound like? e.g. does it sound like a pristine studio recording or a live concert hall performance.
- Is there a noticeable reverb? If so, does it sound like a small or large space?
- Are there noticeable effects on any particular instruments? e.g. distortion on a guitar
- Have any instruments’ timbres been transformed into something noticeably unnatural?
- Why have these effects been used? Is it to a subtle degree to create a certain musical atmosphere or is it to a greater extent to produce particular noticeably-different sounds?
- What influence does the audio frequency balance and the use of audio effects (if any) have on the artistic impact of this piece? How different would it be if those things were changed dramatically?


