All Things Being EQ-ual, pt. 1
A Three-Part Series on the Basics of Equalization

Lionel Dumond
Media and Mastering Editor

[Excerpts---If the article seems a little disjointed, it's because I omitted a lot of unnecessary stuff--John.]

You've soloed every track and listened. The bass sounds fat. The guitar is punchy and open. The kick is round and snappy. The snare is... well, it's very "snarey" sounding.

So, how come your mix sounds like oatmeal?

What the Heck?

Equalization, or EQ, is a process by which a specific part or parts of the audible frequency spectrum are either cut or boosted in order to change a sound.


Sometimes, Less is More

Having good EQ capabilities at your disposal is not an excuse to get lazy! Getting good sound at the source is, first and foremost, a matter of choosing the right mic, placing it in just the right spot, and, of course, having a quality instrument, properly tuned, in front of that mic. Trying to EQ a kick drum at mixdown that is tuned looser than your Aunt Gertrude's knickers can be a nightmare. Go ahead and boost 3k on that kick track all you want -- you'll soon learn that you can't effectively boost what isn't there in the first place. Good mics, proper technique, and great instruments are the ideal, and often make EQ adjustments unnecessary. If you've done everything right, you may very well find that the best EQ is none at all!

So much for the ideal -- now let's get practical. As we all know, time and budget constraints in the studio can create conditions that are not always ideal. You won't always have the perfect mic at your disposal. Not every acoustic guitar you will record is going to be a $2,000 Taylor. And it can be detrimental to your client's happiness (and thus your bottom line) to spend 90 minutes experimenting with how far off-axis you should mike that Fender Twin. In situations like this, EQ is often your only salvation. When you've done the best you can, yet that timbre isn't exactly what you were going for, judicious use of EQ can mean the difference between greatness and... ugh... so-so-ness.

Musical Shoehorn

It's often useful to think of mixing a multitrack recording as akin to putting together a giant sonic jigsaw puzzle. Your job is to take all of the "pieces" (tracks), spread them across your "desk" (mixing console), and make them all fit into a beautifully assembled, suitable-for-framing portrait of a bowl of fruit.


When listening to a soloed track, all by its lonesome, it may sound great. A guitar track that really spreads across the spectrum can sound wonderfully cool by itself. A bass track can sound incredibly fat and punchy if it contains everything from 60 Hz to 4 kHz. A piano can really sparkle, and that synth patch might knock your socks off. But blend all these beautiful colors together, and you'll likely get what you'd see if you mixed every separate color on a painter's palette at once -- the sonic equivalent of a yucky brown goop!

The idea is to allow each instrument to occupy its own "place" in the mix so that, like a great painting, it has powerful impact as a whole, yet you can "see" (or, in our case, hear) all the individual parts as well. There are generally four ways that producers and mixing engineers accomplish this on your favorite records:

Volume (the setting of relative track levels to achieve timbral balance);
Soundstaging (the use of panning and ambiance to separate timbres in physical space);
Time (the use of delay and/or performance/arrangement techniques to separate timbres in time);
EQ (the use of EQ to separate timbres across the frequency spectrum).

The next time you listen to a great record, try to see if you can figure out which of these four techniques are being used. Chances are, you'll hear a bit of all four at the same time! But since this is an article about EQ, we'll focus on that technique herein.

Perhaps at this point, a concrete example is in order. (By now you must be thinking, "Hey Lionel, it's about time!") Okay, let's say that you are Roger Nichols. You are working with this hot band called Steely Dan, and you've just finished tracks for a great new song called "Peg".


A lot of engineers like to build a mix from the bottom up and from the center out -- at least, that's how many approach things at first. So let's say you've got this smokin', poppin' Chuck Rainey bass track to play with, and you've also got that groovy Bernard Purdie kick-drum track. On most pop records, the bass and kick together represent the bottom-end foundation of the tune, providing the very basic rhythmic feel of the whole piece, which in turn greatly affects the feel of the song in general. The kick-bass relationship is one of the critical cues that all listeners key in on, whether they realize it or not!

So it makes sense to ask yourself at this point, "What is the basic vibe that we want to convey here?" As a mixing engineer, you must have a very clear idea of the style of the music being played, and the overall feel that the artists are trying to put across. This is very important! As with most endeavors, if you have no idea where you are going, you are unlikely to end up where you wanted to be.

So you like the nice, fat, round bass and all those cool slides. You also like that pop'n'snap thing that the bass did, and you definitely want to keep that, too.

You note that the roundness of the bass track lies in the 60 Hz to 150 Hz range. And that pop'n'snap thing is up there around 2.5 kHz to 3 kHz or so. But you know that, on a lot of electric bass parts, the frequencies around 250 Hz can muddy up the sound. You decide to cut a little around 250 Hz and see what happens. Whoa! Can you hear the meat of the kick drum a little better now? The bass and drums aren't stepping on each other so much anymore, because you've carved out a little space in the bass track for the kick to come through.
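If you like to tinker, a move like this is easy to model in code. Here's a minimal Python sketch (not anything from the actual sessions, obviously!) of a peaking-filter cut at 250 Hz, built from the widely used "Audio EQ Cookbook" biquad formulas; the 44.1 kHz sample rate and the roughly one-octave Q are assumed values for illustration only:

```python
import cmath
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking (bell) EQ, per the Audio EQ Cookbook."""
    amp = 10 ** (gain_db / 40)            # amplitude term; gain at f0 is amp**2
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_db_at(b, a, fs, f):
    """Magnitude response of the biquad, in dB, at frequency f."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# Carve 3 dB out of the bass around 250 Hz, about a one-octave bell (Q ~ 1.4)
b, a = peaking_biquad(fs=44100, f0=250, gain_db=-3.0, q=1.414)
print(round(gain_db_at(b, a, 44100, 250), 2))   # -3.0 dB right at the center
print(round(gain_db_at(b, a, 44100, 60), 2))    # barely touched down at 60 Hz
```

Notice that the cut is centered where the mud lives, while the 60 Hz "roundness" region is left essentially alone -- exactly the behavior you want from a surgical bell cut.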

You blend in the guitar part now, but decide to apply a highpass EQ to that track to cut everything below 80 Hz. This keeps the guitar's feel intact, yet leaves plenty of room for the bass and kick to breathe. Are you starting to get it now? Cool! Your mix is really starting to come together! You continue to EQ in this manner until the song is done.
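For the curious, that highpass move can be sketched the same way, again using the standard "Audio EQ Cookbook" biquad formulas; the sample rate and the Butterworth-style Q of 0.707 are assumptions for illustration:

```python
import cmath
import math

def highpass_biquad(fs, f0, q=0.707):
    """Biquad coefficients for a 2nd-order highpass, per the Audio EQ Cookbook."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    c = math.cos(w0)
    b = [(1 + c) / 2, -(1 + c), (1 + c) / 2]
    a = [1 + alpha, -2 * c, 1 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def gain_db_at(b, a, fs, f):
    """Magnitude response of the biquad, in dB, at frequency f."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# Highpass the guitar at 80 Hz so the bass and kick have the basement to themselves
b, a = highpass_biquad(fs=44100, f0=80)
print(round(gain_db_at(b, a, 44100, 80), 1))    # about -3 dB at the corner
print(round(gain_db_at(b, a, 44100, 40), 1))    # roughly -12 dB an octave below
print(round(gain_db_at(b, a, 44100, 400), 2))   # essentially flat well above 80 Hz
```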

You should be starting to understand now how mixing a song is like a jigsaw puzzle (remember that metaphor?). EQ is one way to make the pieces of a song all fit together. I'm not exactly sure when all of this started to become standard practice, but I was once told that this EQ technique was first used at Motown, and if you listen to those great old Berry Gordy recordings you'll definitely hear it happening.

[Excerpts from Part Two:]

All sounds contain several frequencies -- in fact, usually many thousands of different frequencies, each at varying amplitudes (loudness). A simple sine wave is the only type of sound that contains one and only one frequency.
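If you want to convince yourself of that, here's a little plain-Python sketch (no audio libraries required) that runs a tiny discrete Fourier transform over a sine wave; the sample rate and test frequency are arbitrary choices for illustration. Every bit of the sine's energy lands in a single frequency bin:

```python
import math

N = 64                      # number of samples analyzed
FS = 6400                   # assumed sample rate, so each DFT bin is 100 Hz wide
signal = [math.sin(2 * math.pi * 500 * n / FS) for n in range(N)]  # 500 Hz sine

def dft_magnitude(x, k):
    """Magnitude of DFT bin k (bin spacing = FS / len(x))."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    im = sum(-x[n] * math.sin(2 * math.pi * k * n / len(x)) for n in range(len(x)))
    return math.hypot(re, im)

mags = [dft_magnitude(signal, k) for k in range(N // 2)]
loudest = max(range(N // 2), key=lambda k: mags[k])
print(loudest * FS / N)     # the one and only significant bin: 500.0 Hz
```

Run the same analysis on a guitar or piano sample and you'll see energy smeared across dozens of bins instead -- that's the "many thousands of frequencies" in action.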

It's important to learn, over time and with practice, the "sound" of each frequency and the number of Hz that corresponds to it. Being able to identify frequencies and frequency ranges by ear is as vital a skill to an engineer as being able to play tunes by ear is to a musician. Remember that! As an experienced recording and mastering engineer, I've developed the ability to hear a track or a mix and pretty much tell by ear what frequencies I'm going to need to deal with, so I know straight away what to grab for. I practice and hone this skill every chance I get, and I constantly get better at it the more I do. You should do the same -- it's a valuable skill to have!

You should also be familiar with the frequency ranges of various instruments that you're likely to come across. For example, I know that the fundamental frequency of the lowest note on a piano (A0) is 27.5 Hz, and the highest note (C8) is about 4186 Hz. These are just things you should make yourself aware of.
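If you'd rather compute these than memorize them, the equal-temperament formula does the trick. Here's a quick Python sketch (assuming the standard 88-key layout, counting A0 as key 1):

```python
def piano_key_freq(key):
    """Fundamental frequency in Hz of piano key 1..88 (equal temperament, A0 = 27.5 Hz)."""
    return 27.5 * 2 ** ((key - 1) / 12)

print(piano_key_freq(1))           # 27.5  (A0, the lowest key)
print(piano_key_freq(49))          # 440.0 (A4, concert pitch)
print(round(piano_key_freq(88)))   # 4186  (C8, the highest key)
```

Each octave doubles the frequency, and each of the twelve semitones in an octave multiplies it by the twelfth root of two -- which is all that formula is saying.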

Okay... so when you cut or boost using an equalizer, you are affecting not a single frequency, but a range, or "band", if you will, of frequencies. It could be a narrow band, or a wide band.

"Q" is not a musical concept at all; it is basically an geek term made up by engineering dweebs who have no life. You know how I hate rules, but now, I am about to lay down a rule and you best remember it. Cool people express equalizer bandwidth in terms of octaves, not Q. Don't ever say in a professional studio, "I think we ought to boost that track at 8 kHz with a Q of three." People will think you're a dork. No one knows or cares what the hell a "Q" is. "Half an octave", now that's something a musician can relate to.

So why bother with "Q" at all? Just remember this: the higher the value of Q, the narrower the range being affected; the lower the Q, the wider the range being affected. That's all you really need to know.

Practically speaking, I find that one octave or so is a good bandwidth to start out with for most general EQ tasks. An octave is generally narrow enough to get close to the frequencies you're after, while wide enough to not have too radical an effect. A wider bandwidth (two or three octaves) is good for less specific overall coloration. A narrower bandwidth (1/3 octave or less) is generally used for cutting problem frequencies, such as line noise, or for feedback control in a live sound situation.
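For the mathematically curious, there is a textbook conversion between bandwidth in octaves and Q (taking Q as the center frequency divided by the -3 dB bandwidth). A quick Python sketch:

```python
def q_from_octaves(n_octaves):
    """Convert an EQ bandwidth in octaves to the equivalent Q value."""
    return 2 ** (n_octaves / 2) / (2 ** n_octaves - 1)

# The bandwidths discussed above, from broad coloration to surgical notching
for octs in (3, 2, 1, 1/2, 1/3):
    print(f"{octs:.2f} octaves  ->  Q = {q_from_octaves(octs):.2f}")
```

You'll see that the one-octave starting point I recommend works out to a Q of about 1.4, while a surgical 1/3-octave notch is a Q of about 4.3 -- so next time someone talks Q at you, you can translate it back into octaves in your head.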

I suggest that you become comfortable with all of the EQ options (low-cut, low-pass, low-shelf, high-pass, high-shelf, sweepable, graphic, and parametric), and understand how they work. It can be tricky, because practically speaking, the sonic differences among different filters can be pretty subtle.

[End of File]