Monday, July 29, 2013

Eliminating vocal pops in Pro Tools


In my last post I gave you guys a rundown on vocal pops and a couple of ways to avoid them when tracking vocals.  Unfortunately, plosives will sometimes rear their ugly heads during mixdown.  This can happen for several reasons, but if you come across one in your vocal track there is a way to fix it, or at the very least temper it to where it’s nearly unnoticeable.  Remember that a vocal pop is really nothing more than a loud, distorted low-frequency burst, so we can essentially filter out the offending frequencies.  This is where the wonderful world of non-real-time AudioSuite plug-ins really earns its keep.

First you have to identify the offending plosive in your vocal track.  It’s actually extremely easy to pinpoint: just listen for it, hit stop, and zoom in to find the distorted waveform at the beginning of the phrase.


Once you’ve isolated the sucker, your next step is to highlight the portion of the waveform that is visibly distorted.  It’s also advisable to include a couple of milliseconds before and after the plosive in your selection, to make the eventual processing sound smoother.


Now that your pop is selected in the timeline, go into your AudioSuite menu and select the very basic EQ3 1-Band.  Selecting the 1-band is important because you’re really only using it as a high-pass filter; any more bands are unnecessary and confusing for this operation.



In your EQ window, the first thing you want to do is change the filter type to “high-pass”, which is the button that looks like a ramp going up.

Once you’ve done that, you need to select the cutoff frequency for your filter.  For vocals, any frequency between 100Hz and 250Hz usually works best.  A cutoff above 250Hz won’t do any more to the pop, but will start to affect the overall sound of the vocal.  It’s important to experiment, as each pop is different.  You can easily do this by clicking the speaker icon in the lower left-hand corner of the window, which lets you preview the processing on the track.


Now you’ve created a simple high-pass filter that should eliminate your vocal pop.  Once you’re satisfied with your settings, click Render to apply the processing to the selected region of the track.
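If you’re curious what this is doing under the hood, here’s a minimal sketch in Python (assuming NumPy and SciPy are installed — the function and signal names are my own, purely for illustration).  It builds a fake pop, a 50Hz burst buried under a 440Hz “vocal” tone, and high-passes it the same way the 1-band does:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_region(audio, sr, cutoff_hz=120.0, order=2):
    """High-pass filter a selected region of audio.

    filtfilt runs the filter forward and backward, so the processed
    region isn't time-shifted relative to the rest of the track.
    """
    b, a = butter(order, cutoff_hz, btype="highpass", fs=sr)
    return filtfilt(b, a, audio)

# A fake "plosive": a 50 Hz burst buried under a 440 Hz vocal tone.
sr = 44100
t = np.arange(int(0.5 * sr)) / sr
vocal = 0.3 * np.sin(2 * np.pi * 440 * t)
pop = 0.8 * np.sin(2 * np.pi * 50 * t)
cleaned = highpass_region(vocal + pop, sr, cutoff_hz=120.0)

def level_at(x, freq):
    """Spectral magnitude at one frequency bin."""
    spec = np.abs(np.fft.rfft(x))
    return spec[int(round(freq * len(x) / sr))]

print(level_at(cleaned, 50) / level_at(vocal + pop, 50))    # pop heavily attenuated
print(level_at(cleaned, 440) / level_at(vocal + pop, 440))  # vocal nearly untouched
```

The energy at 50Hz all but disappears while the 440Hz tone passes through nearly untouched — which is exactly what the filter is doing to your selection.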


Once rendering is done, a new processed clip will be created in the space of the old selection.  Your old clip will still be in your region or clip list if you need to recall it for any reason.  As you can see, your plosive is immediately better.  Listen to it a couple of times in the context of your mix to see how it sounds.  Hopefully this trick will alleviate some frustration when it comes to less-than-stellar vocal tracking.




Wednesday, July 24, 2013

The Problem with Plosives


In this week’s posts I’m going to discuss a bane of many an audio guy’s existence: the dirty, evil wench we call ‘plosives’.

The dictionary defines ‘plosive’ as, “of or pertaining to a consonant characterized by momentary complete closure at some part of the vocal tract causing stoppage of the flow of air, followed by sudden release of the compressed air”.

What the what?

Basically, in layman’s terms, a plosive is the loud, sudden burst of air produced when we say words containing consonants like b, p, d, and t.  In everyday life, plosives aren’t a nuisance at all; they’re actually an integral part of speech and communication.  But when it comes to recording, they can be incredibly detrimental.  In recording they’re more commonly referred to as vocal pops, because of the distorted “pop” sound they produce.  Most microphones can’t effectively handle this sudden burst of air, which creates an asymmetrical, distorted waveform on whatever recording medium you’re using.  For example, try holding the palm of your hand in front of your mouth while saying words that begin with b or p.  Do you feel that quick burst of air against your palm?  That is exactly what’s hitting the diaphragm of the microphone.  The effect is basically mechanical clipping, and almost every microphone is susceptible to it.

Waveform representation of a 'plosive'
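To picture the mechanical clipping, here’s a toy model in Python (NumPy assumed; every number here is made up for illustration): a one-sided burst of air drives a diaphragm with a limited maximum excursion, flattening the top of the waveform while leaving the bottom alone — hence the asymmetry:

```python
import numpy as np

# Toy model: the diaphragm can only travel so far, so a big one-sided
# air burst gets flattened at the excursion limit.
sr = 44100
t = np.arange(int(0.05 * sr)) / sr
voice = 0.3 * np.sin(2 * np.pi * 200 * t)   # the "vocal" itself
burst = 1.5 * np.exp(-t / 0.01)             # a decaying one-sided air burst
limit = 0.5                                 # maximum diaphragm excursion
recorded = np.clip(voice + burst, -limit, limit)

print(recorded.max(), recorded.min())  # top pinned at +0.5, bottom nowhere near -0.5
```

That flat top stuck at the excursion limit is the asymmetrical, distorted shape you’ll see when you zoom in on a pop.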


Luckily for us there are ways around this devil.

Pop Filter



If you have one available, a pop filter will sufficiently diffuse the offensive burst of air, yet sometimes even a sweet pop filter won’t completely alleviate the nasty plosive.  When this happens, your next line of defense is microphone placement.  It’s advisable to avoid placing the capsule of the microphone directly, and closely, in front of the vocalist’s mouth.  By moving the microphone to either side, or even slightly above, in combination with a pop filter, plosives will be 99.9% eliminated.

Pencil Trick

Needless to say, I’ve had many a session where a pop filter wasn’t an option and microphone placement wasn’t doing the job.  A neat little trick I’ve picked up along the way involves taping a pencil (or pen) to the front of the microphone.  Believe it or not this actually kind of works when you’re in a pinch.  Try it out for yourself and see how it works.

Sometimes these nasty things rear their ugly heads after you’ve already tracked and mixing has begun.  In my next post I’ll give you a tried-and-true technique for eliminating plosives from an audio track in Pro Tools.


Saturday, July 20, 2013

Get in Tune


The first and most important step when recording a band is making sure each instrument is in tune.  For the most part, each musician has a handle on their own instrument and how to keep it in tune.  Some use a digital tuner, via either an amp or a pedal, but some still prefer to tune by ear.  As an engineer, hearing an instrument out of tune is one of the best ways to bring a session to a standstill.  Recording or rehearsal stops while the guilty player fixes the problem, and valuable time is wasted.  The ability to hear this problem isn’t exclusive to those with a “trained” ear; it’s always obvious and very annoying.  Actually, as the engineer, you’ll most likely notice it before the musician does because of your objective point of view.  One technique anyone can use, whether you play an instrument or not, is tuning harmonically.

Tuning an instrument harmonically involves a bit of physics that I won’t delve into right now, but if you can hear “beats” between two tones (which anyone can) you can tune any instrument this way. 

When you pluck a string, it vibrates at its fundamental frequency (giving the string its pitch) and at harmonic frequencies (giving it its timbre).  By lightly touching open strings at particular intervals (the 5th, 7th, and 12th frets) you can isolate these upper harmonics.  For instance, lightly touching the 5th fret on the lowest string of a guitar should produce the same harmonic note as lightly touching the 7th fret on the next string up, the A string.  If these two tones are in tune and rung out simultaneously, you will not hear an audible difference in pitch (i.e. no beating between the notes).  This means the two strings are in tune with each other, and you can continue the process across the strings.
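The arithmetic behind this is simple.  Assuming standard tuning at A440, the 5th-fret node isolates a string’s 4th harmonic and the 7th-fret node isolates the 3rd, so (in Python, with the fundamentals rounded to two decimals):

```python
# Harmonics used in the 5th/7th-fret tuning check (standard tuning, A440).
E2 = 82.41   # low E string fundamental, Hz
A2 = 110.00  # A string fundamental, Hz

fifth_fret_harmonic = 4 * E2    # 5th-fret node isolates the 4th harmonic
seventh_fret_harmonic = 3 * A2  # 7th-fret node isolates the 3rd harmonic

print(fifth_fret_harmonic)    # 329.64 Hz
print(seventh_fret_harmonic)  # 330.0 Hz
print(seventh_fret_harmonic - fifth_fret_harmonic)  # ~0.36 Hz -- the slow beat you listen for
```

Note that the two harmonics land about 0.36Hz apart rather than exactly together — a quirk of equal temperament, and the kind of thing the very picky detractors of this method like to point out.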


If these two tones are out of tune, you will hear an audible “beating” between the notes, signifying a slight difference in frequency.  The beating is the result of amplitude modulation between the two tones: the further apart the two frequencies are, the more out of tune the strings are and the faster the beating between the notes.
Beating between two slightly detuned tones

But as the frequencies approach each other, the beating slows and eventually disappears, creating a unified note.  This is when the two strings are in tune with each other.
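The “beating is amplitude modulation” claim is easy to verify numerically.  A quick NumPy sketch (the tone choices are arbitrary) checks the trig identity behind it:

```python
import numpy as np

# Two close frequencies summed: the identity
#   sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2)
# says you hear the average pitch, amplitude-modulated by the difference.
sr = 8000
t = np.arange(sr) / sr            # one second of time samples
f1, f2 = 440.0, 444.0             # 4 Hz apart -> 4 beats per second
two_tones = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
modulated = 2 * np.sin(2*np.pi*(f1+f2)/2*t) * np.cos(2*np.pi*(f1-f2)/2*t)

print(np.max(np.abs(two_tones - modulated)))  # identical, up to rounding
```

The sum of two tones 4Hz apart is mathematically identical to a single tone at the average frequency whose volume swells and dips four times per second — which is exactly the beating you’re tuning out.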


I think this is the best way to get an instrument in tune, given that there is a correct reference note, though the method does have some detractors, albeit very picky ones.

Monday, July 15, 2013

Stop. Think. Mix

So what is mixing, and why do we even need to do it?  Sounds simple enough, right?  Well, it’s not as simple as you might think.  Mixing audio properly takes a lot of time and a hell of a lot of failures before you can really get it right.  Most people go into a studio, sit down at a console with their session in front of them, and proceed to turn knobs and move faders at random.  Don’t get me wrong, there is nothing wrong with this exploratory technique; after all, how are you ever going to know what a compressor does to the sound of something, or what notching out a certain frequency will sound like, without actually hearing it?  But this technique takes way too much time, and sometimes time is money.

So how can we go into a mixing session with a logical plan in mind?  This is where an understanding of acoustics, and more importantly psychoacoustics, can help us.  Acoustics is the scientific study of sound; psychoacoustics is the study of how we, as people, perceive that sound.  Knowing how, in a broad sense, people will hear your mix gives you that extra edge over other mixing engineers.