PHASE = MC2
Technical Articles/June 7, 2013
Here are a few tips about phase: how to spot it, how to avoid it and how to cure it when it starts undermining your sound.
Understanding what phase is, how it’s generated and how to avoid its undesirable effects is vital to any good sonic outcome. So let’s start by brushing up on what phase is in the first place.
Simply put, a waveform that’s ‘out-of-phase’ is one that’s out of step with another of the same frequency. Phase is a relative term, meaning that one waveform on its own – it might be a pure sine wave, or a complex sound generated by a voice or instrument – can’t really be out of phase in any practical sense. It needs a partner (or partners) of identical (or similar) form to interact with. It’s this interaction that generates the phenomenon we call phase.
Described another way, imagine a soldier marching on his own… you can’t really say this guy is out of step if there’s no-one marching alongside him. But when there are 10 others all marching in perfect sync, leading with their left foot while he leads with his right, then he can be described as being out of phase relative to the group.
Indeed, if he’s perfectly out of step he can be described as antiphase to the group, and if he were a waveform, we could put him back in phase by simply hitting the phase (polarity) button on the console channel he’s marching down.
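For the numerically inclined, the soldier analogy is easy to verify with a quick NumPy sketch (the 440Hz tone and 48kHz sample rate are just illustrative values, not anything prescribed here):

```python
import numpy as np

sr = 48000                            # sample rate (Hz)
t = np.arange(sr) / sr                # one second of time
sig = np.sin(2 * np.pi * 440 * t)     # a 440Hz sine wave

anti = -sig                           # perfectly out of step: 180 degrees, antiphase
summed = sig + anti                   # the two cancel completely
print(np.max(np.abs(summed)))         # 0.0 -- total silence

fixed = -anti                         # 'hit the polarity button' on the antiphase copy
print(np.max(np.abs(sig + fixed)))    # roughly 2.0 -- back in step, fully reinforcing
```

One polarity flip and the pair go from cancelling each other to doubling up.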
THE REAL WORLD OF AUDIO
Phase becomes more insidious when there are two or three complex waveform sources creating a sound, and the degree to which they’re out of step with one another is subtle. The vast majority of phase issues are of this type. These are harder to hear and more difficult to resolve. Unlike sine waves, real-world sounds generated by voices and instruments are far more complex in nature and rarely if ever behave symmetrically.
Most sounds we record, mix, master or reproduce live are made up of a multiplicity of frequencies, each with its own wavelength. Consequently, the only time two identical versions of a complex waveform can be truly in phase with one another is when they start at an identical point in time. That way all the frequencies making up the sound (from very long to extremely short) are phase aligned and supportive of one another. When the two become misaligned in time, even by the smallest amount, phase problems creep in, and depending on how far out they get, certain frequencies will start to undermine one another more than others as the two versions sum together in the air.
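That frequency-selective damage is easy to demonstrate: delay a copy of a tone by 1ms and some frequencies vanish while others reinforce. Here's a NumPy sketch (all values illustrative): at a 1ms offset, 500Hz sits exactly half a cycle out, while 1000Hz lands a full cycle later and lines right back up.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
d = 48 / sr                        # the second copy arrives 1ms late (48 samples at 48kHz)

levels = {}
for freq in (500, 1000):
    a = np.sin(2 * np.pi * freq * t)        # the original
    b = np.sin(2 * np.pi * freq * (t - d))  # the same tone, 1ms behind
    levels[freq] = np.max(np.abs(a + b))

print(levels)   # 500Hz cancels to (near) zero; 1000Hz reinforces to (near) double
```

A real instrument contains both of those frequencies (and hundreds of others), so a single 1ms misalignment carves notches right through its spectrum, which is the 'comb filtering' you hear as thinness or hollowness.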
Other factors influence phase coherence even when two complex waveforms share an identical start point. ‘In-the-box’ digital workstation processing in particular can wreak havoc on phase coherence, especially if there’s no delay compensation on offer, or where processing is applied to only one of the two components. Either of these situations will influence the phase relationship between the two waveforms anywhere from subtly (though sometimes beneficially it must be said) to diabolically.
STEREO TO MONO
Where two identical complex waveforms are perfectly out of phase – i.e., 180˚ out, oscillating in an antiphase manner to one another – that’s where you really need to pay attention. These will seem incredibly wide when panned hard left and right, and that can sometimes sound fantastic. Unfortunately, when combined in mono they will null – i.e., cancel to total silence!
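This worst case is exactly what a correlation (phase) meter is built to catch: it reads +1 for identical left/right, 0 for unrelated signals, and -1 for antiphase. A small NumPy sketch with an illustrative 220Hz tone:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
left = np.sin(2 * np.pi * 220 * t)     # hard-panned left
right = -left                          # same waveform, antiphase, hard right: huge width

# A correlation (phase) meter: +1 identical, 0 unrelated, -1 antiphase
corr = np.sum(left * right) / np.sqrt(np.sum(left**2) * np.sum(right**2))

mono = (left + right) / 2              # AM radio / mono fold-down
print(round(corr, 2), np.max(np.abs(mono)))   # -1.0, 0.0 -- the sound vanishes in mono
```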
See the problem? If neglected or overlooked, stereo sounds of this type will be non-existent on AM radio for instance. Live it’s a different story of course. As long as you’re happy with a certain sound being mega wide – again, that might be appropriate or not – it’s less critical because the mix is a one-off event coming out of a stereo PA rather than an AM radio… unless of course the L/R feed is going live-to-air on AM radio or you’re recording it for later release. Then the problem is most certainly relevant.
There are several ways to combat phase problems.
The simplest way is to never use more than one input for any given source. If, for example, you’re in the studio overdubbing an electric guitar, the best way to protect yourself from phase problems is to use only one mic on the amp, and no D.I., or vice versa. Similarly, if you’re mixing an electric guitar that’s been recorded with three inputs, just ditch any two and you’ll be fine.
But if you want to record an instrument onto three tracks of audio from three separate sources – say two mics and a D.I. – you’ll need to make sure they’re phase coherent first. One of the simplest ways to do this is to arm all channels and record a short, sharp sound from the instrument into your DAW before tracking commences. Investigate these waveforms close up, checking that they align, particularly if the recording involves multiple close mics configured as stereo pairs or masquerading as one good ‘close’ sound. If the waveforms don’t line up, adjust the mic positions, record the short sharp sound again and re-evaluate.
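If you'd rather measure than eyeball, cross-correlation will tell you the offset between two takes of that short, sharp sound in samples. A rough NumPy sketch, using a made-up click and a hypothetical second mic roughly 34cm further away (about 1ms, or 48 samples, at 48kHz):

```python
import numpy as np

sr = 48000
# A hypothetical short, sharp 'click' as captured by a close mic...
click = np.zeros(2000)
click[100:110] = np.hanning(10)          # a 10-sample burst

# ...and the same click at a second mic 1ms further away, a touch quieter
mic2 = np.roll(click, 48) * 0.8

# Cross-correlate the two captures to measure the offset in samples
lags = np.arange(-len(click) + 1, len(click))
offset = lags[np.argmax(np.correlate(mic2, click, mode='full'))]
print(offset)                            # 48 samples -> mic2 is 1ms late

aligned = np.roll(mic2, -offset)         # nudge it back into line
```

In practice you'd do the nudging by moving the mic (or later, the region in your DAW), but the principle is the same: find the lag where the two waveforms match best, then remove it.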
Room and ambient mics obviously can’t be brought into line in this manner – they’re often metres away. That’s okay. Because they hear such a different version of the sound you’re recording, they typically have little or no impact on phase coherence anyway. The further apart they are relative to the source, and the less they have in common sonically, the less relevant their phase relationship becomes.
If you’re recording instruments like guitars, drums, basses or synths from a combination of D.I.’d signals and mics, the simplest and most practical approach to time alignment is either a dedicated phase-alignment tool like the Little Labs IBP, which lets you fine-tune phase rather than simply flip the polarity, or plug-ins and manual alignment later. If you’re doing the latter, I find it’s best to mute one of the elements during tracking so the phase discrepancies don’t drive you nuts.
When you’re mixing, there are several tricks available to you too. Firstly, you can cut to the chase and simply mute any mics that seem to do nothing but add phase problems. Some sounds are just better off without that second or third mic anyway.
Time-align any phase-anomalous mics that are very close together and similar in waveform appearance. This may require some patience to get them to settle down. Again, the best way to match the two waveforms up is by panning them hard left and right, listening for the sound growing fuller and more centred in the stereo image as the two waveforms are nudged into line. Remember: a super-wide, thin sound is a sure-fire sign something is still not right.
The other approach – which I use regularly during mixing – is to turn any phase incoherent recording tracks into assets. Use them to generate spaces and ambience instead, by inserting EQs, compression, reverbs and delays across them. This quickly transforms their waveforms into something barely relevant to the original file, and thus inert with respect to phase problems.
Just remember, mono still matters! When complex waveforms A and B are panned wide in a stereo mix they might sound fantastic. But sum them in mono and they equal silence! If the sound in question is your main melody, there won’t be a main melody!
I could go on and on here but I won’t…
Don’t forget also that this advice is all pointless if your speakers aren’t in phase. Ditto all your mics and cables as well. Don’t assume they’re all perfect either… even new mics are sometimes manufactured out-of-phase by accident. Check everything.
There are other more technical aspects to phase that I’ve intentionally neglected to mention here for the sake of brevity and clarity. I’d urge anyone with more questions about phase to do some additional research of their own. To be unsure about phase is to be unsure about audio.
‘Til next time.
PS: The most common characteristic of an audio signal with phase problems is that it generally sounds thin, harsh, wide (if the elements are panned) and ‘hollow’. Of course, if you like that sort of sound – provided you can still hear it in mono – there’s no real technical reason to change it.
If you’re not sure, reduce the sound to one of its components and listen to how ‘solid’ it sounds out of just one mic… if it loses some of that weight when you unmute the second channel you know there’s a phase problem.
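That 'loss of weight' can be put into numbers by comparing RMS levels, solo versus summed. A quick NumPy sketch with hypothetical values (a 110Hz bass fundamental and a second mic arriving 4ms late):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
mic1 = np.sin(2 * np.pi * 110 * t)             # a hypothetical 110Hz bass fundamental
mic2 = np.sin(2 * np.pi * 110 * (t - 0.004))   # a second mic, ~4ms behind

rms = lambda x: np.sqrt(np.mean(x**2))
solo = rms(mic1)                               # one mic on its own: solid
both = rms((mic1 + mic2) / 2)                  # unmute the second channel...

print(round(both / solo, 2))   # well under 1.0 -- the low end lost weight: phase problem
```

If unmuting the second channel makes that ratio drop rather than rise, the two sources are fighting each other and it's time to align (or ditch) one of them.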
PPS: Buy a phase meter!