Audio Myths
The science of audio and the art of music have a rich history of fiction woven into their fabric. Here I will try to dispel some of the myths. Let the myth-busting begin (be warned - your favourite 'but I thought' myth may be exploded here)...
440Hz Music - Conspiracy To Detune Us From Natural 432Hz Harmonics?
Not really worthy to be called a myth. The title of this complete bunkum says it all. Pure pseudo-science. Read and laugh (if you don't, seek help).
one for the riggers...
Did you know that Aluminium is denser than concrete (and glass too)? The density of Aluminium is around 2,700kg per cu. m, whereas Concrete is around 2,300kg per cu. m. The reason people get caught out on this is that Aluminium has a far greater strength-to-weight ratio than Concrete (which explains why you can make a boat out of concrete, but not an airplane). Because of this we are used to thin extruded and rolled fabrications of Aluminium, so that when we compare its weight with Steel it seems relatively light.
The fader-gain mixer mixup
This one came up again the other day, and it is not the first time I have come across an experienced sound engineer with a poor understanding of the basic and important issue of gain staging. The myth hinges on the claim that the best way to set levels is to "bring the channel faders up to zero, then adjust the gains to get a good mix" (1).
Before we examine what exactly is wrong with this method we need to establish the correct method, so here it is: use PFL to set the gain (trim) control so that the input signal is peaking at 0dB on the PFL meter (2). The object of this is to bring the input level up right at the start (ie using the mic pre), so that the signal is at the right level for the mixer electronics to operate properly. Usually this nominal level will be line level (ie -10dBV or +4dBu), and it is the level that will show 0dB on the meters. This is the ideal level to put through the mixer, and it is why the manufacturer calibrates the mixer for this particular level. You can check this out by putting a sine-wave test tone through the mixer.
Because the incoming signal from an instrument or voice varies, setting the input level to the optimum is even more important. Also, there will be multiple inputs (otherwise there is no point having a mixer!), and together these two things mean the levels need to be kept below the distortion point and above the noise floor. If we misalign the gain control we compromise the mixer's ability to provide the widest dynamic range.
If you have been following this, you will now see why method 1 is erroneous. The level setting is arbitrary, and there is no clear distinction between the operations of level setting and mixing. And yet this method persists, because many sound guys simply don't have proper technical training. Here are two examples:

"I have a habbit of setting up the quick way as well though I usually try with the headphones first. I pefer not to use the DB meter as I find that higher frequencies that sound louder dont really register on it so you can finish up with a mix that is electrically balanced but accoustically unbalanced. Unless you have knobs and faders all over the place. That's why I pregain off the headphones so I hear it instead of relying on a bass heavy VU meter. That way when I pregain everything even off the headphones I then know that 0 if foreground and -5 is background and 6 is about right in the foldback regardless of the frequency range of the voince/instrument coming through."

Well, ok, you don't need to have passed High School English to be a Sound Engineer, but apart from that, this person's procedure is just bizarre.

Here is another example, from the Centre for Recording Arts, School of Music, Georgia State University: "First, it's important to establish the gain through each input channel by setting the channel fader at 0 dB and adjusting the input gain/trim control to maintain a usable signal level as indicated by the board's meters." (http://cara.gsu.edu/courses/MI_3110/gainstaging/gain31.htm). The trouble here is that there is no way to identify a usable signal level on the meters. Many only go down to -20dB, and as most mixes involve more than 6 channels an individual signal level will not even show on the mix meters (each doubling of the number of equal channels adds 3dB to the mix).
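That 3dB figure follows from how uncorrelated signals sum: their powers add, so a mix of n equal channels rises by 10·log10(n) dB - 3dB per doubling. A minimal sketch in plain Python (no audio libraries; the function name is mine, not from any mixer spec):

```python
import math

def mix_level_increase_db(n_channels: int) -> float:
    """Level rise when n equal, uncorrelated signals are summed.
    Power adds, so the increase is 10*log10(n) dB."""
    return 10 * math.log10(n_channels)

# Each doubling of the channel count adds about 3 dB to the mix bus:
for n in (2, 4, 8):
    print(f"{n} channels: +{mix_level_increase_db(n):.1f} dB")
```

This is why an individual channel's contribution quickly disappears below the mix-bus meter range as the channel count grows.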
To sum up: the mixer needs its levels set for its own sake, if it is to work at its best without introducing hiss or distortion. The operator needs good control of each sound, and that is exactly what the faders are for. By setting the levels correctly (ie method 2), the fullest range of fader control is available. Furthermore, setting other levels correctly (by the same criteria) is only possible if the input level is right, so the flow-on effect of further level misalignment (aux sends, FX sidechains etc) is avoided.
Sometimes, to further cloud the issue, avoiding feedback gets mentioned as the reason for using method 1. The fact is that feedback occurs when the gain of the total system (ie the signal path from mic through to speaker) exceeds the gain available before feedback - more precisely, when NAG > PAG (ie the Needed Acoustic Gain exceeds the Potential Acoustic Gain) for the system. A mixer obviously contributes some of the gain, but how that gain is arrived at does not matter as far as feedback is concerned. The mixer is simply one block in the gain structure of the whole PA system.
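The PAG/NAG relationship can be put into numbers. The sketch below uses the textbook (Davis) formulation with the conventional 6dB safety factor; the distance names and the example figures are illustrative assumptions, not something from this article:

```python
import math

def pag_db(d0, d1, ds, d2, nom=1, safety_db=6):
    """Potential Acoustic Gain (textbook Davis formula), distances in metres.
    d0: talker->listener, d1: mic->loudspeaker,
    ds: talker->mic,      d2: loudspeaker->listener.
    nom = number of open microphones; safety_db = feedback safety margin."""
    return (20 * math.log10((d1 * d0) / (ds * d2))
            - 10 * math.log10(nom) - safety_db)

def nag_db(d0, ead):
    """Needed Acoustic Gain: gain required so the talker sounds as if
    at the Equivalent Acoustic Distance (ead) instead of d0."""
    return 20 * math.log10(d0 / ead)

# Hypothetical room: talker 0.5m from the mic, listener 7m away,
# loudspeaker 5m from the mic and 6m from the listener:
pag = pag_db(d0=7, d1=5, ds=0.5, d2=6)
nag = nag_db(d0=7, ead=2)  # EAD of 2m chosen arbitrarily for the example
print(f"PAG {pag:.1f} dB, NAG {nag:.1f} dB, stable: {nag <= pag}")
```

The system is stable (no feedback) as long as NAG stays at or below PAG - and nothing in the formula cares about where in the chain the electrical gain comes from.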
Miking a guitar cab
The myth here is that the angle of the mic matters (in terms of guitar tone). The conventional wisdom passed down is that to get the best sound the mic should be angled a little (usually between 15° and 30°). Alex Case did an experiment and published the results in the AES Journal (vol 58-1/2). Unless the angle goes beyond 45° there is very little difference in response, but since cancellation effects gradually increase as the mic is turned off-axis, for best results use 0°. That's not to say that mic angle doesn't matter when it comes to rejecting unwanted sounds. And then there are those who want a lo-fi phase-cancelled sound - in that case a 90° angle would work better!
The Speed of Sound
There are two myths to expose here - that the speed of sound is a constant (eg 343 m/s), and that the speed of sound varies with atmospheric pressure. Even Google is confused: the Google calculator states that the speed of sound at sea level = 340.29 m/s. In fact this is only true at 15°C, as the speed varies with temperature (and, to a considerably smaller degree, humidity). Pressure, on the other hand, has virtually no effect, because air density changes in step with it and the two cancel out.
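The temperature dependence is easy to model. For dry air the ideal-gas approximation gives c = 331.3·sqrt(1 + T/273.15) m/s - and note that pressure does not appear in the formula at all. A quick sketch (the 331.3 m/s constant is the commonly quoted 0°C value):

```python
import math

def speed_of_sound(temp_c: float) -> float:
    """Speed of sound in dry air (m/s) as a function of temperature
    in Celsius, using the ideal-gas model. Pressure is absent: it
    cancels against density."""
    return 331.3 * math.sqrt(1 + temp_c / 273.15)

for t in (0, 15, 30):
    print(f"{t:3d} degC: {speed_of_sound(t):.1f} m/s")
# 15 degC gives ~340.3 m/s - the figure Google quotes 'at sea level'
```

So the 340.29 m/s figure is really a temperature statement, not an altitude one.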
Why no standard wiring for a 3 pin XLR?
Recently I stumbled across what started off looking like a good basic tutorial website for audio: mediacollege.com. Things changed when I got to the page on balanced audio, called 'combining cables'. I quote: "Unfortunately there is no official standard for wiring balanced audio cables, but the most common configuration is:
Pin 1: Shield (Ground)
Pin 2: Hot
Pin 3: Cold"
It then goes on to state that a phase switch is used to sort the ensuing mess out.
Not the first time I have heard this, but in fact there is a standard, AND IT HAS BEEN AROUND SINCE 1975! So, why all the ignorance?
The standard (which specifies the same pin wiring as shown above) is IEC 268-12. Over the years every organisation (radio, TV etc) has adopted it, and here's the proof:
RP 134 (1986)
EBU R50 (1988)
ANSI S4.43 (1991)
and AES 14 (1992)
These are all endorsements of the original IEC standard.
So, there has been one right way to wire an XLR-3 connector for over 35 years. These standards are International too (except the ANSI one), so there really is no excuse.
Faster is better
Some professional analogue tape machines run at 30 IPS instead of 15 IPS (and some offer both speeds). The myth here is that it is always better to record at the higher speed, and that the only reason you wouldn't is that you chew through tape twice as fast. Well, it turns out that what you gain at the high-frequency end you actually lose at the low frequencies. Let's look at the frequency response from the spec sheet of a typical machine, the Otari MTR-10-2. These are the figures for both the record and repro heads:
15 IPS: 18Hz to 27kHz +0.5/-2dB, 30 IPS: 42Hz to 29kHz +0.5/-2dB
So you can see that in this case the slight gain at the top end (which is already fine at 15 IPS) is more than offset by the drop in bass response. In fact we are losing more than an octave, right where we need it for bass guitar and kick drum. On the very best 2-track machines running 1/2" tape there is less loss at the low-frequency end at 30 IPS, but the problem cannot be eliminated. As most people now use analogue to 'warm up' contemporary music it is counter-productive to use 30 IPS, as part of the desired sound is that full bottom end that tape gives. Stick to 15 IPS and your music will sound better and you will save tape - double whammy!
For some interesting graphs of several pro ATR frequency responses (at 15 and 30 IPS) go to Jack Endino's page. These show this effect clearly, as well as identifying the frequency response 'character' of each model.
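The size of the bass loss is easy to check from the quoted Otari figures: the number of octaves between two frequencies is just log base 2 of their ratio.

```python
import math

def octaves(f_low: float, f_high: float) -> float:
    """Number of octaves between two frequencies."""
    return math.log2(f_high / f_low)

# Low-frequency limits from the Otari MTR-10-2 spec quoted above:
print(f"Bass lost going 15 -> 30 IPS: {octaves(18, 42):.2f} octaves")
```

Going from an 18Hz limit to 42Hz costs about 1.2 octaves - an octave and a bit, right in kick-drum and bass-guitar territory.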
dBVU????
dB without a reference is just a ratio so dBVU implies that the reference is VU.
So what exactly is VU?
The Volume Unit is a logarithmic ratio similar to dB. There are, however, several differences:
a) VU relates to a mechanical meter movement with standardised ballistics: both rise time and fall time are 300ms. This gives the meter a characteristic that approximates 'loudness'.
The behavior of VU meters is defined in ANSI C16.5-1942, British Standard BS 6840, and IEC 60268-17.
b) To achieve the required damping a 3.6kΩ resistor is placed internally in series with the signal. This reduces the signal by 4dB, so that a correctly calibrated VU meter will actually read 0 VU when its input level is +4dBm, ie 1.228 volts RMS across a 600 ohm load. This is why a piece of pro audio gear will state +4dBu as the nominal operating level, and a tone at this level will show 0 on the VU meter.
c) VU meters can have different reference levels. For radio, 0VU means the maximum allowable modulation percentage (100%). 0VU on a tape recorder can be referenced to a maximum allowable flux level (eg 250 nWb/m), or to 1% THD. Only electrical levels are defined in the standard.
On audio equipment the LED meter is commonly used and has a dB scale (this is not the same as a true dB meter, which is only used on sinewave tone). It is calibrated so 0dB = +4dBu. Therefore 0dB = 0VU.
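The level references involved are easy to compute: dBu is referenced to 0.775V (the voltage of 1mW into 600Ω) and dBV to 1V. A quick check in plain Python (function names are mine):

```python
DBU_REF = 0.775  # volts: sqrt(0.6), ie 1 mW into 600 ohms
DBV_REF = 1.0    # volts

def dbu_to_volts(dbu: float) -> float:
    """Convert a level in dBu to volts RMS."""
    return DBU_REF * 10 ** (dbu / 20)

def dbv_to_volts(dbv: float) -> float:
    """Convert a level in dBV to volts RMS."""
    return DBV_REF * 10 ** (dbv / 20)

print(round(dbu_to_volts(4), 3))    # pro nominal: 1.228 V -> reads 0 VU
print(round(dbv_to_volts(-10), 3))  # consumer nominal: 0.316 V
```

This is where the 1.228 volts in point b) comes from, and it also shows why the pro (+4dBu) and consumer (-10dBV) nominal levels differ by roughly 12dB of actual voltage.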
So where does dBVU fit in?
As far as I can tell it comes from an error of understanding published on the Internet by Peter Elsea in 1996. In an article on decibels he has a paragraph headed dBVU, and goes on to describe VU while erroneously calling it dBVU. A few sentences later he reverts to (rightly) describing the levels and meters as VU.
The Internet being what it is, this misinformation has spread, and it is now not hard to find the term in use, especially on forums (eg “what is the difference between 0dBVU and 0dBFS?”).
http://arts.ucsc.edu/ems/music/tech_background/TE-06/teces_06.html
dB and VU scales are different. They are both logarithmic, and for calibration purposes a test tone that drives a VU meter to zero can also be assumed to drive a dB meter to zero. This is purely because peak reading LED meters scaled in dB have followed on from VU meters, so that on a piece of professional gear zero on the meter still equals +4dBu. Similarly a -6dBu tone will read -10 on either meter.
The term dBVU is nonsense because it would mean referencing the dB to VU, and also VU to dB. In other words there would be no reference.
In fact they are two different scales (that happen to ‘meet’ at zero if the input signal is a sinewave tone). Importantly, if we put a complex waveform into these two types of meter each will give a different reading. Furthermore, the transients of programme material (ie speech, music) will be shown differently, with the VU meter rounding out the signal and therefore giving a better indication of the overall loudness and crest factor. The peak dB meter gives the instantaneous maximum level and is therefore useful for setting equipment programme levels accurately to avoid clipping. It should also be noted that the VU meter’s response is standardised, whereas the (artificial) rise and fall time of a LED meter varies according to manufacturer.
http://digitalcontentproducer.com/mag/avinstall_understanding_volume_unit/
http://www.aes.org/aeshc/pdf/mcknight_qa-on-the-svi-6.pdf
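The reason the two meter types diverge on programme material comes down to crest factor - the peak-to-RMS ratio of the waveform. A rough numerical illustration (the 'spiky' waveform is a made-up example, and real meter ballistics are not modelled here):

```python
import math

def rms(samples):
    """Root-mean-square level - roughly what a VU meter tracks."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB. A peak meter follows the numerator,
    a VU meter approximates the denominator."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak / rms(samples))

# One full cycle of a sine wave: peak/RMS = sqrt(2), ie about 3 dB
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
print(round(crest_factor_db(sine), 1))  # ~3.0 dB

# A 'spikier' waveform of the same general shape reads much higher
# on a peak meter for its RMS level:
spiky = [1.0 if n % 50 == 0 else 0.1 for n in range(100)]
print(round(crest_factor_db(spiky), 1))
```

For a sine tone the gap is a fixed 3dB, which is why the two scales can be lined up at zero with a test tone; for speech and music the gap is larger and varies, so the two meters genuinely read differently.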
The Golden Ratio
Myths abound about the magical properties of the Golden Ratio. A Stanford University mathematician sets the record straight, here.