Audio noise measurement is a process carried out to assess the quality of audio equipment, such as the kind used in recording studios, broadcast engineering, and in-home high fidelity.
In audio equipment, noise is a low-level hiss or buzz that intrudes on the audio output. Every piece of equipment that the recorded signal subsequently passes through adds a certain amount of electronic noise; the process of removing this and other noise is called noise reduction.
Microphones, amplifiers and recording systems all add some electronic noise to the signals passing through them, generally described as hum, buzz or hiss. All buildings have low-level magnetic and electrostatic fields in and around them emanating from mains supply wiring, and these can induce hum into signal paths, typically 50 Hz or 60 Hz (depending on the country's electrical supply standard) and lower harmonics. Shielded cables help to prevent this, and on professional equipment where longer interconnections are common, balanced signal connections (most often with XLR or phone connectors) are usually employed. Hiss is the result of random signals, often arising from the random motion of electrons in transistors and other electronic components, or the random distribution of oxide particles on analog magnetic tape. It is predominantly heard at high frequencies, sounding like steam or compressed air.
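For a sense of the voltage levels involved in this kind of hiss, the thermal noise of a resistive source can be estimated with the Johnson-Nyquist formula. The short calculation below is purely illustrative and not drawn from the article; the 1 kΩ source resistance and 20 kHz bandwidth are arbitrary example values.

```python
import math

# Johnson-Nyquist thermal noise: v_rms = sqrt(4 * k * T * R * bandwidth).
# The resistance and bandwidth below are arbitrary illustrative values,
# not figures from the article.
k = 1.380649e-23       # Boltzmann constant, J/K
T = 290.0              # room temperature, kelvin
R = 1_000.0            # source resistance, ohms
bandwidth = 20_000.0   # audio bandwidth, Hz

v_rms = math.sqrt(4 * k * T * R * bandwidth)
print(f"thermal noise: {v_rms * 1e6:.2f} uV rms")   # roughly 0.57 uV rms
```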
Attempts to measure noise in audio equipment as RMS voltage, using a simple level meter or voltmeter, do not produce useful results; a special noise-measuring instrument is required. This is because noise contains energy spread over a wide range of frequencies and levels, and different sources of noise have different spectral content. For measurements to allow fair comparison of different systems, they must be made using a measuring instrument that responds in a way that corresponds to how we hear sounds. From this, three requirements follow. Firstly, frequencies above or below those that can be heard by even the best ears must be filtered out and ignored by bandwidth limiting (usually 22 Hz to 22 kHz). Secondly, the measuring instrument should give varying emphasis to different frequency components of the noise in the same way that our ears do, a process referred to as weighting. Thirdly, the rectifier or detector that converts the varying alternating noise signal into a steady indication of level should be slow to respond fully to brief peaks, just as our ears are; it should have the correct dynamics.
The proper measurement of noise, therefore, requires the use of a specified method, with defined measurement bandwidth and weighting curve, and rectifier dynamics. The two main methods defined by current standards are A-weighting and ITU-R 468 (formerly known as CCIR weighting).
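As a rough illustration of the three requirements above, the following sketch performs a band-limited (22 Hz to 22 kHz), A-weighted RMS noise measurement in the frequency domain, using the published IEC 61672 A-weighting formula. It is a minimal sketch rather than a standards-compliant meter: the function names and test values are illustrative, and it uses a simple RMS detector, so the quasi-peak rectifier dynamics required by ITU-R 468 are not modelled.

```python
import numpy as np

def a_weight_gain(f):
    """Linear gain of the A-weighting curve (IEC 61672), normalised to 0 dB at 1 kHz."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return ra * 10 ** (2.0 / 20)  # the +2.0 dB offset sets the curve to 0 dB at 1 kHz

def a_weighted_noise_dbfs(x, fs):
    """Band-limit (22 Hz - 22 kHz), A-weight and RMS-detect a noise record.

    Returns the level in dB relative to a full-scale sine wave (peak amplitude 1.0).
    Note: a plain RMS detector is used, not the ITU-R 468 quasi-peak rectifier.
    """
    n = len(x)
    mag = np.abs(np.fft.rfft(x)) / n               # single-sided spectrum magnitudes
    mag[1:-1] *= np.sqrt(2)                        # fold in negative-frequency energy
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= 22.0) & (freqs <= 22000.0)    # bandwidth limiting
    weighted = mag * a_weight_gain(freqs) * band   # frequency weighting
    rms = np.sqrt(np.sum(weighted**2))             # RMS detection (Parseval)
    return 20 * np.log10(rms * np.sqrt(2))         # re full-scale sine (rms 1/sqrt 2)

# Example: a record of low-level white noise
fs = 48_000
noise = np.random.default_rng(0).normal(scale=1e-4, size=fs)
print(f"A-weighted noise: {a_weighted_noise_dbfs(noise, fs):.1f} dBFS")
```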
A-weighting uses a weighting curve based on the equal-loudness contours that describe our hearing sensitivity to pure tones, but the assumption that such contours would also be valid for noise components turned out to be wrong. While the A-weighting curve peaks by about 2 dB around 2 kHz, our sensitivity to noise peaks by some 12.2 dB around 6 kHz.
When measurements started to be used in reviews of consumer equipment in the late 1960s, it became apparent that they did not always correlate with what was heard. In particular, the introduction of Dolby B noise reduction on cassette recorders was found to make them sound a full 10 dB less noisy, yet they did not measure 10 dB better. Various new methods were then devised, including one that used a harsher weighting filter and a quasi-peak rectifier, defined as part of the German DIN 45500 Hi-Fi standard. This standard, no longer in use, attempted to lay down minimum performance requirements in all areas for high-fidelity reproduction.
The introduction of FM radio, which also generates predominantly high-frequency hiss, also showed up the unsatisfactory nature of A-weighting, and the BBC Research Department undertook a research project to determine which of several weighting filter and rectifier characteristics gave results that were most in line with the judgment of a panel of listeners, using a wide variety of different types of noise. BBC Research Department Report EL-17 formed the basis of what became known as CCIR recommendation 468, which specified both a new weighting curve and a quasi-peak rectifier. This became the standard of choice for broadcasters worldwide, and it was also adopted by Dolby, for measurements on its noise-reduction systems which were rapidly becoming the standard in cinema sound, as well as in recording studios and the home.
Though it better represents what we truly hear, ITU-R 468 noise weighting gives figures that are typically some 11 dB worse than A-weighted ones, a fact that brought resistance from marketing departments reluctant to put worse-looking specifications on their equipment than the public had been used to. Dolby tried to get around this by introducing a version of its own, called CCIR-Dolby, which incorporated a 6 dB shift into the result (and a cheaper average-reading rectifier), but this only confused matters and was strongly disapproved of by the CCIR.
With the demise of the CCIR, the 468 standard is now maintained as ITU-R 468 by the International Telecommunication Union, and it forms part of many national and international standards, in particular those of the IEC (International Electrotechnical Commission) and the BSI (British Standards Institution). It is the only way of measuring noise that allows fair comparisons, and yet the flawed A-weighting has recently made a comeback in the consumer field, for the simple reason that it gives the lower figures that marketing departments consider more impressive.
Audio equipment specifications tend to include the terms signal-to-noise ratio and dynamic range, both of which have multiple definitions, sometimes treated as synonyms. The exact meaning must be specified along with the measurement.
Dynamic range used to mean the difference between the maximum level and the noise level, with the maximum level defined as the point at which the signal clips to a specified THD+N. The term has become corrupted by a tendency to quote the dynamic range of CD players as the noise level on a blank recording with no dither (in other words, just the analog noise content at the output). This is not particularly useful, especially since many CD players incorporate automatic muting in the absence of signal.
Since the early 1990s various writers such as Julian Dunn have suggested that dynamic range be measured in the presence of a low-level test signal. Thus, any spurious signals caused by the test signal or distortion will not degrade the signal-to-noise ratio.[1] This also addresses concerns about muting circuits.
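As a hedged sketch of this idea (illustrative only, not the exact procedure of any standard), the example below applies a low-level test tone to a device, notches the tone out of the captured output, measures the residual noise, and expresses the result relative to full scale. Function names, the notch width and the signal levels are all assumptions chosen for illustration, and no weighting or band-limiting is applied.

```python
import numpy as np

def dynamic_range_dbfs(output, fs, tone_freq, notch_width=100.0):
    """Estimate dynamic range from a device's output while a low-level test
    tone is present: notch out the tone, measure the residual noise, and
    express it relative to a full-scale sine (rms = 1/sqrt(2)).

    Illustrative sketch only: a real measurement would add weighting and
    band-limiting and follow the relevant standard's notch and detector rules.
    """
    n = len(output)
    mag = np.abs(np.fft.rfft(output)) / n
    mag[1:-1] *= np.sqrt(2)                               # single-sided -> total energy
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = np.abs(freqs - tone_freq) > notch_width / 2    # notch out the test tone
    residual_rms = np.sqrt(np.sum((mag * keep) ** 2))
    noise_floor_dbfs = 20 * np.log10(residual_rms * np.sqrt(2))
    return -noise_floor_dbfs                              # dynamic range in dB

# Example: a hypothetical device output = -60 dBFS 997 Hz tone plus a noise floor
fs = 48_000
t = np.arange(fs) / fs
tone = 10 ** (-60 / 20) * np.sin(2 * np.pi * 997 * t)     # -60 dBFS test tone
noise = np.random.default_rng(1).normal(scale=2e-5, size=t.size)
print(f"dynamic range: {dynamic_range_dbfs(tone + noise, fs, 997):.1f} dB")
```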
In 1999, Steven Harris and Clif Sanchez of Cirrus Logic published a white paper titled "Personal Computer Audio Quality Measurements", stating:
In 2000 the AES released AES Information Document 6id-2000, which defined dynamic range as "20 times the logarithm of the ratio of the full-scale signal to the r.m.s. noise floor in the presence of signal, expressed in dB FS", with the following note: