Dolby noise-reduction system
A Dolby noise-reduction system is one of a series of noise reduction systems developed by Dolby Laboratories for use in analog audio tape recording. The first was Dolby A, a professional broadband noise reduction system for recording studios, first demonstrated in 1965; the best-known is Dolby B, a sliding band system for the consumer market, which helped make high fidelity practical on cassette tapes, whose narrow tape and slow speed otherwise made them relatively noisy. Dolby B remains common on high-fidelity stereo tape players and recorders to the present day. Of the noise reduction systems, Dolby A and Dolby SR were developed for professional use; Dolby B, C, and S were designed for the consumer market. Aside from Dolby HX, all the Dolby variants work by companding: compressing the dynamic range of the sound during recording, and expanding it during playback.
Process
When recording a signal on magnetic tape, there is a low level of background noise that sounds like hissing. One way to counter this is to use low-noise tape, which records more signal and less noise. Other solutions are to run the tape at a higher speed or to use a wider tape. Cassette tapes were originally designed to trade fidelity for the convenience of portability, using a narrow 0.15 in (3.8 mm) tape running at a slow speed of 1⅞ in/s (4.76 cm/s), housed in a simple plastic shell. This was at a time when wider tapes running at 15 in/s (38 cm/s) or 7½ in/s (19 cm/s) were used for high-fidelity recordings, and 3¾ in/s (9.5 cm/s) for lower fidelity. As a result of their narrow tracks and slow speed, cassettes make tape hiss noticeable.

Dolby noise reduction is a form of dynamic pre-emphasis employed during recording, plus a form of dynamic de-emphasis used during playback, which work in tandem to improve the signal-to-noise ratio. The signal-to-noise ratio is simply how large the music signal is compared to the low level of tape noise present with no signal. When the music is loud, the low background hiss is not noticeable, but when the music is soft or silent, most or all of what can be heard is the noise. If the recording level were adjusted so that the music was always loud, the low-level noise would not be audible.
One cannot simply increase the volume of the recording to achieve this end; tapes have a maximum volume they can record, so already-loud sounds will become distorted. The idea is to increase the volume of the recording only when the original material is not already loud, and then reduce the volume by the same amount on playback so that the signal returns to the original volume levels. When the volume is reduced on playback, the noise level is reduced by the same amount. This basic concept, increasing the volume to overwhelm inherent noise, is known as pre-emphasis, and is found in a number of products.
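The companding round trip described above can be sketched numerically. This is an illustrative model, not Dolby circuitry: the 10 dB boost and the signal and hiss amplitudes are assumed values chosen to make the arithmetic visible.

```python
import math

def db(ratio):
    """Express a linear amplitude ratio in decibels."""
    return 20 * math.log10(ratio)

def db_to_lin(gain_db):
    """Convert a dB gain to a linear amplitude factor."""
    return 10 ** (gain_db / 20)

BOOST_DB = 10.0   # hypothetical pre-emphasis applied to a quiet passage
signal = 0.01     # quiet program material (linear amplitude, assumed)
hiss = 0.001      # fixed tape noise added during recording (assumed)

# Without noise reduction: the hiss sits a fixed distance below the signal.
snr_plain = db(signal / hiss)

# With companding: boost before the tape, cut by the same amount after.
recorded = signal * db_to_lin(BOOST_DB)      # pre-emphasis on record
played = recorded / db_to_lin(BOOST_DB)      # de-emphasis restores the signal
residual_hiss = hiss / db_to_lin(BOOST_DB)   # the hiss is cut along with it
snr_nr = db(played / residual_hiss)

print(round(snr_plain), round(snr_nr))  # the SNR improves by the boost amount
```

The signal comes back at its original level while the tape's hiss, which was only added after the boost, is attenuated by the full de-emphasis amount.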
On top of this basic concept, Dolby noise reduction systems add another improvement. This takes into account the fact that tape noise is largely heard at frequencies above 1,000 Hz. It is the lower-frequency sounds that are often loud, like drum beats, so by only applying the companding to certain frequencies, the total amount of distortion of the original signal can be reduced and focused only on the problematic frequencies. The differences in the various Dolby products are largely evident in the precise set of frequencies that they use and the amount of modification of the original signal volume that is applied to each of the frequency bands.
Within each band, the amount of pre-emphasis applied depends on the original signal volume. For instance, in Dolby B a low-level signal is boosted by 10 dB, while signals at the "Dolby Level", +3 VU, receive no modification at all. Between the two limits, a varying amount of pre-emphasis is applied. On playback, the opposite process is applied, based on the relative signal content above 1 kHz. Thus, as this portion of the signal decreases in amplitude, the higher frequencies are progressively attenuated, which also reduces the level of the constant background noise on the tape when and where it would be most noticeable.
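The level-dependent boost can be sketched as a simple function. Only the endpoints come from the text (full 10 dB boost for quiet signals, no boost at Dolby Level); the linear taper and the 40 dB taper range are assumptions for illustration, not the actual Dolby B law.

```python
def b_type_boost_db(level_db, max_boost_db=10.0, dolby_level_db=0.0,
                    taper_range_db=40.0):
    """Illustrative record-side boost (dB) for the band above ~1 kHz.

    Signals at or above Dolby Level (taken here as 0 dB relative) get no
    boost; signals 40 dB or more below it get the full boost. The linear
    taper in between is an assumed shape, not the real compander curve.
    """
    if level_db >= dolby_level_db:
        return 0.0
    deficit = dolby_level_db - level_db
    return max_boost_db * min(1.0, deficit / taper_range_db)

# On playback, the matching de-emphasis subtracts the same amount:
#   output_db = recorded_db - b_type_boost_db(estimated_level_db)
```

Because the playback side applies the mirror image of the same curve, any error in estimating the signal level, for example from a miscalibrated deck, breaks the cancellation, which is why calibration matters (see below).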
The two processes are intended to cancel each other out as far as the actual recorded program material is concerned. During playback, only de-emphasis is applied to the incoming off-tape signal and noise. After playback de-emphasis is complete, the apparent noise in the output signal is reduced; the process should produce no effect noticeable to the listener other than the reduced background noise. However, playback without noise reduction produces a noticeably brighter sound.
The correct calibration of the recording and playback circuitry is critical in order to ensure faithful reproduction of the original program content. The calibration can easily be upset by poor-quality tape, dirty or misaligned recording/playback heads, or using inappropriate bias levels/frequency for the tape formulation, as well as tape speed when recording or duplicating. This can manifest itself as muffled-sounding playback, or "breathing" of the noise level as the volume level of the signal varies.
On some high-end consumer equipment, a Dolby calibration control is included. For recording, a reference tone at Dolby Level may be recorded for accurate playback level calibration on another transport. At playback, the same recorded tone should produce the identical output, as indicated by a Dolby logo marking at approximately +3 VU on the VU meter. In consumer equipment, Dolby Level is defined as 200 nWb/m, and calibration tapes were available to assist with the task of correct level setting. For accurate off-the-tape monitoring during recording on 3-head tape decks, both processes must be employed at once, and circuitry provided to accomplish this is marketed under the "Double Dolby" label.
Dolby A
Dolby A-type noise reduction was the Dolby company's first noise reduction system, presented in 1965. It was intended for use in professional recording studios, where it became commonplace, gaining widespread acceptance at the same time that multitrack recording became standard. The input signal is split into four frequency bands by filters with 12 dB per octave slopes, with cutoff frequencies as follows: a low-pass at 80 Hz; a band-pass from 80 Hz to 3 kHz; a high-pass at 3 kHz; and another high-pass at 9 kHz. The compander circuit has a threshold of −40 dB, with a ratio of 2:1 for a compression/expansion of 10 dB. This provides about 10 dB of noise reduction, increasing to a possible 15 dB at 15 kHz, according to articles written by Ray Dolby and published by the Audio Engineering Society and Audio.

As with the Dolby B-type system, correct matching of the compression and expansion processes is important. The calibration of the expansion unit for magnetic tape uses a flux level of 185 nWb/m, the level used on industry calibration tapes such as those from Ampex; this is set to 0 VU on the tape recorder playback and to Dolby Level on the noise reduction unit. In record mode, a characteristic tone generated inside the noise reduction unit is set to 0 VU on the tape recorder and to 185 nWb/m on the tape.
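The band split and the compressor law above can be sketched as follows. The band edges, threshold, ratio, and 10 dB maximum come from the figures in the text; the hard knee and the exact way the 2:1 ratio is applied below threshold are assumptions for illustration.

```python
# Band edges (Hz) from the text; each band is companded independently.
DOLBY_A_BANDS = [
    ("low-pass",  None,  80),
    ("band-pass", 80,    3_000),
    ("high-pass", 3_000, None),
    ("high-pass", 9_000, None),  # overlaps the 3 kHz band, adding HF reduction
]

def a_type_boost_db(level_db, threshold_db=-40.0, ratio=2.0,
                    max_boost_db=10.0):
    """Sketch of the per-band A-type compressor law.

    Below the -40 dB threshold, a 2:1 ratio halves the level deficit
    (i.e. boosts the signal by half the distance below threshold) until
    the boost saturates at 10 dB. The hard knee is an assumed shape.
    """
    if level_db >= threshold_db:
        return 0.0
    boost = (threshold_db - level_db) * (1.0 - 1.0 / ratio)
    return min(boost, max_boost_db)
```

With these numbers, a −50 dB signal in a given band is boosted by 5 dB, and anything at −60 dB or below receives the full 10 dB; on playback the expander applies the inverse mapping band by band.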
The Dolby A-type system also saw some use as the method of noise reduction in optical sound for motion pictures. In 2004, Dolby A-type noise reduction was inducted into the TECnology Hall of Fame, an honor given to "products and innovations that have had an enduring impact on the development of audio technology."
Dolby B
Dolby B-type noise reduction was developed after Dolby A and was introduced in 1968. It consisted of a single sliding band system providing about 9 dB of noise reduction, primarily for use with cassette tapes. It was much simpler than Dolby A and therefore much less expensive to implement in consumer products. Dolby B recordings are acceptable when played back on equipment that does not possess a Dolby B decoder, such as inexpensive portable and car cassette players. Without the de-emphasis of the decoder, the sound is perceived as brighter because high frequencies are emphasized, which can be used to offset a "dull" high-frequency response in inexpensive equipment. However, Dolby B provides less effective noise reduction than Dolby A, generally by more than 3 dB.

The Dolby B system is effective from approximately 1 kHz upwards; the noise reduction provided is 3 dB at 600 Hz, 6 dB at 1.2 kHz, 8 dB at 2.4 kHz, and 10 dB at 5 kHz. The width of the noise-reduction band is variable, as it is designed to respond to both the amplitude and the frequency distribution of the signal. It is thus possible to obtain significant noise reduction down to quite low frequencies without causing audible modulation of the noise by the signal.
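The quoted noise-reduction figures can be tabulated and interpolated to estimate the effect at intermediate frequencies. Only the four tabulated points come from the text; the log-frequency interpolation between them, and the flat extrapolation outside them, are assumptions.

```python
import math

# Approximate B-type noise reduction vs frequency, from the figures above.
NR_POINTS = [(600, 3.0), (1_200, 6.0), (2_400, 8.0), (5_000, 10.0)]

def b_type_nr_db(freq_hz):
    """Interpolate the quoted noise-reduction figures on a log-frequency
    axis. Behaviour between and beyond the quoted points is an assumed
    approximation, not a published Dolby curve."""
    if freq_hz <= NR_POINTS[0][0]:
        return NR_POINTS[0][1]
    if freq_hz >= NR_POINTS[-1][0]:
        return NR_POINTS[-1][1]
    for (f0, nr0), (f1, nr1) in zip(NR_POINTS, NR_POINTS[1:]):
        if f0 <= freq_hz <= f1:
            t = (math.log(freq_hz) - math.log(f0)) / (math.log(f1) - math.log(f0))
            return nr0 + t * (nr1 - nr0)
```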
From the mid-1970s, Dolby B became standard on commercially pre-recorded music cassettes even though some low-end equipment lacked decoding circuitry. Most pre-recorded cassettes use this variant. VHS video recorders used Dolby B on their linear stereo audio tracks.
Prior to the introduction of later consumer variants, cassette hardware supporting Dolby B and cassettes encoded with it would be labeled simply "Dolby System," "Dolby NR", or wordlessly with the Dolby symbol. This continued in some record labels and hardware manufacturers even after Dolby C had been introduced, during the period when the new standard was relatively little-known.
JVC's ANRS system, used in place of Dolby B on earlier JVC cassette decks, is considered compatible with Dolby B. JVC eventually abandoned the ANRS standard in favor of official Dolby B support; some JVC decks exist whose noise-reduction toggles have a combined "ANRS / Dolby B" setting.
Dolby FM
In the early 1970s, some expected Dolby NR to become standard in FM radio broadcasting, and some tuners and amplifiers were manufactured with decoding circuitry; there were also some tape recorders with a Dolby B "pass-through" mode. In 1971 WFMT started to transmit programs with Dolby NR, and soon some 17 stations broadcast with noise reduction, but by 1974 the practice was already in decline. Dolby FM was based on Dolby B, but used a modified 25 μs pre-emphasis time constant and a frequency-selective companding arrangement to reduce noise.

A similar system named High Com FM was evaluated in Germany between July 1979 and December 1981 by the IRT, and field-trialed up to 1984. It was based on the Telefunken High Com broadband compander system, but was never introduced commercially in FM broadcasting. Another competing system was FMX, which was based on CX.