Even though the standard for high-frequency pre-magnetization, or a.c. bias, became more or less sacrosanct around World War II, does it truly represent the most important factor in the serious cassette tape recordist’s life?
By: Ringo Bones
The newer hi-fi generation weaned on better CD players and the recent vinyl LP revival may be too young to remember, but there was a time when recording bias level – i.e. the high-frequency alternating current used to electromagnetically precondition the tape in analog audio recording – was the be-all and end-all of the serious cassette tape recordist’s life. Around the time B-list actor Ronald Reagan was nominated by the US Republican Party to run as the next president of the United States and the Ayatollah Khomeini was busy enjoying the spoils of his “Islamic Revolution”, some upmarket cassette tape decks started sporting 4-bit microprocessors that automatically optimized blank-tape and deck co-performance beyond the Normal Bias / Type-I, High Bias / Type-II, Ferrichrome / Type-III, and Metal / Type-IV selector switch of the typical cassette deck of the era. And many enthusiasts who pressed cassette tape into high-fidelity music recording service eventually found out that even a slightly excessive recording bias level can erase high frequencies – the very spectrum that tends to give life to recorded music. Given that recording bias level could, for all intents and purposes, be considered the most important factor in serious cassette tape audio recording at the time, what is it and what makes it so special?
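To see why over-biasing is so punishing, here is a toy numerical illustration – emphatically not a model of real tape physics. The bell-curve shapes, peak positions, and widths are made-up assumptions chosen only to reproduce the qualitative behaviour described above: high-frequency output peaks at a lower bias setting and collapses faster past its optimum than mid-band output does.

```python
import math

# Toy illustration (not real tape physics): treat relative record sensitivity
# as a bell curve of bias current. The assumption -- consistent with the
# over-bias behaviour described in the article -- is that the high-frequency
# curve peaks at a lower bias and is narrower than the mid-band curve.
# All peak positions and widths below are invented for illustration.

def sensitivity(bias: float, peak_bias: float, width: float) -> float:
    """Relative output (0..1) for a tone whose sensitivity peaks at peak_bias."""
    return math.exp(-((bias - peak_bias) / width) ** 2)

for bias in (0.6, 0.8, 1.0, 1.2, 1.4):  # bias relative to the mid-band optimum
    mid = sensitivity(bias, peak_bias=1.0, width=0.45)   # 1,000 Hz tone
    high = sensitivity(bias, peak_bias=0.8, width=0.30)  # 10,000 Hz tone
    print(f"bias {bias:.1f}x: 1 kHz output {mid:.2f}, 10 kHz output {high:.2f}")

# Past the mid-band optimum (bias = 1.0x) the 10 kHz output falls off far
# faster than the 1 kHz output -- the "erasure of high frequencies" at issue.
```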
Around World War II, the manufacturer of the AEG Magnetophon R22 magnetic tape recorder for recording and broadcast use more or less set the standard for bias levels in all tape-based analog audio recording. The tape bias alternating-current frequency is usually set to at least five times the highest audio frequency to be reproduced, in order to minimize audible interaction between the pre-magnetization a.c. bias and the harmonics of the highest audio frequencies – a sort of “Nyquist Criterion” for analog tape-based audio recording. Thus, if you seek a tape recording frequency response that goes up to 20,000 Hz, the bias frequency should be at least 100,000 Hz. And some upmarket tape decks manufactured between the late 1980s and mid 1990s had their record bias frequency set as high as 125,000 Hz!
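The arithmetic of that rule of thumb is simple enough to spell out. The sketch below is just the rule in code form: the five-times multiplier and the 20,000 Hz / 100,000 Hz example come from the paragraph above, while the function name is my own invention.

```python
# Back-of-envelope check of the "five times the highest audio frequency"
# rule of thumb. The 5x multiplier and the example figures come from the
# article; minimum_bias_frequency() is just a name for the arithmetic.

def minimum_bias_frequency(max_audio_hz: float, multiplier: float = 5.0) -> float:
    """Lowest a.c. bias frequency satisfying the rule of thumb."""
    return multiplier * max_audio_hz

if __name__ == "__main__":
    for top_end in (15_000, 20_000, 25_000):
        print(f"{top_end:>6} Hz bandwidth -> bias of at least "
              f"{minimum_bias_frequency(top_end):,.0f} Hz")
    # A 20,000 Hz response calls for a 100,000 Hz bias, as in the article;
    # the 125,000 Hz decks mentioned above clear even a 25,000 Hz top end.
```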
During the cassette tape’s heyday, bias level was the trickiest consideration in the whole high-fidelity music recording process, in large part because the optimum record bias level for a given tape was largely a matter of controversy. Depending on your criteria, you could choose a bias level – either manually or via your newfangled self-adjusting cassette deck’s built-in 4-bit microprocessor – that:

(1) maximizes the maximum output level (MOL) of the cassette tape at some reference frequency – usually 1,000 Hz;

(2) minimizes third-harmonic distortion from your tape – for a test frequency of your choice, but often one in the vicinity of 1,000 Hz;

(3) minimizes modulation noise;

(4) minimizes intermodulation distortion in any of a number of two-tone tests; or

(5) satisfies any of a number of “ideal” criteria for the relative useable output levels from the cassette tape at low and high frequencies.
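What might one of those self-adjusting decks actually be doing behind its selector switch? The sketch below is purely hypothetical – the article documents no deck’s firmware – and record_and_measure() is an invented stand-in for the deck recording a 1,000 Hz test tone at each bias setting and reporting back playback level and third-harmonic distortion. It covers only criteria (1) and (2) above.

```python
# Hypothetical sketch of a self-calibrating deck's bias sweep for criteria
# (1) and (2). record_and_measure() is an invented hook, not a real API.

from typing import Callable, List, Tuple

def calibrate_bias(
    bias_steps: List[float],
    record_and_measure: Callable[[float], Tuple[float, float]],
    criterion: str = "max_output",
) -> float:
    """Sweep bias settings; return the one best satisfying the chosen criterion."""
    results = []
    for bias in bias_steps:
        level_db, third_harmonic_pct = record_and_measure(bias)  # hypothetical hook
        results.append((bias, level_db, third_harmonic_pct))
    if criterion == "max_output":        # criterion (1): maximize 1 kHz output
        return max(results, key=lambda r: r[1])[0]
    if criterion == "min_distortion":    # criterion (2): minimize 3rd harmonic
        return min(results, key=lambda r: r[2])[0]
    raise ValueError(f"unknown criterion: {criterion}")

if __name__ == "__main__":
    # Invented stand-in measurement: output peaks at 1.0x bias, distortion
    # bottoms out at 1.1x bias -- numbers chosen so the two criteria conflict.
    def fake_measure(bias: float) -> Tuple[float, float]:
        level_db = -10.0 * (bias - 1.0) ** 2
        thd_pct = 1.0 + 5.0 * (bias - 1.1) ** 2
        return level_db, thd_pct

    steps = [round(0.80 + 0.05 * i, 2) for i in range(9)]  # 0.80x .. 1.20x
    print("bias for max output:    ", calibrate_bias(steps, fake_measure, "max_output"))
    print("bias for min distortion:", calibrate_bias(steps, fake_measure, "min_distortion"))
```

Note that even in this toy demonstration the two criteria disagree on the best bias setting – which is precisely why the optimum was such a controversial issue.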