Analog vs Digital audio systems

Some common misunderstandings and misconceptions clarified or debunked.
Analog vs digital, record vs CD, valve (tube) vs solid state (transistor) and so on.
superphool

Analog vs Digital audio systems

Post by superphool » Sat Aug 24, 2019 3:52 pm

There is a massive amount of misleading and untrue information in circulation about analog and digital audio - both how they "work" and the actual or possible sound quality of each.

This site's reason for existence is to try and dispel some of these myths and help people avoid getting ripped off and wasting money on either overpriced or under-performing equipment, cables & accessories.


The basics of analog and digital systems:

In an analog system, the wanted signal - eg. for audio, the changes in air pressure from sound vibrations, the movements of a microphone diaphragm they cause, and the electrical signal that creates - is represented directly by a varying voltage.

Think of the changes in that voltage over time as a "graph" - which is exactly what you see if you view an analog audio signal on an oscilloscope. You can see the same thing if you zoom in on an audio file in Audacity or a similar audio editing program.



Any imperfections in the equipment that then processes or stores that varying voltage are either added to it - noise - or slightly change it from the original shape - distortion.

And _every_ stage of any electronic system adds some level of noise or distortion, whether the change is perceptible or not. That is a basic fact of physics, there is no such thing as a transistor or valve/tube that amplifies or passes a signal with zero noise or distortion. Even passive components like resistors add a tiny amount of noise.
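To put a number on "even resistors add noise": the thermal (Johnson-Nyquist) noise of a resistor is V_rms = sqrt(4·k·T·R·B). A quick illustrative Python calculation, assuming a 10 kΩ resistor at room temperature measured over the roughly 20 kHz audio band (the component values here are just examples):

```python
import math

# Johnson-Nyquist thermal noise of a resistor: V_rms = sqrt(4 k T R B)
k = 1.380649e-23        # Boltzmann constant, J/K
T = 300                 # room temperature, kelvin
R = 10_000              # an example 10 kilohm resistor
bandwidth = 20_000      # the audio band, approx. 20 kHz wide

v_noise = math.sqrt(4 * k * T * R * bandwidth)
print(f"{v_noise * 1e6:.2f} microvolts RMS")   # about 1.8 uV
```

Tiny - but not zero, and it is there before any transistor or valve even touches the signal.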

The important thing is to keep the noise and distortion as low as possible at every stage, and minimise the number of different stages and processes the audio passes through between recording and playback.


Each stage of an analog system can be thought of as something like a person re-drawing or tracing the sound waveform, very closely but not quite perfectly. The slightest jitters or imperfections in the copy the person - or transistor or valve - makes are noise and distortion, causing signal degradation at every stage and every process.


The more stages an analog signal passes through and the more times it is copied and reproduced, the more the quality degrades and the more hiss and distortion is heard when it is played back.
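That generational degradation is easy to simulate. A minimal Python sketch (the noise level per stage is an arbitrary assumption, purely for illustration):

```python
import math
import random

def analog_copy(signal, noise_level=0.01):
    """One analog copy stage: the waveform is traced almost
    perfectly, but a little random noise is added each time."""
    return [s + random.gauss(0, noise_level) for s in signal]

# A clean 1000-sample test waveform.
original = [math.sin(2 * math.pi * t / 100) for t in range(1000)]

# Pass it through 20 copy generations, like dubbing tape to tape.
signal = original
for _ in range(20):
    signal = analog_copy(signal)

# The RMS error versus the original grows with every generation.
rms_error = math.sqrt(
    sum((a - b) ** 2 for a, b in zip(signal, original)) / len(original)
)
print(f"RMS error after 20 generations: {rms_error:.4f}")
```

Because each stage's noise is independent, the error grows roughly with the square root of the number of generations - the copies only ever get worse, never better.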


In a digital system, the waveform "graph" is converted to a sequence of points as early as practical in the system.

Imagine the sound waveform drawn on a fine grid, with the level above or below zero being measured at every grid line along the time axis.

The audio becomes a list of numbers which, when plotted back to a graph at the same time scale, recreates the original shape from a series of closely spaced points - as long as the measurement samples are close enough together, ie. a high enough sample frequency.
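As a sketch of that "list of numbers" idea, here is a pure test tone being sampled at the CD rate in Python (the 440 Hz tone and 10 ms duration are just example choices):

```python
import math

SAMPLE_RATE = 44_100   # samples per second (the CD standard)
FREQ = 440.0           # an example 440 Hz test tone

# Sample one hundredth of a second of the waveform: the continuous
# voltage becomes a plain list of numbers, one per grid line in time.
samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE // 100)]

print(len(samples))    # 441 numbers represent 10 ms of audio
```

Those 441 numbers are the entire content of that 10 ms of audio - nothing else needs to be stored.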


Those numbers are no different to any other kind of digital or computer data. They can be copied, stored, sent from device to device through any kind of data storage or cable system and they will be unchanged, just like copying and transferring text files or other documents.
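A trivial Python demonstration of that point - the "audio" bytes here are just stand-in data, but the principle is identical for a real PCM stream:

```python
import hashlib

# A digital audio stream is just bytes; a stand-in for raw PCM data:
audio_data = bytes(range(256)) * 1000

# Two successive "copy stages" - file copy, network transfer, etc.
copy1 = bytes(audio_data)
copy2 = bytes(copy1)

# After any number of copies the data is bit-for-bit identical,
# which a checksum comparison confirms.
same = hashlib.sha256(audio_data).digest() == hashlib.sha256(copy2).digest()
print(same)  # True
```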

Digital audio signals (or digital representations of any kind of analog data) do not degrade, gain noise or incur distortion, no matter how many stages of copying and reproduction they pass through.

That is one of the most fundamental and most important facts relating to digital audio.



Some audiophiles argue that the digitisation and conversion to points/numbers, then conversion back to an analog waveform is not perfect; the "join-the-dots" conversion back to analog does not perfectly reproduce the original analog signal that was digitised.

That is technically true - but also totally irrelevant, when dealing with "CD Quality" or higher digital standards.

The CD audio standard samples the analog voltage 44,100 times per second. By the sampling theorem, that captures any frequency below half the sample rate - 22,050 Hz - so any audio frequency that can be heard by a normal adult human can be accurately stored and reproduced.
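The arithmetic behind that is one line - half the sample rate (the Nyquist frequency) comfortably clears the usual 20 kHz figure for the upper limit of adult hearing:

```python
SAMPLE_RATE = 44_100            # CD standard, samples per second
nyquist = SAMPLE_RATE / 2       # highest representable frequency, Hz

HEARING_LIMIT = 20_000          # rough upper limit of adult hearing, Hz

print(nyquist)                  # 22050.0
print(nyquist > HEARING_LIMIT)  # True - the CD rate covers the audio band
```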

Frequencies above 20,000 Hz are filtered out before digitisation at that CD sample rate, to prevent false data points being included in the digital audio. However, that filtering is above the hearing range, so nothing is really lost.

When the audio waveform is recreated from the samples, the raw waveform has steps - but those are removed by again filtering frequencies above the audio range.


That's just for the basic CD Audio standard, which is now almost 40 years old - newer digital audio systems often operate at higher sample rates, eg. 88.2 kHz, 96 kHz or even higher, and would allow ultrasonic audio frequencies to be reproduced, IF they existed in the original audio before digitisation.



In the early days of computer-based digital audio and portable media players, storage systems and memory were small and expensive - meaning digital audio was often resampled at lower sample rates, limiting frequency response, or had "lossy" compression applied, significantly reducing the quality compared to the original audio - sometimes to abysmally poor standards.

That was due to limitations in home computers & some portable audio equipment at the time and has no bearing on the fundamentals of digital audio or the quality of "CD standard" audio.
