A Brief History of Digital Audio


Podcast Transcript

Right now, you are listening to the sound of my voice on some sort of digital audio device. In fact, almost all of the audio you consume today was digitally recorded or edited at some point in the process.

But sound is inherently analog. How does sound, the movement of air, become converted into 1s and 0s? 

…and once sound is digitally converted, how is it distributed, and how has the digitization of sound changed the business of music and audio?

Learn more about digital audio, how it works, and how it changed how we consume audio on this episode of Everything Everywhere Daily.

Pretty much all recorded audio nowadays is digital or has been digitized in some form. Even if you purchase a vinyl record, the playback mechanism might be analog, but the original recording or mixing was probably done digitally. 

The words you are listening to right now were digitally recorded and edited, and you are hearing them on some sort of digital device, most probably a smartphone. 

So, how does digital audio work? 

We first have to start with the basics of what sound is. Sound is made up of waves of air pressure. You’ve probably seen a sound wave drawn before: a sinusoidal pattern of peaks and troughs. 

The key thing to understand about sound waves, and the thing that defines what anything analog is, is that they are continuous. 

If something is continuous, it means that there are no gaps. Imagine drawing a line on a piece of paper with a pen. At every point in the line, there is ink. There are no gaps between any two points. 

If something is not continuous, then it is discrete. Unlike something continuous, something discrete can be broken down into a finite number of values, steps, or objects. 

Imagine that instead of drawing a line on a piece of paper, you drew a series of very small dots in a line. There would be gaps between the dots, making it discrete. However, if you drew enough dots that were very small and close together, it would appear to be a continuous line.

This distinction between continuity and discreteness is the difference between analog and digital. 

How, then, do you convert something analog and continuous into something digital and discrete? 

The answer has something to do with the example I gave above of drawing a line. If you can draw a discrete line with enough points, it will appear to an observer as a continuous line. 

So now, let’s go back to our sound wave. If we could capture a continuous sound wave and draw it on a piece of paper, how could we turn this into something discrete and digital? 

If we put the wave on the horizontal x-axis, we could approximate the shape of the wave by making rectangles from the x-axis to the wave. Depending on how many rectangles we use, it might look rather blocky, but it would get the overall shape. 

However, if we used smaller and smaller rectangles, it would eventually look like a very smooth, continuous curve. 

The process of turning something analog into something digital is known as sampling. 
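As a rough illustration of my own (nothing from the episode itself), here is how sampling can be sketched in a few lines of code: a continuous 440 Hz tone is reduced to a list of discrete values, assuming a hypothetical sample rate of 8,000 samples per second.

```python
import math

def sample_wave(freq_hz, sample_rate_hz, duration_s):
    """Sample a continuous sine wave at discrete, evenly spaced points in time."""
    n_samples = int(sample_rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# A 440 Hz tone (concert A) sampled 8,000 times per second for 10 milliseconds
samples = sample_wave(440, 8000, 0.010)
print(len(samples))  # 80 discrete values standing in for a continuous wave
```

Each entry in the list is one "dot" on the line: spaced closely enough, the dots reconstruct the wave.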

Surprisingly, sampling actually goes back well before the age of computing. It was first proposed as a mechanical way to put multiple telegraph transmissions on a single line, a technique known as multiplexing.

In the 1920s, a technique called pulse-code modulation was developed. Pulse-code modulation is essentially a way to do what I just described: it takes pulses of an analog signal and measures each pulse so the wave can be broken apart and transmitted.

The first systems were designed for facsimile systems that could send simple images over telegraph and telephone lines. 

In 1937, pulse-code modulation was used for the first time on voice signals by a British engineer named Alec Reeves. Even though he received patents for his invention, there was no real practical use for it at the time. 

Eventually, computers came along, and the applications of pulse-code modulation became obvious, as computers could easily handle discrete digital information.

However, computers were initially only involved in the transmission of signals, not the recording of digital audio signals. 

In 1967, Japan’s national broadcasting company, NHK, developed a 30 kHz, 12-bit device that could encode an audio signal and store it on videotape.

Here, I should explain what those numbers mean: 30 kHz and 12-bit. Every time there is a pulse in pulse-code modulation, a measurement is taken. The number of bits determines how many possible values each measurement can take. This is known as bit depth. 

For example, at 8 bits, each measurement can take one of 256 possible values. This is somewhat like the number of colors an individual pixel can have on a computer screen. 

At 16 bits, there are 65,536 possible values, and at 32 bits, there are 4,294,967,296 distinct levels. 
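To make those numbers concrete, here is a small sketch of my own (not from the episode) showing how bit depth maps to the number of representable levels, and how a sample gets rounded to the nearest level:

```python
def levels(bit_depth):
    """Number of distinct values a sample can take at a given bit depth."""
    return 2 ** bit_depth

def quantize(x, bit_depth):
    """Round a sample in the range -1.0..1.0 to the nearest signed level."""
    max_level = 2 ** (bit_depth - 1) - 1  # 32767 for 16-bit audio
    return round(x * max_level)

print(levels(8))   # 256
print(levels(16))  # 65536
print(levels(32))  # 4294967296

# A sample a quarter of the way up the wave, stored as a 16-bit integer
print(quantize(0.25, 16))  # 8192
```

The rounding step is exactly where the continuous wave becomes discrete: every measurement is forced onto one of a finite set of levels.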

30 kHz simply refers to the number of pulses that are measured; in this case, 30,000 pulses were measured each second. This is known as the sample rate. 

So, the system built by NHK in 1967 sampled a sound wave 30,000 times a second and saved each sample as a 12-bit number, a string of twelve 1s and 0s. 

Both the bit depth and the sample rate determine the amount of data recorded and the quality of the audio. 

While digital audio recording was possible as early as 1967, there was a big problem. 

All that sampling took a lot of computational power, and all that data required what was, at the time, an enormous amount of storage. 

Digital audio simply wasn’t practical given the computing power at the time. However, thanks to Moore’s Law, the price of computing kept dropping dramatically, and by the late 1970s, the first commercial digital recordings were created and released. 

Sony and 3M created prototype digital recording systems and released the first commercial pulse-code modulation systems.

However, the real revolution in digital audio came on March 8, 1979, when the Philips corporation unveiled the prototype of its compact disc player. The first prototype CD was a recording of Antonio Vivaldi’s The Four Seasons.

In July of that year, Warner Brothers released the first digitally recorded record, Bop Till You Drop by guitarist Ry Cooder. 

The compact disc, hereafter just known as a CD, wasn’t the creation of just one company. It was primarily a joint effort between Philips and Sony, who were both working on similar laser-based digital audio systems. 

If they had come out with competing systems, it could have been another VHS/Betamax fiasco, and digital audio might have been stillborn. 

In the creation of a standard, they had to agree on several things, including the bit depth and sampling rate. What they eventually agreed upon was a sample rate of 44.1 kHz, and a bit depth of 16-bits on each stereo channel. 

The sample rate wasn’t arbitrarily chosen. Sound travels in waves, so it has a frequency. The limits of human hearing range from approximately 20 hertz to 20 kHz. 

There is a principle known as the Nyquist rate, which states that for the accurate reconstruction of an analog waveform, you have to sample the wave at a rate at least twice that of the highest frequency in the sound. 

So, 44.1 kHz was selected as a sample rate because it is more than twice the highest frequency any human can reasonably hear. The highest sound frequency a CD can reproduce is therefore 22.05 kHz.
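As a quick sketch of the arithmetic (my own illustration), the Nyquist relationship works in both directions:

```python
def min_sample_rate(highest_freq_hz):
    """Nyquist rate: sampling must be at least twice the highest frequency."""
    return 2 * highest_freq_hz

def highest_reproducible(sample_rate_hz):
    """The highest frequency a given sample rate can faithfully capture."""
    return sample_rate_hz / 2

print(min_sample_rate(20_000))       # 40000 -- the floor set by human hearing
print(highest_reproducible(44_100))  # 22050.0 -- the CD's upper limit
```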

This was relatively easy to decide because it was based on a human limit. Sony wanted 44.1 kHz, and Philips wanted 44 kHz.

There was a great deal of debate about the bit depth. Philips wanted 14 bits, and Sony wanted 16. In the end, Sony won out, as it had the higher-quality proposal for both bit depth and sample rate.

They also eventually decided on a disc that was 120mm in diameter and could hold 74 minutes of music. Philips initially wanted a 115mm disc, but Sony insisted on a 120mm disc, supposedly so it could hold the 1951 recording of Wilhelm Furtwängler conducting Beethoven’s Ninth Symphony at the Bayreuth Festival.

That story has been declared by some to be apocryphal, but others who worked at Sony at the time claim it was true, though it had nothing to do with Beethoven. It had to do with the fact that Philips had a plant ready to produce 115mm discs, and Sony did not. 

Forcing the larger disc required Philips to retool its plant, giving Sony time to catch up.

The longer play time also gave it a significant advantage over vinyl long-play records, which could only hold 22 minutes of music per side. 

The CD was introduced to the world in 1981, and the first discs were commercially available in October 1982. 

Compact disc sales exploded, and by 1987, digital music sales surpassed analog vinyl sales for the first time. 

Despite other digital music formats being released, such as digital tape and CD singles, the 120mm CD was by far the most popular digital format for music. 

The average CD could store about 700 to 800 megabytes of audio data. That might not seem like much today, but 25 years ago, that was a lot. Personal computers and the internet began to expand in the 90s, and people began copying CDs to their computer hard drives. 

In the late 90s, computers usually had hard drives measured in the low gigabytes, so a single CD could take up a large amount of space. CDs required so much space because the audio data was uncompressed. 
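You can check that uncompressed CD audio really does add up fast. This back-of-the-envelope sketch of mine, using the CD parameters from earlier, computes the raw data rate and the size of a full 74-minute disc:

```python
SAMPLE_RATE = 44_100  # samples per second, per channel
BIT_DEPTH = 16        # bits per sample
CHANNELS = 2          # stereo

bits_per_second = SAMPLE_RATE * BIT_DEPTH * CHANNELS
print(bits_per_second)  # 1411200 bits/s, about 1.4 Mbit/s

bytes_per_minute = bits_per_second * 60 // 8
megabytes_74_min = bytes_per_minute * 74 / 1_000_000
print(round(megabytes_74_min))  # 783 -- a huge file for a 90s hard drive
```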

The solution to the storage problem was compressing the data. In particular, the solution was an algorithm developed in Germany known as MPEG-1 Audio Layer III, or as it is better known, MP3. 

MP3 could shrink the size of an audio file by a factor of roughly 5:1 to 10:1, depending on the bit rate chosen and the complexity of the audio. 
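To put those ratios in perspective, here is an illustrative calculation of my own, with a few common MP3 bit rates as assumed inputs, comparing each against the uncompressed CD data rate:

```python
CD_BITRATE = 1_411_200  # uncompressed CD audio, bits per second

# Ratio of the CD data rate to some common MP3 bit rates (kbit/s)
ratios = {kbps: CD_BITRATE / (kbps * 1000) for kbps in (128, 192, 256)}
for kbps, ratio in ratios.items():
    print(f"{kbps} kbit/s MP3 -> roughly {ratio:.1f}:1 compression")
```

The common bit rates land in or near that 5:1-to-10:1 range, which is why a CD's worth of music could suddenly fit in a few dozen megabytes.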

With smaller MP3 files, it became possible to transfer music even on slow dial-up internet connections. 

Because digital files can be copied without any degradation of the original, the sharing of digital music became rampant. In 1999, a service called Napster was launched to organize this sharing. 

Napster allowed for peer-to-peer sharing of MP3 files; anyone could share music with anyone else. For a short time, it revolutionized music, and it didn’t make the music companies happy. 

It came as no surprise that Napster was shut down in 2001.

However, that was hardly the end of digital music and the internet. At about the same time Napster shut down, Apple released the iPod. The iPod wasn’t the first MP3 player, but it popularized digital music that didn’t require a disc.

This opened the door to direct digital music sales and, eventually, to all-you-can-listen digital music streaming. One of the largest players in music streaming, Spotify, was founded in 2006.

Of course, digital audio wasn’t just about music. It allowed anyone to record audio and to distribute it globally with close to zero cost. 

As early as 2000, I was doing streaming from my desktop computer on a program called Winamp. I’d stream commentary playing EverQuest as well as music. At that time, I was also part of an online streaming show called All Game Radio that had hundreds of people listening simultaneously. 

This was all before the launch of podcasting in 2003, the history of which I have covered in a previous episode. 

I’d like to conclude with one thing that caused a lot of controversy over the years. 

Is analog sound superior to digital? 

There are some audiophiles who insist that analog recordings are superior to digital. They will often say it sounds “warmer,” whatever that means. 

This is a subject that has been tested to death, and obviously, there are many factors that go into the quality of a recording. 

However, time and time again, it has been found that people cannot tell the difference between digital and analog sound, especially at high sample rates and bit depths. Even self-proclaimed audiophiles can’t tell the difference better than flipping a coin. 

The fact is, if the sample rate and bit depth are high enough, whatever difference exists is beyond the capability of humans to notice. 

While there has been a small resurgence in vinyl records, most of the reason for the renaissance has to do with the physical aspect of owning a record, not the sound. 

The quality, affordability, and ease of use will ensure that digital audio will be with us for a very long time. 

The Executive Producer of Everything Everywhere Daily is Charles Daniel.

The associate producers are Peter Bennett and Cameron Kieffer.

Today’s review comes from listener The 1 and only Smarty Pants over on Apple Podcasts in the United States. They write:


Have you ever considered doing an episode on fencing and the history of it? Great podcast.

Thanks, Smarty Pants! I think fencing would be a great topic for a podcast. Fences and fence building, aka fencing, have played an important role in history. Now that I think about it, I could also do an episode on selling stolen goods and competitive sword fighting.

Remember, if you leave a review or send me a boostagram, you too can have it read on the show.