Podcast Transcript
If you’ve been around long enough, and by that, I only mean a couple of years, you have probably observed the one fundamental truth about computers: they always get faster.
While games and web browsing might seem faster, the average person's everyday computer use doesn't really reflect just how much more powerful computers have become.
In particular, for several decades, supercomputers have been developed that are vastly more powerful than what is on your desk or in your pocket. Unless, that is, you make comparisons over time…
Learn more about supercomputers, the evolution of computing power, and how your computer stacks up to supercomputers of the past on this episode of Everything Everywhere Daily.
The genesis of this episode has to do with a fact that I have heard many times: that your smartphone is more powerful than the computer that landed the astronauts on the moon.
I didn’t really doubt the fact, but I was curious as to the magnitude of the truth. Just how much more powerful are today’s smartphones compared to the computers used in the Apollo Lunar Module?
That led me down a rabbit hole of the most powerful computers in history, what we would today call supercomputers, and the shocking increase in computing power since the era of computers began.
Before we get into these powerful computers, we first have to have a way to compare one computer with another.
Many different benchmarks and metrics have been used throughout history. If you ever look at a review of a computer, you will see a host of tests that are run to evaluate performance.
What we need is something that can be used over time for many different types of computers.
In the past, you might have heard of computers that bragged about the number of transistors in a processor. In the 1990s and 2000s, many computers bragged about clock speed in terms of mega- or gigahertz. Today, computers talk about the number of cores in their CPUs.
All of these things factor into computing power.
The metric usually used to measure computing power across all manner of computers is FLOPS, which stands for Floating Point Operations Per Second.
The question you now might have is, what is a floating point operation?
A floating-point operation is a computation involving floating-point numbers. Floating-point numbers represent real numbers that can have a fractional part, like 3.14 or -0.001, as opposed to integers, which are whole numbers.
Basically, FLOPS is the number of mathematical calculations a computer can perform each second, regardless of the number of transistors, cores, or clock speed.
There are other benchmarks out there for measuring real-world performance, but this is a very good way to compare different types of computers across different periods of time.
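If you want a concrete feel for what that means, here is a minimal Python sketch of a floating-point operation and a very crude FLOPS estimate made by timing a loop. This is only an illustration of the idea, not a real benchmark; serious measurements such as LINPACK time large matrix problems, and a pure interpreted loop like this will report far less than the hardware's actual peak.

```python
import time

# A floating-point operation is just arithmetic on numbers with fractional parts.
a, b = 3.14, -0.001
result = a * b + a           # two floating-point operations: one multiply, one add

# A very crude FLOPS estimate: time a loop that performs two operations per pass.
n = 10_000_000
start = time.perf_counter()
x = 0.0
for _ in range(n):
    x = x * 1.0000001 + 1.0  # one multiply and one add per iteration
elapsed = time.perf_counter() - start

print(result)
print(f"Roughly {2 * n / elapsed:,.0f} FLOPS from a pure-Python loop")
```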
I’m not going to cover every computer that came out over the years, but rather highlight several computers that illustrate the growth in computing power.
So, with that, let’s start with the power of the very first computer, the Electronic Numerical Integrator and Computer, or ENIAC, which was built in 1946.
ENIAC was the first general-purpose electronic computer, and it was built using vacuum tubes, before transistors or integrated circuits were invented. It was built at the University of Pennsylvania and was used to speed up calculations for artillery firing tables and engineering projects.
ENIAC had a power of approximately 500 FLOPS. I’ve seen estimates that were lower and higher, but as we’ll see, that difference is inconsequential considering the comparisons we’ll be making.
Now let’s move forward to 1952, when IBM released the IBM 701. While IBM had been in the business machine game for decades, this was IBM’s first commercial scientific computer, used primarily by government and scientific institutions.
The 701 had a power of approximately 16,000 FLOPS, or 16 kiloflops. In just six years, computing power had increased roughly 32-fold.
If you think a 32-fold increase is impressive, in the words of Bachman-Turner Overdrive, you ain’t seen nothin’ yet.
In 1961, IBM released the IBM 7030 Stretch, which used transistors instead of vacuum tubes. This technical change ushered in a massive jump in computing power.
The 7030 Stretch was one of the first computers to achieve one million flops, or one megaflop.
Just three years later, in 1964, the Control Data Corporation released its flagship computer, the CDC 6600.
The CDC 6600 was designed by a Control Data employee named Seymour Cray, a name we will hear more of in a moment.
The CDC 6600 had a computing power of 3 megaflops, a threefold increase in power over the IBM 7030 Stretch. This was the first computer ever to be called a supercomputer.
Unlike the IBM 7030 Stretch, it used newer silicon transistors, and its streamlined central processor was supported by a set of smaller peripheral processors.
The cost at the time was $2,370,000. Adjusted for inflation, that would be about $24 million today.
The CDC 6600 held the title of the world’s fastest computer for five years, until it was replaced by the CDC 7600, which reached a performance of 10 megaflops.
In 1972, Seymour Cray left Control Data Corporation to start his own company, Cray Research, and later the Cray Computer Corporation.
In 1975, Cray released the Cray-1 supercomputer. It was a revolutionary new machine and a massive jump in computing power. It had a computing power of 160 megaflops, making it the first computer to break the 100-megaflop barrier.
It was sold for $7.9 million in 1977, which would be over $40 million today.
The Cray-1 was very successful, with over 100 units sold to organizations such as Los Alamos National Laboratory.
With every new model of supercomputer, Seymour Cray attempted to achieve at least a tenfold improvement in computing power.
He did just that with the release of the Cray-2 in 1985.
The Cray-2 introduced liquid immersion cooling to improve performance, and it had a computing power of 1.9 billion FLOPS, or 1.9 gigaflops.
It sold for $16 million at its release, which would be roughly $45 million today.
Cray lost the title of most powerful computer in 1986 when ETA Systems, a spinoff of Control Data Corporation, released the ETA-10. The ETA-10 had a performance of 10 gigaflops.
The ETA-10 was powerful, but it didn’t sell well.
In 1992, Intel released the Intel Paragon. The Paragon was different in that it was a massively parallel computer: it used a large number of simpler chips and divided problems up among its many processors.
The Intel Paragon had a power of 143 gigaflops.
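The core idea behind a massively parallel machine can be sketched in a few lines: split one big calculation into chunks, hand each chunk to a separate processor, and combine the partial results at the end. This toy Python example uses the same divide-and-combine pattern; it is not the Paragon’s actual software, just an illustration of the principle.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))   # each worker handles one slice of the problem

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine the partial results
    print(total)
```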
In 1997, the US government’s Accelerated Strategic Computing Initiative, or ASCI, unveiled the ASCI Red computer. It was designed to simulate nuclear weapons tests, as underground testing had been banned by treaty.
It had a computational power of 1.8 trillion FLOPS, or 1.8 teraflops, becoming the first supercomputer to break the teraflop barrier.
Five years later, in 2002, the NEC corporation in Japan created the Earth Simulator, which had a power of 35 teraflops.
Six years later, in 2008, IBM retook the title of the world’s most powerful computer with the IBM Roadrunner. They didn’t just break the 100 teraflop barrier. The Roadrunner had a power of 1,105 trillion flops or 1.1 quadrillion FLOPS, or, to put it most succinctly, 1.1 petaflops.
In 2011, the K Computer by the Fujitsu Corporation reached 10.51 petaflops.
In 2018, the IBM Summit reached 200 petaflops.
The current reigning champion for the most powerful computer in the world, at the time of my recording this episode, is the Frontier supercomputer operated by Oak Ridge National Laboratory, which became operational in 2022.
It has a theoretical top performance of 1.1 quintillion FLOPS or 1.1 exaflops.
So, just to summarize and put this into perspective, the world’s most powerful computer today, Frontier, is more than two quadrillion times as powerful as the ENIAC, which was released almost 80 years ago.
The improvement in performance is so great that we have to resort to numbers such as quadrillions and quintillions, which we otherwise never have to use.
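For anyone who wants to check that comparison, the arithmetic uses only the figures quoted in this episode, and it lands squarely in the quadrillions:

```python
eniac_flops = 500         # ENIAC, 1946: ~500 FLOPS (estimates vary; this is the figure used above)
frontier_flops = 1.1e18   # Frontier, 2022: about 1.1 exaflops

print(f"{frontier_flops / eniac_flops:.2e}")  # about 2.2e15, i.e. a couple of quadrillion times
```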
I should add that most of the supercomputers of the last several decades have really been batteries of smaller computers. In fact, if you look at photos of the Frontier supercomputer, it looks very similar to what you would find in a server room or a server farm, the likes of which run large websites like Netflix or Facebook.
The big difference is that a supercomputer is designed to tackle a single problem.
With all of this, I want to go back to my original question at the top of the episode.
Let’s start with the power of the computer that was used to land on the moon.
The computer used in the Apollo Program was the Apollo Guidance Computer. It was created to meet the specific parameters of the Apollo program with regard to power and weight. It got the job done, but it wasn’t very powerful.
The Apollo Guidance Computer had a total computing power of 85 kiloflops. So it was more powerful than the 1952 IBM 701, but dramatically less powerful than the 1961 IBM 7030 Stretch.
Most of us will never use a supercomputer, and certainly not the Apollo Guidance Computer. Although, as an aside, there are some people in this audience who probably do use supercomputers.
Most of you are familiar with personal computing devices of some sort. So how do they stack up against the supercomputers of the past?
The Apple I personal computer, released in 1976, had a power of 60 kiloflops. Not quite as powerful as the Apollo Guidance Computer, but it was something computer enthusiasts could use.
The IBM PC, which many consider the first real personal computer, was released in 1981 and had a power of 330 kiloflops. It was not as powerful as the IBM 7030 Stretch released 20 years earlier, but it sold for $1,565 instead of $7.7 million.
The original Intel Pentium computer, released in 1993, had a power of approximately 60 megaflops, depending on configuration. This would have put it on a par with an early 1970s supercomputer, again, at a fraction of the price.
The 1997 Pentium II had a power of approximately 350 megaflops, again depending on configuration, and the 1999 Pentium III had a power of 1.3 gigaflops.
Once again, there is a lag of roughly fifteen years between the Cray-2 and the Pentium III.
In the 2000s, Intel began to sell multi-core processors. The 2006 Intel Core 2 Duo could achieve a power of 20 gigaflops, twice as powerful as the 1986 ETA-10 supercomputer. More powerful, but still within an order of magnitude.
The 1992 Intel Paragon supercomputer achieved 143 gigaflops. Twenty years later, in 2012, the Intel Core i7 achieved approximately 100 gigaflops.
The 2020 Apple M1 processor, on which I record this podcast, has a power of about 2.6 TFLOPS. That is more powerful than the 1997 ASCI Red supercomputer.
So, there has been roughly a 20-year lag between supercomputers and desktop computers. That lag is getting slightly longer because the rate of growth in supercomputer power increased once they became massively parallel; designers could simply throw more smaller computers at the problem to increase power.
What about the smartphone in your pocket? Are you carrying a supercomputer around with you?
The first Apple iPhone, released in 2007, had a power of 5 megaflops.
The power increased quickly. By the iPhone 4 in 2010, it had reached one gigaflop.
Two years later, in 2012, the Samsung Galaxy S3 reached ten gigaflops, and five years after that, in 2017, the iPhone X reached 200 gigaflops.
My personal smartphone, an Apple iPhone 15 Pro Max, Apple’s current model as of the time of recording, has a computing power of 2.15 teraflops.
The Samsung Galaxy S23 Ultra has a similar amount of computing power.
So, do you have a supercomputer in your pocket?
The original Jurassic Park movie, released in 1993, had its CGI effects created on three Cray X-MP supercomputers, each of which had a power of about 800 megaflops.
So a modern smartphone has nearly a thousand times the combined computing power of the supercomputers that made the movie.
As for the computer that landed on the moon, whatever desktop computer or smartphone you have is vastly more powerful. An iPhone 15 Pro is about 25 million times more powerful than the Apollo Guidance Computer.
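And if you want to verify those last two comparisons yourself with the numbers quoted in this episode, the back-of-the-envelope arithmetic looks like this:

```python
iphone_flops = 2.15e12       # iPhone 15 Pro Max: about 2.15 teraflops
cray_xmp_trio = 3 * 800e6    # three Cray X-MPs at roughly 800 megaflops each
agc_flops = 85_000           # Apollo Guidance Computer: about 85 kiloflops

print(f"{iphone_flops / cray_xmp_trio:,.0f}x the Jurassic Park machines")   # roughly 900x
print(f"{iphone_flops / agc_flops:,.0f}x the Apollo Guidance Computer")     # roughly 25 million x
```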
If there is now an approximately 25-year lag between the world’s most powerful supercomputer and a smartphone, does that mean that in the year 2049, everyone will have a smartphone that can perform over an exaflop of calculations per second?
Maybe. In fact, probably.
To say that computing power has increased “a lot” is actually a bit of an understatement. Over the last 80 years computing power has exploded, which is why we now all carry supercomputers in our pockets.