Moore’s Law



Podcast Transcript

In 1965, the director of research at Fairchild Semiconductor, Gordon Moore, made a prediction about the future of semiconductors. He said that over the next ten years, the number of transistors on an integrated circuit would double every year.

His prediction didn’t just hold true for the next ten years. In revised form, it has held true for almost 60 years, and it has driven the global computer industry.

Learn more about Moore’s Law and why computers keep getting better, on this episode of Everything Everywhere Daily.


Many of you are probably familiar with Moore’s Law or have at least heard of it. For those of you who aren’t familiar with it, let me restate what it is.

Moore’s Law stipulates that the number of transistors on an integrated circuit will double approximately every two years. 

Despite the name, Moore’s Law isn’t a physical law of nature like the laws of thermodynamics. Moore’s Law is more of an observation and a prediction than a hard and fast rule.

In fact, as we’ll see in a bit, Moore’s Law will eventually have to end, because if you keep doubling things, the numbers eventually collide with the laws of physics.

Before we get into that, we need to know where Moore’s Law came from, and for that, we need to go back to the start of computing and vacuum tubes.

The huge computers that existed after World War II all used vacuum tubes. Vacuum tubes were about the size of a light bulb, give or take, and were used to perform basic electronic functions such as amplifying signals and switching electric currents. They were big and used a lot of power.

They were improved upon with the development of the transistor in 1947, which was much smaller and consumed much less power. A transistor could be as small as a safety pin or a button. Transistors allowed electronic devices like radios to shrink from the size of living room appliances to something you could hold in your hand.

The next big advance was the integrated circuit, demonstrated in 1958 by Jack Kilby of Texas Instruments and, independently, shortly afterward by Robert Noyce of Fairchild Semiconductor. Both realized you could create all the parts of a circuit on a single piece of a semiconductor such as germanium or silicon. A semiconductor is, as the name suggests, a material with a level of electrical resistance between that of a conductor and an insulator.

One of the first companies to commercially take advantage of both transistors and integrated circuits was Fairchild Semiconductor, which was located in Palo Alto, California. Fairchild was one of the big reasons why Santa Clara Valley became known as Silicon Valley.

One of the co-founders of Fairchild was Gordon Moore. His position in the company was director of research. 

In the early 1960s, there were several advances in integrated circuits which had profound implications. Perhaps the biggest was the creation of the MOSFET or metal–oxide–semiconductor field-effect transistor. This was created at Bell Labs, which I did a previous episode on.

It became apparent to many people that these advances would allow many very small transistors to fit on a single silicon chip and consume significantly less power than the circuits that came before them.

In 1965, Moore was asked to contribute an article to Electronics magazine for its 35th anniversary. His subject was to be the future of the semiconductor industry.

He wrote a very short article titled “Cramming More Components Onto Integrated Circuits”. In this article, he wrote, 

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.

The implication of Moore’s prediction was that by the year 1975, there would be as many as 65,000 transistors on a single integrated circuit. 
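The arithmetic behind that figure is straightforward. Here is a quick sketch in Python, assuming (as Moore’s own 1965 chart did) a starting point of roughly 64 components:

```python
# Moore's 1965 extrapolation: roughly 64 components, doubling every year.
components_1965 = 64                         # about 2^6, the state of the art
components_1975 = components_1965 * 2 ** 10  # ten doublings over ten years

print(f"{components_1975:,}")                # 65,536 -- about 65,000
```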

When 1975 rolled around, Moore was now the president of a new computer chip manufacturer called Intel, and his prediction had proved to be accurate. That same year, he revised his forecast: going forward, the number of transistors would double approximately every two years.

It was at this time that a Caltech professor by the name of Carver Mead coined the term “Moore’s Law”.  

While Moore’s Law wasn’t really a law, it did set expectations and goals for the entire computer industry that computing power would grow exponentially.

I want to take an aside here to explain just how insanely powerful exponential growth is. Moore’s Law states that the number of transistors would DOUBLE every two years. 

Imagine you have a chessboard. On the first square of the chessboard, you put a single grain of wheat. On the second square, you put two grains of wheat, and on the third square, you put four grains of wheat. You continue doubling the number of wheat grains on each square from 8 to 16 to 32, etc. 

How many wheat grains will be on the chessboard by the time you get to the last square?

The answer is over 18 quintillion, which is over 2,000 times the annual global production of wheat. 
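If you want to check that number yourself, the chessboard arithmetic fits in a couple of lines of Python:

```python
# One grain on the first square, doubling on each of the 64 squares.
total_grains = sum(2 ** square for square in range(64))  # equals 2^64 - 1

print(f"{total_grains:,}")  # 18,446,744,073,709,551,615
```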

Exponential growth is insanely powerful. 

Some people grasped the implications of Moore’s Law very early on. If processing power kept doubling every two years, that meant an order of magnitude increase in power approximately every 7 years. 
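That seven-year figure falls straight out of the doubling math: a tenfold increase takes log2 of 10, or about 3.3 doublings, and at two years per doubling that works out to roughly 6.6 years. A quick check:

```python
import math

years_per_doubling = 2             # Moore's Law doubling period
doublings_for_10x = math.log2(10)  # about 3.32 doublings for a 10x gain
years_for_10x = years_per_doubling * doublings_for_10x

print(round(years_for_10x, 1))     # 6.6 -- call it every 7 years
```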


It also meant that the cost per transistor would quickly be driven down to near zero. 

That meant that expensive mainframe computers which took up an entire room would one day fit on a desk, and maybe even in your pocket. 

As it turned out, Moore’s Law surpassed even the expectations of Moore himself. Initially, he didn’t think the doubling would continue past the mid-1970s, and even after revising his prediction, he didn’t think the growth would continue past the early 1980s.

However, it has never really stopped. Technological development has kept pace with Moore’s Law right up to the present day.

The number of transistors on an integrated circuit went from about 64 in 1965 to around 60 billion today. In fact, some ultra-large specialty chips can now be created with 2.6 trillion transistors.
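Those two endpoints let us check how closely reality has tracked the prediction. A rough sketch, taking 64 transistors in 1965 and 60 billion in 2021 as assumed endpoints:

```python
import math

transistors_start = 64        # state of the art when Moore wrote his article
transistors_now = 60e9        # a modern high-end chip
years_elapsed = 2021 - 1965   # an assumed span of 56 years

doublings = math.log2(transistors_now / transistors_start)  # about 29.8
print(round(years_elapsed / doublings, 2))                  # about 1.88 years
```

That comes out to one doubling roughly every 1.9 years, remarkably close to the two-year pace of the revised law.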

The cost of a transistor has correspondingly gone down exponentially as well. A single transistor in 1960 might have cost $8 in inflation-adjusted currency. Today, the price of a single transistor would be approximately one ten-billionth of a dollar. 

How does this work? How have they been able to keep doubling the number of transistors on a chip every two years?

There is no one secret. All of the advancements made over the last 55 years since Moore’s prediction have been the cumulative result of thousands of incremental improvements. 

One of the big things has been the ability to shrink the size of transistors. In 1970, the feature size of a transistor was approximately 10 microns, or micrometers. The transistors on the M1 chip developed by Apple, which you can buy today, are built on a 5-nanometer process. That is a 2,000-fold improvement in linear size.
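And because transistors are laid out in two dimensions, a 2,000-fold linear shrink compounds into roughly 2,000 squared, or four million times, as many transistors in the same area. (Modern node names like “5 nanometers” are marketing labels more than literal dimensions, but the arithmetic illustrates the scale.)

```python
feature_1970_nm = 10_000   # 10 microns, expressed in nanometers
feature_today_nm = 5       # the "5 nanometer" process behind Apple's M1

linear_shrink = feature_1970_nm / feature_today_nm  # 2,000x smaller per side
density_gain = linear_shrink ** 2                   # area scales as the square

print(f"{linear_shrink:,.0f}x linear, {density_gain:,.0f}x density")
# 2,000x linear, 4,000,000x density
```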

One of the techniques developed to create ever-smaller transistors is called photolithography, in which light is used to project circuit patterns onto a silicon chip. However, as transistors got smaller, manufacturers ran into a problem: features became smaller than the wavelength of the ultraviolet light used to print them. The most advanced systems today get around this by using extreme ultraviolet light, which has a wavelength of just 13.5 nanometers.

This is on top of advancements in materials science to make better substrates for chips, and even the development of ultra-pure clean rooms, because a single grain of dust landing on a chip could ruin it.

The exponential growth of Moore’s Law will eventually bump into physics. The width of a silicon atom is only about 0.2 nanometers, and there are chips in development that will be built on 3-nanometer processes. There is only so much more that can be done to shrink the size of transistors.

However, while the literal interpretation of Moore’s Law might be near its end, that doesn’t mean computing power can’t keep increasing. 

One of the assumptions behind Moore’s Law is that an integrated circuit has a two-dimensional layout. There is no reason why that has to be the case.

Moore’s Law so far has been about adding more rooms to the single story of a building. You can add more rooms by just adding more floors to the building. 3D processors could increase the performance of current computer chips by over 1,000 fold, and it wouldn’t necessarily require smaller transistors.

Likewise, performance could increase by completely rethinking what a computer is. Instead of an integrated circuit, it could be something like a quantum computer, the discussion of which I’m going to leave for a future episode. 

In addition, we could see breakthroughs in materials, such as high-temperature superconductors, and maybe even computers that use light instead of electricity. There has also been a movement toward creating single-purpose chips that do one thing and do it very well.

You might be wondering: if we know that computing power is going to double every two years, why don’t we just jump ahead a couple of iterations?

The answer is, it doesn’t work that way. Back in the 1980s, the Japanese government tried to do just that. They created a moonshot effort called the Fifth Generation Computer Systems project, which was designed to leapfrog all known computer technology and create something entirely new.

The project was a dismal failure. 

As I mentioned before, advancements in computing power don’t come about from one big innovation. They come about from many small innovations.

Back in the 1990s, engineers at Intel had no clue how they could create transistors smaller than 100 nanometers. Many of them felt it was a physical barrier that couldn’t be overcome.

Needless to say, it was possible, and the transistors in the computer chip you are using to listen to my voice right now are probably far smaller than that.

All these incremental improvements which help keep Moore’s Law alive do come at a cost, however. 

There is a corollary to Moore’s Law which is often known as Moore’s Second Law. 

Moore’s Second Law states that the price of a chip fabrication plant doubles every four years. 

Today, a new chip fabrication facility can cost $16 billion to build, and future facilities are being quoted at $20 billion.

If you do the math, with the cost of a facility doubling every four years and the power of a processor doubling every two years, the cost of computing power still goes down exponentially.
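A minimal sketch of that math: over any four-year window, the facility cost doubles once while the transistor count doubles twice, so the fabrication cost per transistor is still cut in half every four years.

```python
# Track the ratio of facility cost to transistors per chip, starting at 1.0.
relative_cost_per_transistor = 1.0

for year in (4, 8, 12):
    relative_cost_per_transistor *= 2 / 4  # fab cost x2, transistors x4
    print(f"after {year} years: {relative_cost_per_transistor:.3f}x")
# after 4 years: 0.500x / after 8 years: 0.250x / after 12 years: 0.125x
```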

You might have heard news reports recently about a shortage of computer chips. This is indirectly a result of Moore’s Second Law. 

The number of facilities in the world that can create state-of-the-art computer chips is shockingly small, and many of them are located in Taiwan. 

The creation of chip fabrication facilities requires special machines which are only made by a few companies on the planet, and they are really expensive.

That is why “build more chip fabrication plants” is something that is easier said than done. To do so requires tens of billions of dollars, and the facilities take years to build because they are so complex. 

Gordon Moore is still around at the age of 92. He himself has been astonished that his prediction for the growth of computing power has held up for as long as it has. 

Even if Moore’s Law breaks down, which it eventually has to, there is still room to increase the power of computers for decades to come.