Podcast Transcript
One of the most important inventions of the 20th century was the transistor.
Prior to the transistor, electronic devices were bulky and dependent on vacuum tubes. Vacuum tubes were large, fragile, power-hungry, and prone to failure.
The transistor not only replaced the vacuum tube in many applications but also enabled the miniaturization and reliability required for modern electronics, including computers, phones, and spacecraft.
Learn more about transistors, how they work, and how they were invented on this episode of Everything Everywhere Daily.
The late Mitch Hedberg had a joke that went, “Rice is great if you’re really hungry and want to eat two thousand of something.”
Instead of thousands, think of something you own not just thousands of, but millions or even billions of.
The answer is transistors. Modern computer chips contain billions of them, and depending on how many devices you own, you might even own trillions.
The path to devices with billions of transistors started with the development of a single one.
This, of course, raises the question: What exactly is a transistor, and what does it do?
Before I get into what a transistor is, I should start with the technology that transistors were created to replace: vacuum tubes.
The vacuum tube was invented in the early 20th century as a breakthrough in controlling electrical signals. Its origins trace back to Thomas Edison’s discovery in 1883 of thermionic emission, the release of electrons from a heated filament, though Edison did not understand the implications of his discovery.
Building on this, John Ambrose Fleming invented the first true vacuum tube in 1904, called the Fleming valve, which acted as a diode and was used to detect radio signals.
A diode is an electronic component that allows electric current to flow in only one direction, acting as a one-way valve for electricity.
In 1906, Lee De Forest added a third element—a control grid—creating the triode, which could amplify weak electrical signals. This development revolutionized electronics, making long-distance telephony, radio broadcasting, and later computing possible.
Vacuum tubes served as the fundamental building blocks of electronic devices from the early 1900s until the mid-20th century. Their primary purpose was to control the flow of electric current, making them essential for three key functions: amplification, switching, and rectification.
Vacuum tubes could take a weak electrical signal and amplify it, making it stronger. This was vital for radios to boost faint signals from distant stations so they could drive a speaker.
Vacuum tubes could also act as electronic switches, which are necessary for binary logic in computers. A slight change in voltage could switch a much larger current on or off.
This allowed early computers, like the ENIAC, to perform calculations using thousands of vacuum tube switches to represent binary digits.
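To make the idea of switches representing binary digits a little more concrete, here is a minimal Python sketch of my own (not anything from the episode or from a real machine): it treats each tube or transistor as an ideal on/off switch and wires a few of them into basic logic gates, then uses those gates to add two binary digits. The function names and structure are illustrative assumptions.

```python
# Treat each tube or transistor as an ideal on/off switch:
# "closed" (True) when its control input is high, "open" (False) when low.

def not_gate(a: bool) -> bool:
    # One switch that pulls the output low whenever the input is high.
    return not a

def and_gate(a: bool, b: bool) -> bool:
    # Two switches in series: current flows only if both are closed.
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    # Two switches in parallel: current flows if either is closed.
    return a or b

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    # Adding two binary digits: sum = a XOR b, carry = a AND b.
    sum_bit = or_gate(and_gate(a, not_gate(b)), and_gate(not_gate(a), b))
    carry = and_gate(a, b)
    return sum_bit, carry

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            s, c = half_adder(a, b)
            print(f"{int(a)} + {int(b)} = carry {int(c)}, sum {int(s)}")
```

Chain enough of these switch-built gates together and you have the arithmetic machinery of a computer, which is exactly what ENIAC did with thousands of tubes.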
Finally, vacuum tubes could convert alternating current into direct current, a process called rectification. This was crucial in power supplies for radios, televisions, and other devices.
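As a rough illustration of rectification (again, my own sketch rather than anything described in the episode), the snippet below models an ideal diode as a one-way valve and passes a 60 Hz sine wave through it. The amplitude and sample count are arbitrary assumptions chosen only to make the printout readable.

```python
import math

def ideal_diode(voltage: float) -> float:
    """One-way valve: pass positive voltage, block negative voltage."""
    return voltage if voltage > 0 else 0.0

def half_wave_rectify(amplitude: float = 170.0,  # roughly the peak of 120 V household AC
                      freq_hz: float = 60.0,
                      samples: int = 8) -> None:
    """Sample one cycle of an AC waveform and pass it through the diode."""
    period = 1.0 / freq_hz
    for i in range(samples):
        t = i * period / samples
        v_in = amplitude * math.sin(2 * math.pi * freq_hz * t)
        v_out = ideal_diode(v_in)
        print(f"t={t * 1000:5.2f} ms  in={v_in:8.1f} V  out={v_out:6.1f} V")

if __name__ == "__main__":
    half_wave_rectify()
```

The negative half of every cycle is simply blocked, which is the first step a power supply takes when turning wall current into the direct current a radio or television needs.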
As crucial as vacuum tubes were, they had severe drawbacks. They consumed a lot of power, generated heat, were physically large, and burned out quickly. As the demand for faster and more compact electronics grew, particularly during and after World War II, it became clear that a better alternative was needed.
It turned out the answer lay in a discovery made in the 19th century. In 1874, German physicist and electrical engineer Karl Ferdinand Braun discovered that certain crystalline materials could conduct electricity in only one direction.
This phenomenon, called rectification, laid the groundwork for understanding semiconductors. Braun noticed that metal contacts on crystals, such as lead sulfide, created what we now call a “crystal detector.” It was essentially a primitive diode.
In the early 1900s, these crystal detectors became crucial components in radio receivers. Engineers would use a thin wire called a “cat’s whisker” to make contact with a crystal, creating a device that could detect radio waves.
While primitive, these devices demonstrated the fundamental principle that would later enable transistors: the ability to control electrical current through carefully engineered materials.
The theoretical understanding of this phenomenon deepened in the 1920s and 1930s as quantum mechanics emerged. Scientists began to comprehend why certain materials behaved as semiconductors.
These materials had electrical properties that fell between conductors like copper and insulators like glass. This understanding proved essential for the deliberate engineering of semiconductor devices.
The transistor’s birth occurred at a place familiar to regular listeners of this podcast: Bell Labs… which, if you remember, invented everything.
It was there that three physicists, John Bardeen, Walter Brattain, and William Shockley, were investigating semiconductors in search of a replacement for vacuum tubes.
From a business standpoint, Bell Telephone needed something more reliable for its expanding telephone network.
On December 16, 1947, Bardeen and Brattain achieved a breakthrough. They placed two gold contacts very close together on a germanium crystal, with the crystal mounted on a metal base. When they applied voltage to one contact, they discovered they could control a much larger current flowing between the other contact and the base.
They had created the first point-contact transistor.
Think of this as controlling a large water valve with a small handle. A tiny signal could control a much larger flow. This amplification property made the transistor revolutionary.
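To put a toy number on the small-handle, big-valve analogy, here is a hedged sketch of my own (not a model of the actual Bell Labs device): a tiny input current is scaled by an assumed gain and clipped at an assumed supply limit, since no amplifier can deliver more current than its power source provides.

```python
def amplify(input_current_ma: float,
            gain: float = 100.0,
            supply_limit_ma: float = 500.0) -> float:
    """Toy amplifier: the output follows the input, scaled by the gain,
    but can never exceed what the power supply can deliver."""
    return min(input_current_ma * gain, supply_limit_ma)

# A small control signal steering a current up to 100 times larger:
for signal_ma in (0.5, 1.0, 2.0, 10.0):
    print(f"input {signal_ma:4.1f} mA -> output {amplify(signal_ma):6.1f} mA")
```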
The problem was that the point-contact transistor was fragile and difficult to manufacture consistently.
William Shockley, initially frustrated at being excluded from his colleagues’ breakthrough, worked to understand the underlying physics and develop a more practical design.
In 1948, he invented the junction transistor, which used layers of differently treated semiconductor material instead of point contacts. This design proved far more stable and easier to manufacture.
Shockley, Brattain, and Bardeen were awarded the Nobel Prize in Physics in 1956 for their work on the development of the transistor.
Just as a side note, John Bardeen won a second Nobel Prize in 1972 for his work on superconductivity, making him one of only five people ever to have been awarded two Nobel Prizes.
The key insight of the Bell Labs team was that transistors operate through the movement of electrons and “holes,” which are spaces where electrons are missing in specially treated semiconductor materials.
By carefully controlling the purity and treatment of these materials, engineers can create devices that switch between conducting and non-conducting states millions of times per second.
The transition from laboratory curiosity to commercial product required solving numerous manufacturing challenges. Bell Labs initially used germanium in its transistors, but this material had limitations; it was sensitive to temperature and difficult to purify consistently.
The manufacturing process involved growing single crystals of germanium and then carefully adding tiny amounts of impurities, called doping, to create the necessary electrical properties.
The first commercial transistor applications appeared in hearing aids around 1952. These devices benefited enormously from the transistor’s small size and low power consumption compared to vacuum tubes. The transistor industry experienced rapid growth, with companies such as Texas Instruments, Fairchild, and Motorola entering the market.
A crucial breakthrough came with the development of silicon transistors in the mid-1950s. Silicon offered several advantages over germanium: it was more abundant, could operate at higher temperatures, and formed a stable oxide layer that would later prove invaluable in manufacturing.
Gordon Teal at Texas Instruments pioneered the manufacturing of silicon transistors, creating devices that could withstand the harsh conditions in military and industrial applications.
Transistors changed almost all electronics. Transistors were small, cheap, and much more durable than vacuum tubes because they were solid-state.
Consider what it did to radios. Prior to the development of transistors, commercial radios were large. They were often the size of an appliance or a piece of furniture. A family would gather around the radio in the evening because the radio was too large to be brought to them.
New radios, called transistor radios, were very small and portable. Now it was possible to put a radio in your pocket and take it to the beach or to a ball game, where you could listen to the announcers while you watched the game.
Car radios, first introduced in the 1920s, became more affordable and widespread in automobiles.
However, the simple transistor developed at Bell Labs was only the start.
The next major leap came when engineers realized they could fabricate multiple transistors on a single piece of semiconductor material. Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor independently invented the integrated circuit in 1958 and 1959.
The integrated circuit solved a growing problem called the “tyranny of numbers.” As electronic devices became more complex, requiring thousands of transistors, the task of connecting them all with individual wires became overwhelming. The integrated circuit allowed manufacturers to create all the transistors and their connections simultaneously using photographic and chemical processes in one compact package.
This development led to the creation of the planar process, which utilized flat silicon wafers instead of individual crystals. The planar process, developed primarily at Fairchild, enabled the simultaneous manufacture of thousands of identical transistors, dramatically reducing costs and improving reliability.
The logical extension of placing multiple transistors on a single chip was to create complete computing systems. In 1971, Intel released the 4004, the world’s first microprocessor, which contained approximately 2,300 transistors on a single chip. This device could perform the same calculations as room-sized computers from the 1940s.
The microprocessor represented a fundamental shift in how we think about computation. Instead of building specialized hardware for each task, engineers could now create general-purpose processors that could be programmed to perform virtually any calculation. This flexibility unleashed an explosion of innovation in computing applications.
Throughout the 1970s and 1980s, the semiconductor industry followed what became known as Moore’s Law, the observation that the number of transistors on a chip doubles approximately every two years, a subject I covered in a previous episode.
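A quick back-of-the-envelope calculation shows how that doubling compounds. Starting from the 4004's roughly 2,300 transistors in 1971 and doubling every two years is an idealization, not a precise history of any product line, but it lands in the right neighborhood of today's chips.

```python
def moores_law(target_year: int,
               start_year: int = 1971,
               start_count: int = 2_300,
               doubling_years: float = 2.0) -> float:
    """Project a transistor count forward, doubling every `doubling_years`."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{moores_law(year):,.0f} transistors")
```

By 2021 this idealized curve passes tens of billions of transistors per chip, which is roughly where real processors sit today.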
A crucial development was the widespread adoption of Complementary Metal-Oxide-Semiconductor (CMOS) technology in the 1980s. CMOS transistors consume power only when switching between states, making them ideal for battery-powered devices. This technology became the foundation for modern microprocessors, memory chips, and virtually all digital electronics.
The basic CMOS approach uses pairs of transistors – one that conducts when the input is high, and another that conducts when the input is low. This complementary design ensures that current flows only during transitions, resulting in a dramatic reduction in power consumption compared to earlier technologies.
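Here is a minimal sketch of that complementary idea, a simplified model of a single CMOS inverter rather than a real circuit simulation: a pull-up device that conducts when the input is low, a pull-down device that conducts when the input is high, so in either steady state only one of the pair is on and there is no direct path from the supply to ground.

```python
def cmos_inverter(input_high: bool) -> dict:
    """Simplified static CMOS inverter.

    pmos_on: pull-up transistor, conducts when the input is LOW.
    nmos_on: pull-down transistor, conducts when the input is HIGH.
    In either steady state exactly one device conducts, so no current
    flows straight from the supply to ground (ignoring leakage).
    """
    pmos_on = not input_high           # pulls the output up to the supply
    nmos_on = input_high               # pulls the output down to ground
    output_high = pmos_on and not nmos_on
    shorted = pmos_on and nmos_on      # would mean supply tied to ground
    return {"output_high": output_high, "static_current_path": shorted}

for level in (False, True):
    print(f"input high={level}: {cmos_inverter(level)}")
```

Significant current flows only during the brief moment when the input is changing and both devices are partially on, which is why CMOS chips sip power when they are idle.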
Modern transistors have reached truly microscopic dimensions. Current cutting-edge processors use transistors with features measured in nanometers, or billionths of a meter. To put this in perspective, a single modern transistor is thousands of times smaller than the width of a human hair.
This incredibly small scale allows engineers to pack ever more transistors into the same area, which is why everyday computing devices now contain such staggering numbers of them.
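As a hedged back-of-the-envelope check on that claim (the footprint and die size below are round-number assumptions for illustration, not figures for any specific chip), even a generously sized transistor footprint on a fingernail-sized die yields counts in the tens of billions.

```python
# Rough density estimate; both numbers are assumptions for illustration.
transistor_footprint_nm2 = 50 * 50            # assume ~50 nm x 50 nm per transistor
die_area_mm2 = 100                            # assume a ~100 mm^2 smartphone-class die

die_area_nm2 = die_area_mm2 * (1_000_000 ** 2)   # 1 mm = 1,000,000 nm
transistors = die_area_nm2 / transistor_footprint_nm2
print(f"~{transistors / 1e9:.0f} billion transistors fit in principle")
```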
At the start of the episode, I said that almost all of you own billions of transistors, even if you don't know it. That is not an exaggeration.
The Apple A18 processor, which is used in devices like the iPhone 16, contains approximately 15.2 billion transistors, while the higher-end A18 Pro boasts around 18 billion transistors.
Desktop and laptop CPUs from Intel and AMD have a similar number of transistors.
That’s just the central processor. If you include graphics cards, like the Nvidia RTX 5090, you are talking about another 90 billion transistors.
Transistors literally changed the world. You could say that they are the foundation of modern civilization. Without them, almost every modern electronic device wouldn’t exist. Instead of listening to podcasts, you’d still be huddled around a giant vacuum tube radio in your living room.