Origins of the Internet



Podcast Transcript

If you are listening to my words right now, then you are obviously an internet user. 

The internet has arguably been the most transformative technology of the last fifty years. 

But it wasn’t developed overnight or all at once. It was a gradual process driven by the need to solve specific problems, and no one knew at the time that it would become the basis of a global network of computers. 

Learn more about the origins of the Internet and how it was created on this episode of Everything Everywhere Daily. 


Almost everyone uses the Internet in some form every day, yet most people have never considered where it came from or how it works. 

There is an argument to be made that the internet, taken as a whole, with all of the information it contains and all of the communication it facilitates, is the greatest thing ever built in the history of humanity. 

But it didn’t start that way. The internet had rather modest goals to begin with, even though a few visionary people knew from very early on just what potential it held. 

The computing landscape in the 1960s was radically different from what we know today. Computers were enormous, expensive, and rare, typically housed in government facilities, large research institutions, and major corporations. 

They cost hundreds of thousands or even millions of dollars, and an entire university or department would often share a single machine. Early computers, such as the IBM 7090 or the CDC 6600, filled entire rooms, required specialized staff to operate, and were often isolated from one another physically and functionally.

Access to a computer was precious and carefully scheduled. Users would write programs (often on punch cards), submit them to an operator, and wait hours or even days for results. Real-time interaction was virtually nonexistent. Moreover, the software environment was highly localized: each machine had its own operating system, and data formats were often proprietary. 

As computers became more capable and universities, laboratories, and military installations invested in them, a problem began to emerge — fragmentation and isolation. Institutions could not easily share data, collaborate on software development, or coordinate research efforts. Every organization was essentially an island.

At the same time, by the early 1960s, the concept of time sharing started to take hold. Time sharing allowed multiple users to interact with a computer at once by quickly switching between different tasks, opening the door to a more dynamic, interactive mode of computing. 

This suggested a future in which computers could support communities of users, not just isolated programmers, and hinted at the greater potential for networking.

There was also a growing strategic need to better use expensive computational resources, particularly within the United States government and military research agencies like ARPA, the Defense Department’s Advanced Research Projects Agency.

Different research centers often had complementary strengths: one might have better software tools, another more powerful hardware, another specialized expertise. 

Yet moving data or programs physically via magnetic tapes or printouts between sites was slow, inefficient, and unreliable. Remote access was clearly desirable; researchers needed a way to share resources, exchange ideas, and collaborate without being physically present at the same site.

Around this time, there was another problem that was being considered. 

In the 1960s, communications, like the telephone system, were largely based on circuit switching. In circuit switching, when two parties communicate, a dedicated physical circuit is opened between them for the entire conversation. 

If that line were cut or damaged, say, by a nuclear bomb, communication would immediately fail. Moreover, while the line was open, it could not be used by anyone else, even if neither party was speaking at that moment, which was an inefficient use of resources.

Paul Baran, who worked at the RAND Corporation in the United States, and Donald Davies in the United Kingdom, began asking: What if communication could be made more decentralized and dynamic? 

In 1964, Baran proposed a radical alternative: instead of fixed lines, messages could be broken into small pieces, each piece could travel independently through whatever routes were available at the time, and the receiving system could reassemble them.

Each “packet” of data would carry not just its contents, but also a destination address.
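
To make that idea concrete, here is a small, purely illustrative Python sketch of breaking a message into addressed packets and reassembling them at the destination. This is not historical code; the packet size and field names are invented for the example.

```python
# Illustrative sketch of packet switching: a message is split into small,
# independently addressed packets that can travel by different routes and
# be reassembled at the destination. Field names and sizes are invented.

import random

PACKET_SIZE = 8  # bytes of payload per packet (tiny, for demonstration)

def split_into_packets(message: bytes, src: str, dst: str) -> list[dict]:
    """Break a message into packets, each carrying addresses and a sequence number."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"src": src, "dst": dst, "seq": seq, "payload": chunk}
            for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[dict]) -> bytes:
    """Put packets back in order by sequence number and rejoin the payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Messages travel as independent packets."
packets = split_into_packets(message, src="UCLA", dst="SRI")

random.shuffle(packets)  # packets may arrive out of order via different routes
assert reassemble(packets) == message
print(f"{len(packets)} packets delivered and reassembled correctly")
```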

This approach had several key theoretical advantages over circuit switching:

First was resilience. If one path or node were destroyed, packets could be automatically routed through other working paths. The system didn’t require a single, fragile central hub.

Second was efficiency. Since packets could share network paths with packets from many other users, bandwidth could be dynamically used instead of sitting idle like a reserved phone line.

Third was scalability. More users could be added without having to lay out an entirely new set of dedicated lines for each connection.

Finally, it was cheaper. Sharing a common infrastructure lowered the overall cost compared to maintaining many individual, dedicated circuits.

Baran’s idea envisioned a network with no single point of failure, which was extremely attractive to military planners worried about the survivability of command-and-control systems during a nuclear conflict.

Meanwhile, Donald Davies, who coined the term “packet,” was motivated by the efficient use of expensive computer systems in civilian settings, such as time-sharing large mainframes among many users.

Here, I should also mention the work of the visionary I mentioned earlier: Joseph Carl Robnett Licklider. 

Licklider joined ARPA in 1962 and wrote a paper on something he called the Galactic Network. 

It was a visionary concept, imagining a globally interconnected set of computers through which anyone, anywhere, could quickly access data and programs from any site. Licklider envisioned a network that would allow widespread information sharing, collaboration among distant researchers, and even real-time communication—in essence, an early sketch of what would later become the Internet.

The network ideas of Licklider and the packet-switching ideas of Paul Baran were combined in an actual proposal that became known as ARPANET. 

When ARPANET first launched in 1969, its initial host-to-host protocol was the Network Control Protocol, or NCP, which ran on top of the network’s underlying packet switching.

To handle the packet switching, computers called Interface Message Processors had to be installed in each location. These processors were about the size of a large refrigerator and were equivalent to a router you might have in your home. 

The first Interface Message Processor was installed at UCLA in September 1969. 

The second was installed at the Stanford Research Institute in October.

Once you had two nodes, you had a network. 

On October 29, 1969, the first message was sent from UCLA to SRI, attempting to log in by typing “LOGIN.” However, the system crashed after receiving only the first two letters, “LO.” This humble beginning marked the first transmission on what would become the Internet.

By December, UC Santa Barbara and the University of Utah were connected to the network.

The Network Control Protocol, which governed communication on ARPANET, began to run into limitations in the early 1970s. 

The primary weakness of the Network Control Protocol was that it was designed for communication within a single, relatively controlled network and could not handle the complexities of connecting multiple independent networks. 

NCP assumed that the underlying packet-switched network would reliably deliver data, so it did not include provisions for dealing with packet loss, retransmissions, or routing failures, which had been among the original motivations for packet switching in the first place.

Furthermore, NCP lacked a standardized system for addressing hosts across different networks, making it impossible to interconnect emerging packet networks, such as satellite, wireless, and local area networks. 

As the number and diversity of computer networks grew in the 1970s, it became clear that NCP’s limitations in flexibility, scalability, and fault tolerance made it inadequate for the future needs of networking.

As an aside, this growing collection of interconnected networks came to be called an internetwork, or a network of networks, which was eventually shortened to just “internet.”

In response, Robert Kahn, then at DARPA, proposed a new idea for open-architecture networking in 1973. Instead of treating the network as a single, homogeneous entity, the new system would treat each network as a “black box” with its own internal methods. This would ensure that packets could travel across different networks without modification. 

Kahn partnered with Vinton Cerf, who was working at Stanford University at the time, and together, they began designing a new protocol to overcome NCP’s weaknesses. 

Their new protocol was called Transmission Control Protocol or TCP. 

However, the protocol still had weaknesses. During development, it became clear that splitting TCP into two distinct layers would provide greater flexibility. 

This modular design allowed different kinds of applications—some needing reliability, like file transfers, and others prioritizing speed, like real-time voice communication—to make different demands on the network without burdening the entire system.

The first protocol is known as the Internet Protocol, or IP. 

IP is responsible for addressing and routing. It ensures that packets of data know how to travel across networks to reach the correct destination. Every device connected to a network using IP has an IP address. When data is sent, IP breaks it into smaller units called packets, each labeled with the source and destination IP addresses. 

IP then forwards these packets across various interconnected networks, using routers to direct them toward their destination. 

However, IP itself is unreliable. It doesn’t guarantee that packets arrive in order, arrive at all, or arrive uncorrupted. Its job is simply to move packets from point A to point B as best it can, even if the path changes mid-journey.
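
As a rough illustration of that best-effort behavior, here is a toy Python sketch of IP-style forwarding, where each hop looks only at the destination address. The topology, addresses, and loss rate are invented for the example; real routers are far more sophisticated.

```python
# Toy sketch of IP-style forwarding: each router consults a forwarding table
# keyed by destination address and passes the packet to the next hop.
# Delivery is best-effort; nothing guarantees arrival or ordering.
# The topology, addresses, and loss rate are invented for this example.

import random

FORWARDING_TABLES = {
    "router-A": {"10.0.0.2": "router-B"},
    "router-B": {"10.0.0.2": "host-2"},
}

def route(packet: dict, first_hop: str):
    """Forward a packet hop by hop; return where it ended up, or None if lost."""
    hop = first_hop
    while hop in FORWARDING_TABLES:
        if random.random() < 0.1:      # links are unreliable; packets can vanish
            return None
        hop = FORWARDING_TABLES[hop].get(packet["dst"])
        if hop is None:                # no route to that destination
            return None
    return hop

packet = {"src": "10.0.0.1", "dst": "10.0.0.2", "payload": b"hello"}
print("packet ended up at:", route(packet, "router-A"))  # IP makes no promises
```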

The second protocol, TCP, ensures reliable communication between two computers. It operates at a higher layer than IP and solves the problems that IP leaves open. TCP establishes a connection between sender and receiver before data transmission starts, a process called a handshake, and it ensures that packets arrive in the correct order and without errors. 

If a packet is lost, duplicated, or arrives out of sequence, TCP detects this and retransmits packets or reorders them as necessary. TCP also manages flow control to avoid overwhelming a slow receiver with too much data at once and congestion control to adjust the sending rate if the network becomes too busy.
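
As a highly simplified sketch of the ordering-and-acknowledgement side of this, here is a toy Python receiver that buffers out-of-order segments, delivers data only in sequence, and reports the next sequence number it expects. The segment format is invented for the example; real TCP adds handshakes, checksums, flow control, and congestion control on top of this idea.

```python
# Toy sketch of TCP-style reliable delivery: the receiver holds out-of-order
# segments, hands data to the application only in sequence, and returns a
# cumulative acknowledgement (the next sequence number it still needs).

class Receiver:
    def __init__(self):
        self.expected_seq = 0   # next in-order segment we are waiting for
        self.buffer = {}        # out-of-order segments held for later
        self.delivered = b""    # data handed to the application, in order

    def receive(self, seq: int, payload: bytes) -> int:
        """Accept a segment and return the cumulative ACK (next expected seq)."""
        if seq >= self.expected_seq:
            self.buffer[seq] = payload
        # Deliver everything that is now contiguous.
        while self.expected_seq in self.buffer:
            self.delivered += self.buffer.pop(self.expected_seq)
            self.expected_seq += 1
        return self.expected_seq  # the sender retransmits anything before this

receiver = Receiver()
print(receiver.receive(2, b"N"))   # arrives early: ACK still says "need 0"
print(receiver.receive(0, b"LO"))  # in order: ACK advances to 1
print(receiver.receive(1, b"GI"))  # fills the gap: everything delivered, ACK 3
print(receiver.delivered)          # b"LOGIN"
```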

Together, these protocols became known as TCP/IP. Although the protocols have been updated since then, they still form the basis for the entire Internet today.

The split in the protocols took place in 1978, and it soon became clear that TCP/IP was the future and was going to replace NCP.

After years of testing and refinement, the official switch from NCP to TCP/IP on ARPANET was scheduled for January 1, 1983, known as “flag day,” when all nodes had to transition to the new standard. This event is often considered the real birth of the modern internet.

TCP/IP, of course, is just the base layer of the Internet. The average internet user has no clue what is happening at the lowest levels of the network. 

They are familiar with many of the things that have been built on top.

Email became a popular application in the 1970s, but there was no standardized method for sending emails. SMTP, or Simple Mail Transfer Protocol, was developed in the early 1980s to create a standardized way for computers to send email across networks. 
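
SMTP is still what mail systems speak today. As a hedged illustration, here is roughly how a program might hand a message to an SMTP server using Python’s standard library; the server name, port, and addresses below are placeholders, and a real setup would also need authentication and TLS.

```python
# Minimal sketch of handing a message to an SMTP server with Python's
# standard library. Server, port, and addresses are placeholders.

from email.message import EmailMessage
import smtplib

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Hello over SMTP"
msg.set_content("This message is delivered using the Simple Mail Transfer Protocol.")

with smtplib.SMTP("mail.example.com", 25) as server:  # placeholder server
    server.send_message(msg)
```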

FTP, or File Transfer Protocol, was one of the earliest application protocols designed for ARPANET, with its first version specified in 1971. FTP was created to allow users to reliably transfer files between remote computers over the network, addressing the need for researchers to share software, documents, and datasets. It initially ran over NCP and was later adapted to TCP/IP after the 1983 transition.
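
For comparison, here is a minimal sketch of what a basic FTP download looks like using Python’s standard library; the host, credentials, and filename are placeholders invented for the example.

```python
# Minimal sketch of retrieving a file over FTP with Python's standard library.
# The host, credentials, and filename are placeholders.

from ftplib import FTP

ftp = FTP("ftp.example.com")                 # placeholder host
ftp.login("anonymous", "guest@example.com")  # anonymous FTP credentials
with open("dataset.txt", "wb") as f:
    ftp.retrbinary("RETR dataset.txt", f.write)
ftp.quit()
```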

Usenet was created in 1979 by two graduate students, Tom Truscott and Jim Ellis, at Duke University. They wanted to build a decentralized system for sharing messages and discussions among computer users. 

Inspired by the idea of ARPANET but without access to its restricted network, they designed Usenet to work over simple dial-up telephone connections using the Unix-to-Unix Copy Protocol (UUCP). The system allowed users to post and read messages in organized categories called newsgroups, effectively creating one of the world’s first large-scale online communities. 

Of course, I haven’t even mentioned the World Wide Web and the Hypertext Transfer Protocol, or HTTP, which was the thing that really made the internet explode in popularity.

That story will be for a later episode. 

The computer scientists at UCLA and Stanford who made the first internet connection in 1969 could never have guessed that that simple act would be the start of a revolution that would change the world for the better and for worse. 


The Executive Producer of Everything Everywhere Daily is Charles Daniel. The Associate Producers are Austin Oetken and Cameron Kieffer.

Today’s review comes from listener SSWEnviron over on Apple Podcasts in the United States. They write:

An Excellent Daily Listen 

Gary writes great episodes on topics you never thought you wanted to know about until you hear him speak of them. You will look at the title of an episode, such as “The History of Salt,” and wonder why you would like to listen. Then you listen and you wonder no more. 

Fascinating topics will have you listening to multiple episodes when you may have only wanted to learn a single topic. This is my go-to for a long walk or run. 

Thanks, SSW! I’m glad you enjoyed the episode on salt. Perhaps in the future I can do one on pepper, and then maybe an episode on Salt and Pepper. On second thought, I probably don’t want to Push It. 

Remember, if you leave a review or send me a boostagram, you, too, can have it read on the show.