Open Source Software



Podcast Transcript

Computer software seems to be everywhere. No matter what kind of computer you use or where you use it, all computers use software. 


That is the entire point of a computer. 

However, not all software is the same. There are actually enormous differences between software applications, not just in what they do, but in how they were written, the business models behind them, the legal licenses that cover them, and the philosophy that guides them.

Learn more about free and open source software, what it is, and how it works on this episode of Everything Everywhere Daily.


Software is ubiquitous in the modern world. It isn’t just in our smartphones and desktop computers, it is in our televisions, refrigerators, and washing machines. 

Some people have become billionaires from the creation of software. Around the world, there are probably hundreds of thousands, if not millions, of people who make their living from writing computer software.

In fact, I’m confident that some of you listening to me right now are involved in the development of computer software. 

As important as software is today, it wasn’t always considered so important. 

The first general-purpose programmable electronic computer is generally considered to be ENIAC, the Electronic Numerical Integrator and Computer, which was completed in 1945.

ENIAC initially did not have stored programs. You didn’t load code into memory like modern computers. Programming was done by physically rewiring cables, setting switches, and configuring plugboards.

A single program could take days or weeks to physically set up. If you wanted the computer to do something else, you had to do it all over again. 

So, software with respect to ENIAC was just a set of instructions for which cables to set up and which switches to flip. 

At the time, no one was even considering that this was something that could be copyrighted or owned. It was the equivalent of a cooking recipe more than anything else. 

Soon after ENIAC, computers had the ability to store instructions in memory. 

This episode isn’t about the history of programming languages, so suffice it to say that compiled programming languages were developed in the 1950s. 

A compiled programming language is one where the source code is translated into machine code by a compiler before execution, allowing the program to run directly on hardware.

Machine code is the lowest-level programming language. It consists of the 1s and 0s that a computer can execute directly.
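
To make that concrete, here is a minimal sketch in C, a classic compiled language. The file name and compiler command are just illustrative:

    /* hello.c - a tiny C program.  Compiling it, for example with
       "cc hello.c -o hello", translates this source code into
       machine code that the CPU executes directly, with no
       interpreter sitting in between. */
    #include <stdio.h>

    int main(void) {
        printf("Hello from compiled machine code!\n");
        return 0;
    }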

These early computers had two relevant attributes for the purposes of this episode: they were extremely large and expensive, and they weren’t very powerful, at least compared to the computers that would come later. 

That means from a business standpoint, what was being sold and what everyone had an interest in was the hardware. 

The size of the programs written for these computers was relatively small. For example, programs for the IBM 650, the business computer IBM introduced in 1953, were typically about 100 to 1,000 instructions long.

Each instruction was a single 10-decimal-digit word, which works out to roughly 40 bits.

So, in terms you can compare with modern computers, a 100-instruction program would be about 4,000 bits, or 500 bytes.

A 1,000-instruction program would be about 40,000 bits, which is 5,000 bytes, or 5 kilobytes.
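
For anyone who wants to check that back-of-the-envelope math, here is a small C sketch of it, assuming the approximate figure of 40 bits per instruction:

    /* size_estimate.c - the rough arithmetic from above, assuming
       about 40 bits per IBM 650 instruction (an approximation). */
    #include <stdio.h>

    int main(void) {
        const int bits_per_instruction = 40;
        for (int n = 100; n <= 1000; n *= 10) {
            int bits = n * bits_per_instruction;
            printf("%4d instructions ~ %6d bits ~ %5d bytes\n",
                   n, bits, bits / 8);
        }
        return 0;
    }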

So, these programs weren’t very big. 

In these early days of computing, software was generally shared freely among researchers and developers. 

As computer hardware was the primary commercial product, software was often distributed with source code as a practical matter to anyone who purchased the computer. 

The concept of freely shared software began in academic and research institutions where collaboration was the norm. At places like MIT, Berkeley, and Bell Labs, programmers routinely shared code to solve problems and build upon each other’s work.

Given that computer software was totally useless to anyone who didn’t own a very expensive computer, which was limited to large institutions, no one was concerned about things like ownership or rights. 

The SHARE user group was formed in 1955 as one of the first computer user groups in history. It was established by a collection of IBM mainframe customers who were using IBM’s 704 scientific computing system.

SHARE’s name wasn’t an acronym, but rather reflected its core purpose: to share information, software, and resources among its members. At a time when computers were enormously expensive and software was not viewed as a separate commercial product, SHARE provided a formal structure for collaboration.

As a side note, the SHARE user group still exists today. 

This ethos of sharing software continued into the 1960s and 1970s. 

An important development occurred in 1973 with the release of UNIX.

UNIX is a multiuser, multitasking operating system whose development began in 1969 at Bell Labs, which, if you remember from a previous episode, invented everything.

UNIX was created as a simpler, more flexible alternative to the complex, resource-heavy systems of the time. Designed to be portable, efficient, and modular, it introduced key concepts like the hierarchical file system, pipes, and a shell-based command-line interface. 
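
Pipes in particular proved enormously influential: they let the output of one program feed directly into the input of another. Here is a simplified sketch in C of how a pipeline like the shell command "ls | wc -l" works under the hood, with most error handling omitted:

    /* pipe_demo.c - a simplified sketch of the UNIX pipe concept,
       roughly equivalent to the shell pipeline: ls | wc -l */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); exit(1); }

        pid_t pid = fork();
        if (pid == 0) {                  /* child process runs "ls" */
            dup2(fd[1], STDOUT_FILENO);  /* send stdout into the pipe */
            close(fd[0]); close(fd[1]);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp"); exit(1);
        }
        /* parent process runs "wc -l", reading from the pipe */
        dup2(fd[0], STDIN_FILENO);       /* take stdin from the pipe */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp"); exit(1);
    }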

However, AT&T was prohibited from entering the computer business by a 1956 consent decree. This led the company to license UNIX, including its source code, to universities for minimal fees.

Academic institutions, particularly the University of California, Berkeley, received, studied, and modified the code. Computer science students learned programming by reading actual production code.

This created a generation of programmers who expected to be able to see and modify source code, establishing a culture that valued openness and knowledge sharing.

BSD, or the Berkeley Software Distribution, originated in the late 1970s at the University of California, Berkeley, as a series of enhancements to AT&T’s original UNIX. Led by Bill Joy and others, the project began by adding useful tools and features and eventually evolved into a full-fledged operating system.

AT&T began complaining that BSD infringed on its rights, which eventually led to a lawsuit in 1992. The case was ultimately resolved in Berkeley’s favor, with only minor concessions.

The issues with AT&T were just one of many changes happening in the world of software in the late 1970s and early 1980s. As computers became more ubiquitous and software found its way into more devices, more companies began to make their software proprietary, and the culture of freely sharing software began to wane.

In the wake of these changes to the culture of software, the GNU Project was launched in September 1983 by Richard Stallman, then a programmer at MIT’s Artificial Intelligence Laboratory. The project name is a recursive acronym for “GNU’s Not Unix” – a humorous acknowledgment that while GNU was designed to be Unix-compatible, it would contain no Unix code.

In 1985, Richard Stallman founded the Free Software Foundation to support and promote the development of free software—software that respects users’ freedom to use, study, modify, and share it. 

As proprietary software became more common in the 1980s, the FSF provided legal, philosophical, and organizational support for the free software movement, including the creation of the GNU General Public License, or GPL, a license that ensured software would remain free for all users.

More on “free software” in a bit…

The GNU operating system was lacking one major component: a kernel. 

A kernel acts as a bridge between applications and the physical machine, ensuring that programs run efficiently and safely on the computer’s hardware. It handles essential tasks like memory management, process scheduling, device control, and system calls.
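
As a small illustration, here is a sketch in C of a program making a system call. The write() call asks the kernel to handle the actual output; the program itself never touches the hardware:

    /* syscall_demo.c - a minimal system call example.  write()
       hands the work to the kernel, which manages the terminal
       device on the program's behalf. */
    #include <unistd.h>

    int main(void) {
        const char msg[] = "The kernel handled this output.\n";
        write(1, msg, sizeof msg - 1);  /* descriptor 1 is stdout */
        return 0;
    }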

The kernel issue was addressed in 1991 by a Finnish computer science student named Linus Torvalds.

Torvalds released Linux under the GPL, allowing anyone to use, modify, and distribute it freely. Because the GNU Project had already developed many essential system utilities but lacked a working kernel, Linux quickly became the missing piece of a fully functional, free Unix-like operating system, called GNU/Linux, although many people today just shorten it to Linux.

The 90s saw the rise of the internet, which dramatically improved the ability to share code and also created more demand for free software. 

Many of the software components that make up the backbone of the internet were developed during this time. 

Apache, for many years the world’s most popular web server; PHP, a very popular web scripting language; and MySQL, a popular free database, were all developed in the 1990s.

You might have noticed that this far into the episode, I have yet to mention the phrase which is the title of this episode, Open Source. 

In the late 1990s, the term “open source” was developed as a way to rebrand and reframe the free software movement in more pragmatic, business-friendly terms. 

While the Free Software Foundation emphasized software freedom as an ethical and political issue, some developers and advocates believed this messaging limited the broader adoption of free software, especially within the commercial world. 

In 1998, after Netscape released the source code for its browser, which became Mozilla, a group including Eric S. Raymond, Bruce Perens, and Christine Peterson coined the term “open source” to highlight the practical benefits of collaborative, transparent development—such as higher quality, faster innovation, and lower costs—without the ideological framing. 

This led to the creation of the Open Source Initiative to define and promote open source software through a more inclusive and commercially palatable lens. The movement rapidly gained momentum, drawing in major companies and reshaping the software industry.

Here, I should explain some differences between free and open source, because it can be confusing. 

If software is free, as in you don’t have to pay for it, it doesn’t mean that it is open source. Someone could create a program and allow people to download it without payment, but still retain the full rights to the code. 

The word “free” in the name of the Free Software Foundation refers to freedom, as in liberty. Because anyone who receives free software is free to copy and share it, it also tends to be free as in gratis.

Free software in this sense is also open source software.

However, not all open source software is free as in freedom. Depending on the license, there may be restrictions placed on what you can do with the code.

There are multiple licenses available under which open source software can be published. Some of the most popular licenses include the previously mentioned GPL, the MIT License, the Apache License 2.0, the BSD License, the Mozilla Public License, and the Eclipse Public License.

Each license is different and provides different rights to the software users. 

What they all have in common is that they allow users to run the software freely and to view and modify the source code. Where they differ is in what happens next: copyleft licenses like the GPL require that any modified versions be distributed under the same license, so you can’t take that code and turn it into a proprietary product, while permissive licenses like the MIT and BSD licenses do allow open source code to be incorporated into proprietary software.
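
As a concrete illustration, applying a permissive license is often as simple as putting a notice at the top of each source file, alongside a full LICENSE file in the project. This is a hypothetical, abbreviated example:

    /* widget.c - part of a hypothetical open source project.
     *
     * Copyright (c) 1998 Example Author
     *
     * Permission is hereby granted, free of charge, to any person
     * obtaining a copy of this software, to use, copy, modify, and
     * distribute it, provided this copyright notice is included in
     * all copies.  (Abbreviated, MIT-style wording.)
     */
    #include <stdio.h>

    int main(void) {
        puts("This code may be used, studied, modified, and shared.");
        return 0;
    }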

Sometimes there are disagreements about the direction of an open source project, and a group will take the code and create a fork, which is just a way of saying they are going to take the project in a different direction.

So, in a world with multibillion-dollar software companies, how popular is open source software?

The answer is: extremely popular, and you probably use it every day without even knowing it.

Let’s start with the GNU/Linux operating system. Linux has never really caught on as a desktop operating system. Today, it has about a 4% share of the desktop operating system market.

However, Linux is the number one operating system for web servers. So, if you visit a website, there is a very good chance it is running Linux. Of the top 500 supercomputers in the world, 100% of them run on Linux.

The Linux kernel is also the core of the Android operating system for smartphones, which has a global 72% share of the market. 

Your web browser almost certainly contains open source software. The Google Chrome browser is based on the open source Chromium project, and in addition to Chrome, the Microsoft Edge browser, Opera, and Brave all use Chromium.

Apple’s Safari browser uses the WebKit engine, which is open source, and the entire Firefox browser is open source. 

The Apache web server is open source, and it has long been among the most popular web server applications.

Roughly forty percent of all websites on the internet run on WordPress, which is open source software.

One of the most popular websites in the world is Wikipedia, which runs entirely on open source software.

There are open source alternatives that exist for almost every type of proprietary program you can think of, including word processors, photo editing, and media players.

Whether or not you know it, open source software is absolutely pivotal to the working of the internet. Take it away, and everything would cease to function. 

This pillar of our online world stems from the early culture of programmers sharing their work with each other. 


The Executive Producer of Everything Everywhere Daily is Charles Daniel. The Associate Producers are Austin Oetken and Cameron Kieffer.

Today’s review comes from listener Skunk1010 over on Apple Podcasts in the United States. They write:

Great variety 

This show does such a great job of bringing the listener into a wide range of topics, from science and math to history and sports. Gary does a very good job of making the subject matter interesting and digestible. His travels allow him to speak on topics far and wide, including my own “back yard” (the Cardiff Giant). 

Keep up the good work!  

Thanks, Skunk! I’m glad that you enjoy listening to them as much as I enjoy making them. 

Remember, if you leave a review or send me a boostagram, you, too, can have it read on the show.