
## Transcript

One of the simplest mathematical statements possible is 2+2=4.

While the concept is very easy to understand, when you write it down you have to use mathematical symbols which are, historically speaking, a relatively recent invention.

At one point, mathematicians were doing reasonably complicated work without the benefit of symbols at all, something that is unthinkable today.

Learn more about mathematical symbols, where they came from, and why they exist, on this episode of Everything Everywhere Daily.

—————–

This episode is sponsored by NordVPN.

For those of you who don’t know, VPN stands for Virtual Private Network.

It allows you to surf the web through an encrypted connection routed through another computer, and that computer can be anywhere.

If you don’t use a VPN, you need to do so for a host of reasons.

- It can protect you if you are using a public wifi connection.
- It can help you get around firewalls in countries that block internet traffic.
- It can allow you to access streaming content from other countries.

NordVPN has over 5500 servers in 59 countries, so you can safely and securely surf from anywhere in the world.

To secure your internet, go to NordVPN.com/every

Once again, that’s NordVPN.com/every

—————–

As I mentioned in the intro, there was a time when mathematics was done without symbols. If you can imagine doing your elementary school math problems without the use of plus, minus, or equals signs, you can appreciate how hard this would be.

In fact, it would be really difficult to do right now without the use of symbols.

The first people we know of who used mathematics were the ancient Babylonians and Sumerians. With their cuneiform system of writing, they were able to do reasonably complicated mathematics.

Their numeral system was base-60, as opposed to ours, which is base-10. One theory holds that two earlier peoples merged to become the Sumerians: one group had a base-12 counting system and the other a base-5 system. They resolved the difference by using 60, which is just 5 × 12.
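For anyone curious how a positional base-60 system actually encodes a number, here is a minimal illustrative sketch in Python (the function name `to_base60` is my own invention, not anything historical):

```python
def to_base60(n):
    """Represent a non-negative integer as a list of base-60 digits,
    most significant first -- a toy model of sexagesimal notation."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 60)  # remainder is the next (least significant) digit
        n //= 60               # shift one base-60 place to the right
    return digits[::-1]        # reverse so the biggest place value comes first

# 3661 = 1*3600 + 1*60 + 1, so its base-60 digits are [1, 1, 1]
print(to_base60(3661))
```

The same positional idea survives today in how we write hours, minutes, and seconds.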

They were able to solve quadratic equations, knew about square and cube roots, and knew the Pythagorean theorem well before Pythagoras.

However, they lacked a few things. They didn’t have a zero, which is something I talked about in my episode about zero, and didn’t really have any symbolic expressions to do equations.

It wouldn’t look like algebra as we are familiar with it.

The Egyptians, Greeks, Romans, and Arabs all managed to do mathematics at some level without the use of mathematical symbols.

Algebra was actually named by Arab scholars; it comes from *al-jabr*, which literally means "the reunion of broken parts."

Arab scholars probably took mathematics as far as anyone in history up until that time, but they still mostly weren’t using symbolic notation.

The last great classical Arab mathematician, the 15th-century scholar Abū al-Hasan ibn Alī al-Qalasādī, used symbols, but they were just letters from the Arabic alphabet.

The symbols we know and use today weren’t created until the 15th century.

The first use of the plus sign was in 1489 by German mathematician Johannes Widmann.

The plus sign represents the letter "t," which was a short form of the Latin word *et*, meaning *"and."*

Likewise, Widmann was the first person to use the minus sign as well. The minus sign is believed to come from a tilde which was sometimes placed over a number to represent subtraction.

In his treatise, he explicitly defined the new symbols he had created: *"was − ist, das ist minus, und das + ist das mer,"* with *mer* being German for "more."

There were earlier attempts to create symbols that did the same thing, but they didn't catch on. The Egyptians had a symbol that could be used for addition, and its mirror image could be used for subtraction, but it never spread beyond Egypt.

More than a century later, in the early 17th century, the multiplication symbol appeared. This is, of course, just the shape of the letter "x."

The first use of "x" to denote multiplication was in 1618, in an anonymous appendix to an English translation of Scottish mathematician John Napier's book *Mirifici Logarithmorum Canonis Descriptio.*

The symbol was popularized by English mathematician William Oughtred, who explained it in his 1631 book *Clavis Mathematicae*: *"Multiplication of species [i.e. unknowns] connects both proposed magnitudes with the symbol 'in' or ×: or ordinarily without the symbol if the magnitudes be denoted with one letter."*

Technically, in printing, the multiplication symbol isn't actually the letter x. It is a slightly smaller character of the same shape, raised above the baseline.

There can be confusion when using a regular keyboard with “x” as a multiplication symbol and using “x” as a variable. Gottfried Leibniz, one of the co-inventors of calculus, disliked using an “x” for this reason.

For that reason, a dot is also sometimes used as a multiplication symbol. This is more popular in Europe, and it too can be confusing, because a dot is also used for the dot product, a special type of vector multiplication.

With the advent of computers, the asterisk * has been adopted for multiplication simply because it is in the ASCII character set.

Just as with multiplication, there are several symbols for division as well.

The earliest of the modern division symbols which we use is called the obelus. This is the straight line with a dot above and below it.

This was first used in 1659 by Swiss mathematician Johann Rahn.

Of all of the symbols I've mentioned, this is the one that has been deprecated by modern mathematicians. In fact, you can't really find it in use very much at all outside of elementary school math courses and the division keys on some calculators.

Personally, I hate the obelus. I find it really confusing, and I don't think kids should be taught division using it, because they will never see it again in their lives.

The preferred division symbol is called the solidus, or forward slash. It is very similar to, and conveys the same meaning as, the horizontal line used in fractions. This was a much later creation and wasn't used to represent division until 1845.

The adoption of computers only strengthened the use of the solidus over the obelus as the obelus isn’t on most keyboards.

The equal symbol has a very interesting origin story.

The equal symbol was first used in 1557 by Welsh mathematician Robert Recorde in his book *The Whetstone of Witte*.

In his book, he was writing equations, and over 200 times he had to write the phrase “is equal to”. He basically got sick of writing it over and over, so he eventually created a symbol so he didn’t have to write it anymore.

He said in his book, *"And to avoid the tedious repetition of these words: 'is equal to' I will set as I do often in work use, a pair of parallels, or duplicate lines of one [the same] length, thus: =, because no 2 things can be more equal."*

There is a similar, lesser-used symbol with three parallel lines, simply called the triple bar. It was first used in 1801 by Carl Friedrich Gauss, and it is sometimes used in logic or in modular arithmetic.

The percent sign comes from the Italian phrase *per cento*. It was abbreviated as a "p" with two zeros; eventually the "p" was dropped, leaving just a slanted line with two zeros.

The square root symbol might have come from an Arabic letter used by the above-mentioned al-Qalasādī, or possibly from a lowercase letter "r."

The first use, in 1525, just looked like a checkmark. The horizontal line on top is called a vinculum; it was added to the checkmark symbol in 1637 by René Descartes to create the modern symbol we use today.

The greater-than and less-than symbols first appeared in 1631 in Englishman Thomas Harriot's posthumously published book *The Analytical Arts Applied to Solving Algebraic Equations.*

The infinity symbol, the number 8 on its side, is even older than the modern numeral 8, which is a Hindu-Arabic number. The earliest evidence for it goes back to the cross of St. Boniface in the 7th or 8th century.

The first use of it to signify infinity wasn't until 1655, when English clergyman John Wallis used it in his book *De sectionibus conicis.* He gave no explanation for why it was selected, but one hypothesis is that it is a variant of a Roman symbol for 1,000, written as the letter C, followed by an I, followed by a backward C.

The last symbol I’ll go over is that of pi.

Pi, of course, is just the Greek letter pi. However, its use to represent the ratio of the circumference of a circle to its diameter is actually relatively recent.

The knowledge that the ratio of the circumference to the diameter of a circle was special goes back to ancient China and Egypt.

What we refer to as the number pi began as a ratio of two Greek letters: pi over delta. Pi was chosen because it is the first letter of the Greek word for "perimeter," and delta is the first letter of "diameter." Englishman William Oughtred first used pi over delta in 1647.

The first use of the letter pi all by itself to represent the ratio was in 1706 by the Welsh mathematician William Jones.

There is a lot to be said about pi, but I’ll save that for a later episode, probably for next year’s Pi day.

You might have noticed that most of these symbols, especially the main ones, came into use over the course of about 150 years, starting in the late 15th century. Basically, once people started to use symbols, mathematics became easier, and then more people began to adopt symbols as shorthand for more things.

Mathematical symbols are still being created today as new branches of mathematics create new ideas which need to be easily expressed.

If you think about it, math symbols really aren't that much different from emojis. Each is a way to compress a complex thought into a single character.