The Mathematician, Philosopher, and Number Religion leader Pythagoras
How AI Research Revived Pythagoreanism and Confused Science with Philosophy
Recently, I wrote an article about how deep learning might be hitting its limitations and posed the possibility of another AI winter. I closed that article with a question: are AI’s limitations defined just as much by philosophy as by science? This article is a continuation of that topic.
A Brief History of Pythagoras
2500 or so years ago, there was a philosopher and mathematician in South Italy named Pythagoras. You may have heard of him, but the story behind the man who studied triangles and math theorems is much wilder than you probably think.
Pythagoras ran a number-worshipping cult, and his followers were called mathematikoi. Pythagoras told his followers to pray to numbers, particularly sacred ones like 1, 7, 8, and 10. After all, “1” is the building block of the entire universe. For some reason, the number “10” (called the Tetractys) was the most holy. It was so holy, in fact, that they made sacrifices to it every time a theorem was discovered. “Bless us, divine number!” they prayed to the number 10. “Thou who generated gods and men!”
According to Pythagoras, the universe cannot exist without numbers, and therefore numbers hold the meaning of life and existence. More specifically, the idea that rational numbers built the universe was sacred and unquestionable. Apart from enabling volume, space, and everything physical, rational numbers also enabled art and beauty, especially in music. So fervent was this sacred belief that, legend says, Pythagoras drowned a man for proving irrational numbers existed.
Are Our Thoughts Really Dot Products?
Fast forward to today. It may not be obvious to most people, but “artificial intelligence” is nothing more than some math formulas cleverly put together. Many researchers hope to use such formulas to replicate human intelligence on a machine. Now you may defend this idea and say “Cannot a math formula define intelligence, thoughts, and emotions?” Aha, gotcha. See what you just did there? No fava beans for you.
Notice how even though we have little idea how the brain works, even the most educated people (scientists, journalists, etc.) are quick to suggest an idea without evidence. Perhaps you find mathematics so convincing as a way to explain the world’s phenomena that you are almost certain emotions and intelligence can be modeled mathematically too. Is this not the natural human tendency to react to the unknown with a philosophy or worldview? Perhaps this is the very nature of hypotheses and theories.
But again, you don’t know if this is true.
Is every thought and feeling we have really a bunch of numbers being multiplied and added in linear algebra fashion? Are our brains, in fact, simply a neural network doing dot products all day? Reducing our consciousness to a matrix of numbers is certainly Pythagorean. If everything is numbers, then so are our thoughts and feelings. Perhaps this is why so many scientists believe general artificial intelligence is possible, as being human is no different than being a computer. It may also be why people are quick to anthropomorphize chess algorithms.
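The “dot products all day” description is not a metaphor: stripped of branding, a neural network layer really is a dot product followed by a nonlinear function. Here is a minimal sketch in Python (the sizes and random values are purely illustrative, not any particular model):

```python
import numpy as np

def layer(x, W, b):
    """One neural network layer: a matrix-vector dot product,
    a bias, and a nonlinearity (here, ReLU)."""
    return np.maximum(0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                              # the "input" -- just numbers
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

# A "deep" network is nothing more than these operations chained together.
output = layer(layer(x, W1, b1), W2, b2)
print(output.shape)  # (2,)
```

That chain of multiply-add-squash is the entirety of the machinery; the Pythagorean leap is the claim that thought itself is this kind of arithmetic.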
21st Century Pythagoreanism
For this reason I believe Pythagoreanism is alive and well, and the sensationalism of AI research is rooted in it. You might say, “Well, I get that Pythagorean philosophy says ‘everything is numbers,’ and by definition that includes our thoughts. And sure, maybe AI research unknowingly clings to this philosophy. But what about number worship? Are you really going to suggest that happens today?”
Hold my beer.
In Silicon Valley, a former Google/Uber executive started an AI-worshipping church called Way of the Future. According to documents filed with the IRS, the religious nonprofit states its mission is “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.” You might justifiably say this community exists on the extremes of society, but we cannot dismiss the high-profile people and companies involved, or how the church seeks to entrench itself in the scientific community. Here are some excerpts from its mission statements:
Way of the Future (WOTF) is about creating a peaceful and respectful transition of who is in charge of the planet from people to people + “machines”. Given that technology will “relatively soon” be able to surpass human abilities, we want to help educate people about this exciting future and prepare a smooth transition. Help us spread the word that progress shouldn’t be feared (or even worse locked up/caged).
Alright, never mind the fact that sensationalism about near-term AI capabilities was already alive and kicking in the 1960s. But let’s keep reading:
We believe that intelligence is not rooted in biology. While biology has evolved one type of intelligence, there is nothing inherently specific about biology that causes intelligence. Eventually, we will be able to recreate it without using biology and its limitations. From there we will be able to scale it to beyond what we can do using (our) biological limits (such as computing frequency, slowness and accuracy of data copy and communication, etc).
Okay, for all this talk about science and objectivity… there is so much Pythagorean philosophy filling in the gaps. The belief that intelligence is not biological but rather mathematical (because that is what AI is) is hardly proven, and yet it labels itself “science”, just as Pythagoras claimed his beliefs were “science”. And how can the claim that “intelligence is not rooted in biology” stand up to the fact that intelligence has only ever existed in biology?
Regardless, let’s just assume this group is not reflective of the general AI community (how many of you are going to church to worship an AI overlord anyway?). There are still plenty of journalists, researchers, and members of the general public who may not share these sentiments in a religious sense, but who are still influenced by them. Many people worry robots will take their blue- and white-collar jobs, or worse, stage a SkyNet-like takeover of society. Others worry we will become cyborgs in a figurative or literal sense and that AI will dehumanize humanity.
Science fiction movies definitely have not helped imaginations stay tempered within reality. But still, Silicon Valley researchers insist this can happen in the near future and continue to promote exaggerated claims about AI capabilities. They could simply be doing this to attract media attention and VC funding, but I think many sincerely believe it. Why?
This sensationalism, fear, and even worship of artificial intelligence is 21st century Pythagoreanism. It rests entirely on a theory that intelligence, thoughts, and emotions are nothing more than matrices, dot products, and nonlinear functions. If this theory indeed holds true, then of course a neural network could replicate human intelligence. But is human intelligence really that simple to model? Or should we acknowledge that human intelligence is not understood well enough to make this possible?
AI deities and robot uprisings aside, I’m not saying AI research is powerless or harmless. There are some useful applications and real concerns about bots being used to manipulate society. But that’s another discussion.
Pythagoras Says Everything is Numbers. So What?
So, everything is numbers in the domain of artificial intelligence and in Pythagorean philosophy. Why does this matter?
I am not saying Pythagoreanism is wrong, but rather that much of the scientific community fails to acknowledge it is driven just as much by philosophy as by science. One must be careful when claiming “science” without acknowledging one’s own worldview, because everyone lives by a philosophy whether they realize it or not. Philosophy forces us to reason about our existence, consider how we react to the unknown, and acknowledge our own biases.
Presuming how human intelligence works quickly crosses the line between science and philosophy. Failing to make this distinction is going to hurt the reputation of the scientific community. Before millions of dollars are invested and sunk into an AI startup, it might be a good idea to vet which claims about AI capability are merely philosophical. Time and time again, ambitious AI research has had a poor track record of credibility and of delivering what it says is possible. I think a lack of philosophical discourse and disclosure is largely responsible for this.
What If Everything Isn’t Numbers?
What if consciousness, intelligence, and emotions are not numbers and math functions? What if the human (and even animal) mind is infinitely more complex in ways we cannot model?
If we do not have rigorous philosophical discussions of these questions, we are kidding ourselves when we make assertions about the unknown and call it science. And we should not entertain and accept just one philosophy. We should be able to discuss them all.
If you do not buy into this philosophy of 21st century Pythagoreanism, then the best you can strive for is to have AI “simulate” actions that give the illusion it has sentiments and thoughts. A translation program does not understand Chinese. It “simulates” the illusion of understanding Chinese by finding probabilistic patterns. The algorithm does not understand what it is doing, does not know it is doing it, much less why it is doing it. In a non-Pythagorean worldview, AI is like a magician passing off tricks as magic. It’s all an illusion.
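To make the “probabilistic patterns” point concrete, here is a toy illustration (deliberately simplistic, and not how any real translation system works): a bigram model that predicts the next word purely from co-occurrence counts, with no representation of meaning anywhere in it.

```python
from collections import Counter, defaultdict

# A toy corpus; a real system would train on billions of sentences.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- pure pattern frequency, no meaning.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- chosen by counting, not comprehension
```

The model gives the right-looking answer while knowing nothing about cats, mats, or fish; scale the same trick up far enough and the illusion of understanding becomes very convincing.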
What if the AI church is all wrong, and biology will always be superior in intelligence while it is technology that is limited? I am not a Luddite saying we shouldn’t try to make “smarter” machines. But we need to set out to achieve small, reasonable goals focused on diverse and specific sets of problems… with equally diverse and specific solutions. In other words, let’s accept that it’s okay to create algorithms that are great at one task, rather than spin our wheels creating an algorithm that does everything (“jack of all trades, master of none” and all that). If not to prevent AI winters, sunk investments, and wasted careers, let’s do it for our own mental health and sanity.