I recently wrote a post, “Has A.I. Really Arrived?” in which I disputed claims that computers can or will think (click here to read the original post). In that essay I wasn’t articulating anything original. I was repeating the arguments of the philosopher John Searle.
Searle calls the idea that programs are to computers as minds are to brains “Strong A.I.,” and he argues—to my mind, successfully—that computers aren’t the sorts of things that can think. In brief, he argues that computers have syntax, i.e., they follow rules; but they don’t have semantics: in the information processing they do, there is no meaning or understanding, and meaning and understanding are essential to thought.
In this post, I want to address the issue of why we would ever believe that computers could think in the first place.
Thinking is a Natural Process
Let’s note that no one is hot to claim that machines generally, or computers specifically, can digest an apple, pass gas, have a bowel movement, or take a leak. But many people in various disciplines are falling over themselves to try to prove that computers can and do, or will, think. But why is that?
One reason is that computers do something that looks like thinking, namely information processing. That is, they take in data as an input, process it according to their programs, and then provide an output. There’s a strong temptation to look at the way our minds work in the same way: our senses provide the data that the mind processes, and then the mind spurs some sort of an action as a behavioral output. (E.g., I hear you ask me to pass the butter. My mind processes that information—I understand the sentence. And then I react: I pass the butter to you.)
But we shouldn’t make too much of this comparison. First, as I noted above, Searle well argues that the kind of information processing that computers do involves following rules (it’s syntactical) but involves no understanding of the content of what’s being processed (there’s no semantics or meaning). Whereas our thought—in using language, for example—essentially involves meaning.
Second, as Searle also notes in a particularly apt phrase, “Simulation by itself never constitutes duplication.” That is, computers can simulate certain mental activities, but that by itself never amounts to duplication, actually thinking. As he notes in one of his examples, my computer can also simulate a rainstorm, but I don’t get wet from it.
But the reasons why we have a strong urge to believe that computers can think are much deeper, older, and more complex than this.
Before I get to that, though, consider this: thinking is a biological activity—something that living creatures do—no different in that regard from eating, digesting, farting, peeing, or having bowel movements. No different. It’s a completely natural process. (And it’s not specifically human, of course. Cats, dogs, monkeys, chickens, dolphins, etc., all do it.)
Human beings have been prone to two tendencies since the time of the ancients, a persistence that indicates those proclivities are quite deep-seated, if not hardwired.
The first is anthropomorphization: We have a strong tendency to treat non-human entities as if they had human characteristics. This is evident in the way that people tend to attribute thoughts and feelings to their pets that the pets aren’t capable of having. We also do it in offhand ways when we say things like, “the plant is thirsty,” when we just mean it needs to be watered; or “my computer hates me,” when it’s not functioning well.
This tendency seems to be a particular version of our need to make the unfamiliar and frightening into something familiar, understandable and thus less scary. The ancients clearly did this in personifying natural phenomena, or seeing them as the work of the fickle gods.
Our desire to endow our computers with the human capacity for thought (and to make Siri speak to us in a human voice, for example) is another instance of this tendency.
The second, and perhaps more important, tendency in our history is our strong propensity to think of ourselves as different from the rest of nature, somehow more noble, somehow elevated above nature.
Plato was one of the first to formalize this as philosophical doctrine. He has Socrates argue in the Phaedo that the body is a prison for the soul or mind, which will live on and perhaps be reincarnated after we die. Christianity, in part influenced by Plato, promulgated the idea of personal immortality. Again, the soul is something distinct from our material bodies, and after the body dies and decays, the soul will live on somehow.
One reason for all this is our fear of death, of course, but another important reason is that the part of us that distinguishes us from other kinds of animals—our gigantic brains and our capacity for complex and abstract thought—is the part of us that we have the most difficult time understanding as natural. Consequently, we think of it as non-physical, non-material, something otherworldly that must be separate from our physical bodies. (Click here to read my post regarding the difficulty of the problem of consciousness.)
A great example of this is Descartes and his metaphysical dualism (and we shouldn’t forget that Descartes was a Christian). The body is an extended thing, says Descartes, something material existing in space; and the mind is a completely different substance. It’s non-physical, aspatial, etc., such that it’s not subject to the same alterations and corruption as the body.
It took revolutionary philosophers like Hume, and, following him, Nietzsche, to convince us of something we tried so desperately to deny: we’re animals, and as such we’re a part of nature. Both Hume and Nietzsche (as well as a number of other great thinkers) devoted themselves to the project of re-integrating humanity into nature, to shedding the antiquated idea of ourselves as somehow different, and to understanding ourselves in completely naturalistic terms.
The project of Strong AI is one of the last vestiges of metaphysical dualism; it’s akin to the religious belief that thinking is somehow non-natural, somehow not connected to our animal natures, something distinct from the functioning of our bodies, such that something else that functioned completely differently—something inorganic [!], in fact—could think.
The next time someone says computers can think, ask them whether computers can also pee and poop and digest food. When they scoff, tell them that thinking, like all those things, is a naturally functioning process of organic bodies.