In a recent article in Wired magazine, “The Three Breakthroughs That Have Finally Unleashed AI on the World,” the author, Kevin Kelly, claims that artificial intelligence (AI) is here and here to stay. He understands AI, apparently, as machine (specifically, computer) intelligence that will become more and more a part of our lives. He claims this final step in computer evolution has been made possible by three developments: cheap parallel computation, big data, and better algorithms. I won’t go into the details of Kelly’s article, but let me just note that he believes computers are or will be intelligent and conscious:
“As AIs develop, we might have to engineer ways to prevent consciousness in them—and our most premium AI services will likely be advertised as consciousness-free.”[1]
He goes on to claim, rather dramatically:
“But we haven’t just been redefining what we mean by AI—we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. We’ll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for…The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.”

Computers are information processing systems: they take in information, process it, and provide an output; and as technology has developed, they’ve been able to do this faster and faster, accessing larger and larger databases while taking up less and less space. What’s more, one way of looking at and understanding the human mind is to see it, similarly, as an information processing system: we take in information in the form of, say, sense perception, process it, and provide an output. (I see a piece of pie, realize that I’m hungry and that I love pie, and consequently I grab a fork.)
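To make the analogy vivid, here is a toy sketch, in Python, of that input-process-output picture, using the pie example; the function and its rule are my own illustration, not anyone’s actual model of the mind.

```python
# A toy sketch of the "mind as information processor" picture described
# above. Input comes in, a rule processes it, an output comes out. The
# names and the rule are illustrative only.

def decide(percept: str, hungry: bool) -> str:
    """Take in information (input), apply a rule (processing),
    and return an action (output)."""
    if percept == "pie" and hungry:
        return "grab a fork"
    return "do nothing"

print(decide("pie", hungry=True))  # -> grab a fork
```

On the “mind as information processor” view, a mind differs from this toy only in scale and sophistication; whether that view is right is exactly what’s at issue below.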
So the big question, since the dawn of the computer age, has been: Do computers and minds work the same way? Can computers think in the way that we do? Thinking is a conscious mental process, a matter of having subjective mental states. Consciousness is awareness of the environment and of one’s own thoughts, feelings, etc. (See my earlier post, “When Science Gets Stupid,” for more discussion of consciousness itself.)
Can computers have these kinds of mental states?
Can Computers Think?
The answer to the question is no: computers can’t think, they don’t have mental states, and they will never become conscious. I will draw upon (report, really) the work of the philosopher John Searle to justify this answer.
In his essay “Can Computers Think?,” in Minds, Brains, and Science, Searle gives a name to the position Kelly holds: “Strong AI,” the idea that the mind is to the brain as the computer program is to the computer. Searle says:
“This view has the consequence that there is nothing essentially biological about the human mind. The brain just happens to be one of an indefinitely large number of different kinds of hardware computers that could sustain the programs which make up human intelligence. On this view, any physical system whatever that had the right program with the right inputs and outputs would have a mind in exactly the same sense that you and I have minds. So, for example, if you made a computer out of old beer cans and powered by windmills; if it had the right program, it would have to have a mind.” [2]
This is a mistake, argues Searle. Thinking is a biological process, something that living creatures like ourselves do, no different in that regard from digestion. That is, thinking arises out of the natural, organic process of brain functioning, so the biology and composition of the thing that thinks is of great importance: it’s what makes thinking possible at all.

As Searle puts it so aptly, “simulation by itself never constitutes duplication.” [3] In other words, computers—given what they are—can simulate certain features of human cognition; but that doesn’t mean that the computers are duplicating human thought. Or, to put it another way, that Deep Blue beat Garry Kasparov in chess doesn’t prove that Deep Blue has a mind; it only proves that you don’t need a mind to win at chess.
To demonstrate his point, Searle came up with his Chinese Room thought experiment. Imagine, he says, that he’s in a room, and someone puts cards with Chinese characters on them through a slot. Searle doesn’t understand any Chinese at all (which is why he chose that particular language). His task is to take each character, look up in a book of rules the character that corresponds to it, and feed that second character back through the slot. In other words, the Chinese room works just like a computer: there is data input, a set of rules governing the processing of the data, and an informational output.
But here’s the kicker: Searle still has no understanding of the language or the characters whatsoever. Or, as he puts it, computers have syntax (they follow rules for manipulating symbols), but they have no semantics, no meaning. And semantics is essential to thought; meaning is what thought essentially is. He says:
“The whole point of the parable of the Chinese room is to remind us of a fact that we knew all along. Understanding a language, or indeed, having mental states at all, involves more than just having a bunch of formal symbols. It involves having an interpretation, or a meaning attached to those symbols. And a digital computer, as defined, cannot have more than just formal symbols because the operation of the computer…is defined in terms of its ability to implement programs. And these programs are purely formally specifiable—that is, they have no semantic content.” [4]
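To see how little the room’s “processing” involves, here is a minimal sketch of a Chinese-Room-style program; the rule book and symbols below are hypothetical illustrations of mine, not Searle’s own examples. The program pairs symbols with symbols purely by form; nothing in its operation involves what the symbols mean.

```python
# A minimal sketch of a Chinese-Room-style rule follower. The rule book
# is a made-up illustration. The program matches shapes against a table;
# at no point does any meaning (semantics) enter into its operation.

RULE_BOOK = {
    "你好": "您好",    # the program never "knows" these are greetings;
    "谢谢": "不客气",  # they are just shapes paired with other shapes
}

def chinese_room(card: str) -> str:
    """Return whatever symbol the rule book pairs with the incoming card."""
    return RULE_BOOK.get(card, "不懂")  # a fallback symbol, equally unexplained

print(chinese_room("谢谢"))  # outputs 不客气, with zero understanding
```

However large the rule book grows, nothing changes in kind: the matching remains purely formal, which is exactly Searle’s point about syntax without semantics.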

(I’ll note in passing that in a later work, The Rediscovery of the Mind, Searle goes on to argue that computers don’t even have syntax: “The ascription of syntactical properties is always relative to an agent or observer who treats certain physical phenomena as syntactical.”[5] In other words, because syntax isn’t a natural feature of things in the way that, say, mass is, anything can be described as if it were following rules: water running downhill, or a pen simply lying on the table. Consequently, that something is following rules (has a syntax) can be specified only from a third-person point of view, from the position of an observer. Thus, syntax isn’t even inherent to computer operations.)
Thinking is a Biological Process
None of this is to say that computers aren’t powerful and life-transforming for humans; it’s not to say that they haven’t come to dominate, and won’t continue to dominate, more and more of our lives. All of that is true. No, Searle’s point is that it’s a mistake to call what computers do thinking: computers don’t, can’t, and never will have mental states. Again, having mental states is a perfectly natural, biological function of certain kinds of creatures like ourselves.
Curiously, in “Can Computers Think?” Searle allows that it might in principle be possible to create something artificial that could in fact think; but whatever that thing would be, it wouldn’t be a computer, since it would have to have the causal powers of the brain. I say this is curious because it conflicts with what is, to my mind, his crucial point: that thinking is a biological function like digestion. In the essay, though, he doesn’t address this conflict.
In conclusion, I agree that we’re constantly trying to figure out what it means to be human, but this is part of the human condition, and not, as Kelly would have it, because computers can think. We don’t in fact need AI to tell us who we are; and it couldn’t tell us even if we wanted it to.
[1] Kevin Kelly, “The Three Breakthroughs That Have Finally Unleashed AI on the World,” Wired, October 2014, http://www.wired.com/2014/10/future-of-artificial-intelligence/.
[2] Searle, Minds, Brains, and Science, 28.
[3] Searle, Minds, Brains, and Science, 37.
[4] Searle, Minds, Brains, and Science, 33.
[5] Searle, The Rediscovery of the Mind, 208.