A common misconception about the Chinese Room Argument

The “Chinese Room Argument” is one of the most famous bits of philosophy among computer scientists. Until recently, I thought the argument went something like this: Imagine a room containing a person with no Chinese language proficiency and a very sophisticated book of rules. Occasionally, someone slides a piece of paper with a sentence in Chinese written on it under the door. The room’s inhabitant (let’s call them Clerk) uses the rulebook to look up each Chinese character and pick out the appropriate characters to form a response, which they slide back under the door. Clerk has no idea what the characters or the sentences they form mean, but with a sufficiently sophisticated rulebook, it would look to outside observers like Clerk was conversant in written Chinese. The conclusion of the argument was that even if we were to create an AI system that could pass a Turing test in Chinese (or any other language), that wouldn’t be sufficient to conclude that it actually understands Chinese. Understanding, here, means something like conscious awareness of what the Chinese characters mean and what is being said with them.
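To make the symbol shuffling concrete, here is a toy sketch in Python of what Clerk is doing. The rulebook entries below are hypothetical stand-ins (a rulebook capable of passing a real Turing test would be astronomically larger); the point is only that the procedure matches and copies symbols, and nothing in it represents what the characters mean.

```python
# Toy sketch of the Chinese Room: the "rulebook" is just a mapping from
# incoming notes to canned responses. The entries are hypothetical stand-ins;
# a rulebook that could pass a Turing test would be astronomically larger.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's nice today."
}

def clerk(note: str) -> str:
    """Match the incoming note against the rulebook and copy out the listed reply."""
    # Clerk never interprets the characters; they only compare shapes and
    # transcribe whatever the rulebook says to write back.
    return RULEBOOK.get(note, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(clerk("你好吗？"))  # prints 我很好，谢谢。
```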

It turns out that this conclusion is quite different from what Searle, who originally proposed the Chinese Room thought experiment, intended.1 Searle wasn’t trying to argue that consciousness is difficult or impossible to detect in machines; he was arguing that it is impossible for a digital computer to be conscious at all.2 To understand why, consider this version of the thought experiment: someone sends me a program that they claim passes the Turing test. I take the assembly code for this program, print it into a giant manual, and shut myself up in an unused basement in Cory Hall. When a piece of paper with some writing on it is slid under the door, I use the manual to pick out the responses, just as Clerk did before. In this fashion, I essentially become the computer running the program. But just like Clerk, I’m not conscious of the meaning of the sentences I receive as input or the reasoning behind selecting one output over another. This means (according to Searle) that a regular computer running this code also couldn’t be conscious of these things, even if it does pass the Turing test. Therefore, no matter how sophisticated a program is, the computer running it won’t achieve consciousness. Searle sums up this viewpoint by saying: “Symbol shuffling… does not give any access to the meanings of the symbols.”
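One way to picture “becoming the computer” is hand-executing a program one instruction at a time. The sketch below is purely illustrative (the three-instruction machine and the program are made up, not Searle’s example): every step could be carried out with pencil and paper, and none of them requires knowing what the values stand for.

```python
# Minimal sketch of "becoming the computer": hand-executing a made-up
# three-instruction machine. Each step is purely mechanical and could be
# carried out with pencil and paper, with no idea what the values mean.

PROGRAM = [
    ("LOAD", "r0", 2),      # put the literal 2 into register r0
    ("LOAD", "r1", 3),      # put the literal 3 into register r1
    ("ADD", "r0", "r1"),    # r0 <- r0 + r1
    ("PRINT", "r0", None),  # write out whatever is in r0
]

def run(program):
    registers = {}
    for op, a, b in program:
        if op == "LOAD":
            registers[a] = b
        elif op == "ADD":
            registers[a] = registers[a] + registers[b]
        elif op == "PRINT":
            print(registers[a])

run(PROGRAM)  # prints 5
```

Scaling this up to a Turing test-passing program changes the size of the manual, not the nature of the steps; whether that difference in scale matters is exactly where the disagreement lies.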

Since humans are conscious and do have access to meanings, Searle believed that there is something special about the brain over and above any digital computer. He is commonly quoted as saying “brains cause minds” (i.e. the “software” running on the brain doesn’t create a mind, at least not by itself – something about the physical brain itself is critical). This stronger conclusion is unsurprisingly not widely accepted among AI researchers, who generally believe that a digital computer (perhaps a very powerful one) running the right kind of software could achieve understanding and consciousness. 

Most philosophers also seem to object to the Chinese Room Argument. One criticism of Searle’s argument is so well-known that it gets its own name: the “systems response.” This argument accepts Searle’s assumption that Clerk wouldn’t understand Chinese simply by manipulating symbols, but notes that we can’t logically conclude from this that the system that includes both Clerk and the rulebook doesn’t understand Chinese. Searle appears to struggle to take this objection seriously – how could a rulebook understand anything?3

The systems response seems a little less absurd when we consider just how sophisticated the rulebook would have to be to pass a serious Turing test. Imagine a human judge asks a computer to explain a bad joke. The computer might respond that explaining jokes ruins them, but, when pressed, lay out the cultural context of the joke and why the punchline is amusing.4 A rulebook that could exhibit this kind of behavior would have to be unimaginably complex! The problem with the Chinese Room Argument is that it invites us to imagine manipulating symbols according to a (say) dictionary-sized rulebook, then extrapolate our intuition about that scenario to the wondrously complex software that would be needed to exhibit human-like mastery of language. If you seriously consider just how far this extrapolation has to go, it’s reasonable to doubt whether the simple dictionary-rulebook case tells us anything at all about a program that passes the Turing test.

While the systems response and other criticisms make it difficult to take too seriously Searle’s conclusion that brains must have a special “consciousness sauce” missing in digital computers, they also don’t establish the other extreme, namely that a Turing test-passing program really would understand language (or be conscious). Therefore, the conclusion I’m left with is quite similar to my original misunderstanding of the Chinese Room Argument: computers may or may not achieve consciousness someday, but knowing for sure whether a future computer thinks or understands may not be possible.