The Chinese Room
As you, O Imaginary Reader, surely noted, I recently read and enjoyed On Intelligence by Jeff Hawkins. I liked his conclusions and models of the brain and such. One thing I didn’t agree with was the author’s take on – what in the scope of the book is a rather minor detail – the Chinese Room thought experiment.
In short, the Chinese Room is a hypothetical dealing with the question of the nature of understanding. The premise is as follows:
A man sits in a room. He himself has no understanding of the Chinese language. With him he has a gargantuan rule book, an infinite scratch pad and a pen. He also has paper on which to write his output. The work consists of answering – in Chinese – questions about a text written in Chinese. The text and the questions are posted through a slot in the wall or something. Point is, whoever posts the input has no idea what occurs inside the room.
Upon receiving the Chinese text, the man gets to work. His rule book guides him; for every Chinese character he encounters, he follows the rules of the book. Sometimes it directs him to make a note on his scratch pad, or to jump to a different section of the rule book, depending on what is written on his scratch pad. Other times it tells him to erase something, and sometimes to add something to the output paper.
When the book tells him he’s done, he posts the output paper through the hole in the wall.
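The man's procedure is essentially mechanical symbol manipulation: read a character, look up a rule, update the scratch pad, maybe write output. As a minimal sketch of that loop – with a toy rule book I've invented purely for illustration, nowhere near what answering real questions would require – it might look like this in Python:

```python
# A toy "rule book": maps (current section, input character) to an action.
# The entries here are made up for illustration; a real book capable of
# answering questions in Chinese would need astronomically many rules.
RULE_BOOK = {
    ("start", "你"): ("note", "greeting seen", "q1"),
    ("q1", "好"): ("emit", "好", "start"),
}

def run_room(text):
    """Follow the rule book over the input; no understanding required."""
    section = "start"    # which section of the rule book we're in
    scratch_pad = []     # notes the book tells us to keep
    output = []          # characters destined for the output paper
    for char in text:
        action = RULE_BOOK.get((section, char))
        if action is None:
            continue     # the book has nothing to say about this; move on
        kind, payload, next_section = action
        if kind == "note":
            scratch_pad.append(payload)   # jot something down
        elif kind == "emit":
            output.append(payload)        # add to the output paper
        section = next_section            # jump where the book directs
    return "".join(output)
```

The point the sketch makes is the same as the thought experiment's: `run_room` produces its output by pure lookup and bookkeeping, with nothing in the loop that "knows" what any character means.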
The person outside the room who posted the text and the questions reads the answers, which the rule book guarantees are correct. If you asked this outside person whether whoever is inside the room understands Chinese, they would probably answer in the affirmative; the answers were correct, perhaps even insightful.
The question now is: has understanding occurred? The man inside the room never understood Chinese; he just followed rules and data. But to the person outside, it seems as if the answers were the products of an understanding entity. Depending on the rule book, they might even be witty or philosophical.
This is sometimes used as an argument against conventional AI: that a computer, following however many rules, is still just a machine (whatever that means) going through the motions. Hawkins kinda sorta took that stance too: that understanding hadn’t occurred, only the illusion thereof.
That position never made much sense to me. Sure, the man didn’t understand Chinese. Neither did the rule book. He was indeed just “going through the motions”, doing things he didn’t understand for reasons unclear to him. Following a rule book, unthinkingly.
But if understanding didn’t occur because the individual parts of the Room didn’t understand Chinese, then it follows that understanding never occurs in the human mind either, because the brain could be seen as a complex cluster of molecules just “going through the motions”, following the rule book that is the laws of nature.