The Chinese Room Thought Experiment
Imagine that you carry out the steps in a program for answering questions in a language you do not understand. I do not understand Chinese, so I imagine that I am locked in a room with a lot of boxes of Chinese symbols (the database), I get small bunches of Chinese symbols passed to me (questions in Chinese), and I look up in a rule book (the program) what I am supposed to do. I perform certain operations on the symbols in accordance with the rules (that is, I carry out the steps in the program) and give back small bunches of symbols (answers to the questions) to those outside the room. I am the computer implementing a program for answering questions in Chinese, but all the same I do not understand a word of Chinese. And this is the point: if I do not understand Chinese solely on the basis of implementing a computer program for understanding Chinese, then neither does any other digital computer solely on that basis, because no digital computer has anything I do not have.
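To make the purely syntactic character of the procedure concrete, here is a minimal sketch in Python. The rule book, the particular symbols, and the question-answer pairs are all invented for illustration; Searle specifies no actual rules. What the sketch shows is that the program relates symbol shapes to symbol shapes and nowhere represents what any symbol means.

```python
# A toy "rule book": shape-matching rules from input symbol strings to
# output symbol strings. The entries are hypothetical examples; the program
# never represents what any symbol means, only which shapes map to which.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def room(question: str) -> str:
    """Carry out the program: match the incoming symbols against the rules
    and hand back whatever symbols the rule book dictates."""
    # Fallback symbols for unmatched input: "Sorry, I don't understand."
    return RULE_BOOK.get(question, "对不起，我不明白。")

if __name__ == "__main__":
    print(room("你好吗？"))  # fluent output; zero understanding inside the room
```

However large the rule book grows, nothing changes in kind: the lookup is still shape matching, which is exactly the first premise of the argument below.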
This is such a simple and decisive argument that I am embarrassed to have to repeat it, but in the years since I first published it there must have been over a hundred published attacks on it, including some in Daniel Dennett’s *Consciousness Explained*....

The Chinese Room Argument, as it has come to be called, has a simple three-step structure:
- Programs are entirely syntactical.
- Minds have a semantics.
- Syntax is not the same as, nor by itself sufficient for, semantics.
Therefore programs are not minds. Q.E.D.
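For readers who want the logical skeleton spelled out, here is a minimal formalization in Lean 4. The predicate names are mine, not Searle's, and premise 3 is rendered in its sufficiency half only, namely that nothing has semantics merely in virtue of being purely syntactical. Under that reading the conclusion follows by ordinary predicate logic.

```lean
-- A sketch, not Searle's own notation: Entity and the four predicates are
-- hypothetical names introduced only to state the argument.
variable (Entity : Type)
variable (IsProgram IsMind HasSemantics PurelySyntactic : Entity → Prop)

-- Premise 1: programs are entirely syntactical.
-- Premise 2: minds have a semantics.
-- Premise 3 (sufficiency half): syntax by itself does not yield semantics.
example
    (p1 : ∀ x, IsProgram x → PurelySyntactic x)
    (p2 : ∀ x, IsMind x → HasSemantics x)
    (p3 : ∀ x, PurelySyntactic x → ¬ HasSemantics x) :
    ∀ x, IsProgram x → ¬ IsMind x := by
  intro x hProg hMind
  exact p3 x (p1 x hProg) (p2 x hMind)
```

The formalization shows only that the inference is valid given the premises; whether premise 3 is true is, of course, the contested question.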