
Chinese Room Argument in Artificial Intelligence

Last Updated : 09 Mar, 2023

Introduction:

The Chinese Room Argument is a philosophical thought experiment that challenges the idea that artificial intelligence can truly understand language and have genuine intelligence. The argument was proposed by philosopher John Searle in 1980 and is named after a room in which a person who doesn’t understand Chinese is able to answer questions in Chinese by following a set of instructions.

The argument goes like this: imagine a person who doesn’t understand Chinese is placed in a room with a set of instructions in English for manipulating Chinese symbols. The person receives questions in Chinese through a slot in the door and uses the instructions to produce a response in Chinese, which is then passed back through the slot. From the outside, it appears as though the person understands Chinese and is able to answer questions, but in reality, the person is just following a set of rules without actually understanding the meaning of the symbols.

Searle argues that this thought experiment demonstrates that a computer program that simulates human understanding of language, such as a chatbot, does not truly understand the meaning of the language it is processing. The program is just following a set of rules without actually understanding the meaning of the language.

The Chinese Room Argument has been controversial in the field of artificial intelligence, with some arguing that it is flawed and others using it to challenge the concept of machine intelligence. It highlights the ongoing debate about the nature of intelligence and whether machines can truly replicate human thought and understanding.

When we ask, ‘Is artificial intelligence (AI) possible?’, we are really asking, ‘Can we create consciousness in computers?’

The Chinese Room Argument holds that a program cannot give a computer a “mind”, “understanding”, or “consciousness”, regardless of how intelligently or human-like the program may make the computer behave. [Source: Wikipedia]

In 1980, John Searle argued that the Turing Test could not be used to determine whether a machine is intelligent in the way humans are. He argued that programs like ELIZA and PARRY could easily pass the Turing Test simply by manipulating symbols they had no understanding of. Without understanding, they could not be described as “thinking” in the same sense that people do.


Searle imagines himself (instead of a machine) as a non-Chinese speaker sitting inside a room, isolated from a Chinese speaker outside the room who is trying to communicate with him. He is provided with a list of Chinese characters and an instruction book explaining in detail the rules according to which strings (sequences) of characters may be formed, but without giving the meaning of the characters. In effect, he has a book containing an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets.


In this thought experiment, a person in the “Chinese room” is passed questions from outside the room and consults a library of books to formulate an answer.
Now he receives messages, written in Chinese, posted through a slot in the door. He processes the symbols according to the program’s instructions and produces Chinese characters as output, for example:

  • If he finds a given Chinese symbol in the input, the book tells him which Chinese symbol to write in response. 
  • If he finds a given sequence of Chinese symbols, the book tells him which sequence of symbols to return. 

In fact, the instruction book contains a great many such rules, each pairing an input symbol with its corresponding output symbol. He just needs to locate the input Chinese symbol and return the corresponding Chinese symbol as output. 
Now, the argument goes, a computer (machine) is just like this man: it does nothing more than follow the rules given in an instruction book (the program). It does not understand the meaning of the questions given to it, nor of its own answers, and thus cannot be said to be thinking. The point is that the person inside has no understanding of the Chinese language, yet still manages to communicate with the person outside in Chinese perfectly. 

Compare Searle’s man with the machine in the Turing Test: the machine may have a huge database of questions and their answers. When an interrogator asks a question, the machine simply locates the question in the database and returns the corresponding answer. From the outside, the whole scenario looks as if a human, not a machine, is returning the answers. 
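The rule-following described above can be sketched as a pure lookup table. This is only an illustrative toy (the `chinese_room` function, the romanized entries, and the fallback reply are invented for this sketch, not taken from Searle’s paper): the program maps input strings to output strings with no representation of meaning at all, which is exactly what the argument says is insufficient for understanding.

```python
# A minimal sketch of the "instruction book": a purely syntactic
# lookup table mapping input messages to canned responses.
# All entries are hypothetical placeholders.
RULEBOOK = {
    "ni hao": "ni hao",                   # greeting in -> greeting out
    "ni jiao shenme": "wo jiao Searle",   # "what is your name?" rule
}

def chinese_room(message: str) -> str:
    """Return the rulebook's answer for a message.

    The function never parses, translates, or represents what the
    strings mean; it only matches symbols to symbols, like the man
    in the room.
    """
    return RULEBOOK.get(message, "wo bu dong")  # fallback reply

print(chinese_room("ni hao"))  # the room "answers" without understanding
```

To an outside observer who only sees inputs and outputs, this program is indistinguishable from a (very limited) speaker, which is precisely the intuition the thought experiment is designed to undermine.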

Hence the machine in this configuration has no understanding of those questions and answers. Without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking”, and since it does not think, it does not have a “mind” in anything like the normal sense of the word. Therefore, Searle concludes, we cannot consider the machine intelligent. 


