The Chinese Room Thought Experiment

Playing these kinds of games requires no insight per se; there are rules, and algorithms can be written. It's not a question of claiming that computers aren't going to be powerful enough, or that the limitation is because we are complex (well, some people like to think they're complex!). What Sir Roger said was that no matter how powerful a digital computer / Turing machine becomes, it cannot, in principle, ever replicate 100% the human mind, because at the end of the day all it can do is run algorithms - insanely fast maybe, but that won't cut the mustard.
 
What is it about the human mind that (as I understand your argument) cannot be replicated by algorithms?
 

It's not my argument; I'm not familiar enough with the details of what Sir Roger was saying to stand in his place here. But I do encourage those who are interested to read his book. I remember it as an interesting read, with some other nuggets you might find equally fascinating.

Anyhow, the 'argument' as I understand it goes like this:

If the brain were a computer, its powers would be limited to what can be computed. Turing was a clever guy who looked in detail at the basic operations of computers in general. He examined the fundamental abilities of computers; he made no assumptions about the power, speed or other performance attributes of computers, or about how they are made and powered, but considered only the fundamental capabilities of a generic computer. Therefore, his results apply to any classical computer, whether in the past, the present or the distant future.

Turing showed that every possible computation can be precisely specified by a recipe consisting of a sequence of simple steps. This is analogous to the man in the Chinese room following instructions in a book. Such a sequence of steps is called an algorithm; all computer programmes are algorithms. Anything that can be accomplished with an algorithm can in principle be accomplished eventually by a computer. Anything that cannot be accomplished by an algorithm cannot in principle ever be accomplished by a computer, no matter how powerful it is.
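To make "a sequence of simple steps" concrete, here is a toy sketch (my illustration, not from the book or the thread) of a machine that increments a binary number on a tape; every step only reads one cell, writes one cell and moves the head, yet the recipe accomplishes the computation:

```python
# Toy "recipe of simple steps": increment a binary number on a tape.
# Each step reads one cell, writes one cell, and moves the head.
def increment_binary(tape: list) -> list:
    head = len(tape) - 1              # start at the least significant bit
    while head >= 0 and tape[head] == "1":
        tape[head] = "0"              # 1 -> 0 and carry, move left
        head -= 1
    if head >= 0:
        tape[head] = "1"              # absorb the carry
    else:
        tape.insert(0, "1")           # the number grew by one digit
    return tape

print(increment_binary(list("1011")))  # ['1', '1', '0', '0']  (11 -> 12)
```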

There was another clever chap called Gödel who, like most famous mathematicians, had his own theorem; it is called Gödel's Incompleteness Theorem. I don't understand his theorem in depth. It showed that no algorithm for proving mathematical truths can prove them all. This means that there are mathematical truths that are known to us but cannot be proven by a computer. We have arrived at these mathematical truths through human insight.

Penrose reasons that since there are mathematical truths that we have discovered which we can prove are not discoverable by an algorithm, there are mathematical truths we can discover that computers cannot, regardless of how powerful they are. In other words, there are things we can do which computers cannot, and this is 'proven'. And so a computer cannot completely reproduce the capabilities of the human mind.
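A concrete example of something provably beyond any algorithm, whatever the hardware, is Turing's halting problem (my example here - Penrose's own argument runs through Gödel instead). The sketch below assumes a hypothetical halts() oracle and derives the contradiction:

```python
# Turing's diagonal argument in sketch form. halts() is hypothetical:
# the whole point is that no such function can ever be written.
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) would halt."""
    raise NotImplementedError("provably impossible to implement")

def paradox(program):
    # Do the opposite of whatever halts() predicts about us.
    if halts(program, program):
        while True:
            pass                      # predicted to halt: loop forever
    # predicted to loop: halt immediately

# paradox(paradox) halts if and only if it doesn't halt - a
# contradiction, so no algorithm halts() can exist.
```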

Penrose further offered a way out of this… that quantum physical processes may be able to go beyond what can be accomplished by a classical computer - beyond algorithmic computing. His book explores this a little further, but obviously without any proof, since science does not yet know how the brain works in detail.



If a computer or robot could be taught to recognize itself in a mirror, would it be self-aware? That should not be too hard a task for good A.I.

I suspect it's quite easy to program a robot to recognize itself - as in, be able to set a flag in memory to indicate that an image it captures matches a stored reference.
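As a minimal sketch of that "flag in memory" idea (function names and the match threshold are illustrative, not any real robot's code):

```python
# Compare a captured frame with a stored reference image and set a
# flag on a match. Threshold and names are illustrative only.
import numpy as np

def looks_like_me(captured: np.ndarray, reference: np.ndarray,
                  threshold: float = 10.0) -> bool:
    """True if the captured frame closely matches the stored reference."""
    if captured.shape != reference.shape:
        return False
    # Mean absolute pixel difference; small means "same image".
    diff = np.mean(np.abs(captured.astype(float) - reference.astype(float)))
    return diff < threshold

# The robot's "self-recognition" is then just a stored boolean:
# self_recognized = looks_like_me(camera_frame, stored_self_image)
```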
 
Is consciousness just a level of complexity?

Undoubtedly…

The principle of life is inherently an electromagnetically induced phenomenon; hence only its complexity differentiates it from more common electromagnetic processes, which statistically occur orders of magnitude more often. Complexity is the single difference, with occurrence inversely proportional (not linearly, of course) to complexity.
 
I suspect it's quite easy to program a robot to recognize itself - as in, be able to set a flag in memory to indicate that an image it captures matches a stored reference.

Yes, compare an image. However, when we look in a mirror while wearing a mask we know it's us, so what are we comparing? (Is this me? Who is "me"?) This is not my face but a reflection of me in a mask; is it my consciousness looking at me?

So is this a reflection of my mind or a reflection of my image? How do I know the reflection is not really "me", i.e. that it is a reflection, not me looking at me?
Or a person who looks like me, looking at me?
I would know that an exact copy is not me; it is something else. A mannequin is not a human, etc.

When you look in a mirror, is the image what you expect to see? Yes, it sounds nuts, but we are in constant change. A guy once said to me that when he looks in the mirror he thinks, "Who is that old man looking at me?" In his mind he has a personal image of himself, and it's not old.

Regards
M. Gregg
 
Bigun's summary (post 28) of what Penrose said fits with my recollection of the book.

If we can do things which cannot be done by any algorithm, then our minds are more than software running on a biological computer. However, it may be that some of the things we do non-algorithmically can also be done by a clever algorithm - playing chess may be an example. The external phenomena (chess moves) may look identical; this is a potential flaw in the Turing test as it can only confirm the external appearance of intelligence.
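For a feel of what "a clever algorithm" amounts to in game playing, here is a bare-bones minimax sketch (my illustration; the moves/apply_move/score callbacks stand in for game-specific rules and are assumptions, not anything from the thread):

```python
# Plain minimax over a generic game tree. Real chess engines add many
# refinements, but the mechanical core is exactly this recursion.
def minimax(state, depth, maximizing, moves, apply_move, score):
    """moves/apply_move/score are game-specific callbacks (assumed)."""
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state)           # evaluate a leaf position
    values = [minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, score) for m in legal]
    return max(values) if maximizing else min(values)
```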

Two non-algorithmic 'methods' we use for creative thinking in science:
1. using analogies - can, of course, be misleading when pressed too hard but often lead to insights; Maxwell was using mechanical analogies (which turned out to be complete nonsense) when he was developing his theory of electromagnetism.
2. inspiration - an idea simply seems to 'arrive' in our minds - benzene ring? DNA double-helix?

One interesting test to apply is precision: does less precision require less or more effort? Ask a computer to draw 5 parallel lines on a piece of paper and you need more effort (more programming) to get the lines slightly random or wonky. Ask a person to draw 5 parallel lines and you need more effort to get them less wonky.
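The point can be made literal in code. In this rough sketch (matplotlib, with arbitrary jitter parameters), the perfectly straight lines are the no-effort default and the wobble is the part that needs extra programming:

```python
# Straight, parallel lines are the computer's default; the "wonky"
# versions need the extra code that adds the random wobble.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 10.0, 50)
for i in range(5):
    plt.plot(x, np.full_like(x, i), color="black")      # effortless
    # The *extra* effort: hand-drawn-style wobble.
    wobble = np.cumsum(np.random.normal(scale=0.02, size=x.size))
    plt.plot(x, i + 0.4 + wobble, color="gray")
plt.show()
```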
 
Firstly, my apologies. I hadn't read this thread thoroughly when I posted earlier. Specifically, I didn't read this post - I should have responded here:
what we have here is the Turing test where the man in the Chinese room is a stand-in for a microprocessor: Turing test - Wikipedia, the free encyclopedia

And, wasn't it Sir Roger Penrose who claims to have shown there is great doubt that it will ever be possible for a Turing machine (digital computer) to perfectly simulate a human mind? There are mathematical problems that can be proven to be impossible to solve using an algorithm and have only been solved through human insight - this is an underlying tenet of his claim. Read the book "The Emperor's New Mind".
I read the book many years (okay, it was literally decades) ago when it first came out. Somehow I don't recall the claim I bolded above (obviously I should reread the book!), but you explain it adequately below.

I recall his claim that neurons are affected by quantum mechanical interactions or fluctuations, but he didn't say quite how. I've read a claim from at least one critic of the book that such quantum mechanical actions are too small and have too little energy to affect the operation of neurons.
And another implication from his work is that the physical mechanisms that enable the human mind are therefore not algorithmic and our brains are not equivalent to a Turing machine - the only alternative is that there are quantum-level processes at play which do not operate on an algorithmic basis, and so operate outside the bounds of the Chinese room, where instructions in a book are to be followed.
Yes, this is much more like what I remember (and just described above). Even if THIS were true, I see no reason this couldn't be emulated by sensors detecting the atomic decay of a radioactive substance, with the sensors connected to the computer. The computer could then be "non-algorithmic."

Such random input could perhaps even be made with a pseudo-random number generator (an algorithm that generates numbers that, while not TRULY random, pass statistical tests of randomness) as part of the computer's programming.
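For illustration, such a generator is only a few lines - e.g. a linear congruential generator with the well-known "Numerical Recipes" constants (my sketch; fully deterministic, yet its output looks statistically random):

```python
# A linear congruential generator: a pure algorithm whose output
# nevertheless passes many statistical tests of randomness.
def lcg(seed: int):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state / 2**32           # scale into [0, 1)

gen = lcg(seed=42)
print([next(gen) for _ in range(5)])  # deterministic "randomness"
```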
self consciousness

Is that the same as self-awareness?

If a computer or robot could be taught to recognize itself in a mirror, would it be self-aware? That should not be too hard a task for good A.I.

If not, why not?
There was this story earlier in the year about this specific thing - at least one "mainstream" news report misinterpreted it as saying the robot had "consciousness" or "could think for itself" much as a human does:
Welcome to Yale University Graduate School of Arts & Sciences
I didn't find offhand the specific news report or the story (by someone more knowledgeable than the news reporter) pointing out its error.

So of course good definitions of these things are vitally important.

I've done some other reading in Artificial Intelligence over the decades. One story I recall (this might have been in "What Computers Can't Do") was how some anti-AI person was yelling at the others about how a computer couldn't possibly simulate human emotion. An AI proponent responded that the yelling person "displayed quite a good simulation of anger."

What is exceedingly difficult is recognizing "true consciousness" or "true thought" in others.
Bigun said:
Penrose reasons that since there are mathematical truths that we have discovered which we can prove are not discoverable by an algorithm, there are mathematical truths we can discover that computers cannot, regardless of how powerful they are. … And so a computer cannot completely reproduce the capabilities of the human mind.
Now I see the argument. It's interesting (I'm somewhat familiar with both Turing's and Gödel's work, but don't recall Penrose putting them together in this way), but I'm not convinced, and I'm not sure why - there's something I find unsatisfying about it.
I'm reading "he latest popular book on the topic, Ray Kurzweil's "How To Create A Mind." So far it has similarities to Marvin Minsky's "The Society of Mind."
I suspect it's quite easy to program a robot to recognize itself - as in, be able to set a flag in memory to indicate that an image it captures matches a stored reference.
Of course, a bigger challenge is to recognize that it really is a mirror image of itself, as opposed to another identical robot set up in a fixed position, posing in an approximate mirror image. But it need only detect its movements being reflected instantly in a complementary way.
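A hedged sketch of that movement test (move_arm and observe_motion are hypothetical robot and sensor hooks, not a real API): command random wiggles and check that the observed motion tracks them almost perfectly:

```python
# Wiggle an actuator with an unpredictable pattern and test whether
# the motion seen in the mirror correlates with what was commanded.
import random

def mirror_self_test(move_arm, observe_motion, steps: int = 20) -> bool:
    commanded, observed = [], []
    for _ in range(steps):
        delta = random.uniform(-1.0, 1.0)   # unpredictable wiggle
        move_arm(delta)
        commanded.append(delta)
        observed.append(observe_motion())
    # Pearson correlation between commanded and observed motion.
    n = len(commanded)
    mc, mo = sum(commanded) / n, sum(observed) / n
    cov = sum((c - mc) * (o - mo) for c, o in zip(commanded, observed))
    sc = sum((c - mc) ** 2 for c in commanded) ** 0.5
    so = sum((o - mo) ** 2 for o in observed) ** 0.5
    if sc == 0 or so == 0:
        return False
    return cov / (sc * so) > 0.95     # "that's me in the mirror"
```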
 
Is "understanding" necessary for the phenomenon to have taken place?

Sorry to react so late: I don't think 'understanding' was claimed. To the external experimenter it *looked* as if the man/machine/whatever inside the room 'understood'.
That is the crux of the Turing test: it is not possible to know whether the actions are performed by a human being or by a machine; the machine's process is (for us) indistinguishable from human reasoning. However, the claim is not that the machine therefore has human intelligence. A small but important distinction.

jan
 
benb said:
I recall his claim that neurons are affected by quantum mechanical interactions or fluctuations, but he didn't say quite how.
It wasn't really a claim, more like a proposal. As I said, he was thinking out loud.

Even if THIS were true, I see no reason this couldn't be emulated by sensors detecting the atomic decay of a radioactive substance, with the sensors connected to the computer. The computer could then be "non-algorithmic."
I don't remember the details, but he was suggesting rather more than merely using Monte-Carlo techniques - they are used already in computation. It was things like superposition, which disappear as soon as you start doing classical things.
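For context, this is the kind of classical Monte-Carlo computation that is "used already" - e.g. estimating pi by random sampling (my sketch; nothing quantum about it):

```python
# Classical Monte-Carlo: estimate pi from random points in the unit
# square. Randomized, but still entirely algorithmic.
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

print(estimate_pi())  # approaches 3.14159... with more samples
```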

I'm reading "he latest popular book on the topic, Ray Kurzweil's "How To Create A Mind." So far it has similarities to Marvin Minsky's "The Society of Mind."
I believe Minsky belongs to the school of thought which Penrose was criticising.
 
His (Gödel's) theorem showed that no algorithm for proving mathematical truths can prove them all. *This means that there are mathematical truths that are known to us but cannot be proven by a computer. We have arrived at these mathematical truths through human insight.*

Bigun, is the italic part your interpretation or Gödel's?

jan
 
I think the original statement is something like: any self-consistent system of mathematics must contain statements whose truth we accept and rely on, yet which cannot be proved from the basic axioms of the system using methods contained within the system.
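In its standard modern form (my paraphrase in symbols, not Gödel's original wording nor necessarily Penrose's), the first incompleteness theorem reads:

```latex
% First incompleteness theorem, standard modern paraphrase:
% for any consistent, effectively axiomatized theory T that
% interprets basic arithmetic, there is a sentence G_T with
T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T
```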

Interesting that the 20th century was the one which established limits to our knowledge: maths we can't prove, event horizons which hide information, quantum uncertainty which is more than merely lack of knowledge. Before that it was thought that we could, at least in principle although not in practice, know everything we wanted to know.
 
This is no experiment. The writer of the book, who is the one who set up the room and the people, is where the understanding occurs. Put another way, the understanding occurs outside of the situation itself. This is probably easier to understand if you grew up with computers or understand system design well.
 