
diyAudio (https://www.diyaudio.com/forums/index.php)
-   The Lounge (https://www.diyaudio.com/forums/the-lounge/)
-   -   The Chinese Room Thought Experiment (https://www.diyaudio.com/forums/the-lounge/173198-chinese-experiment.html)

DF96 18th December 2012 10:50 PM

Yes, Penrose was to some extent just 'thinking out loud' in that book, but I think he is right that genuine creative thinking involves more than just running an algorithm. What that 'more' consists of is still not known, yet we can all do it.

Pano 18th December 2012 10:56 PM

I remember once reading a magazine article about a fellow who had built a computer to play tic-tac-toe (noughts & crosses). He claimed that computers would never be able to play chess; it was just too complex.

Bigun 18th December 2012 11:06 PM

playing these kinds of games requires no insight per se; there are rules, and algorithms can be written. It's not a question of claiming that computers aren't going to be powerful enough, or that the limitation is because we are complex (well, some people like to think they're complex!). What Sir Roger said was that no matter how powerful a digital computer / Turing machine becomes, it cannot, in principle, ever replicate 100% of the human mind, because at the end of the day all it can do is run algorithms - insanely fast maybe, but that won't cut the mustard.

Pano 18th December 2012 11:08 PM

Yeah, that's what the fellow thought about computers playing chess. Is consciousness just a level of complexity?

benb 18th December 2012 11:48 PM

Quote:

Originally Posted by Bigun (https://www.diyaudio.com/forums/the-lounge/173198-chinese-experiment-post3290702.html#post3290702)
playing these kinds of games requires no insight per se; there are rules, and algorithms can be written. It's not a question of claiming that computers aren't going to be powerful enough, or that the limitation is because we are complex (well, some people like to think they're complex!). What Sir Roger said was that no matter how powerful a digital computer / Turing machine becomes, it cannot, in principle, ever replicate 100% of the human mind, because at the end of the day all it can do is run algorithms - insanely fast maybe, but that won't cut the mustard.

What is it about the human mind that (as I understand your argument) cannot be replicated by algorithms?

sofaspud 19th December 2012 12:18 AM

Self-consciousness.

Pano 19th December 2012 01:17 AM

Is that the same as self-awareness?

If a computer or robot could be taught to recognize itself in a mirror, would it be self-aware? That should not be too hard a task for good A.I.

If not, why not?

Bigun 19th December 2012 01:34 AM

Quote:

Originally Posted by benb (https://www.diyaudio.com/forums/the-lounge/173198-chinese-experiment-post3290744.html#post3290744)
What is is about the human mind that (as I understand your argument) cannot be replicated by algorithms?

It's not my argument; I'm not familiar enough with the details of what Sir Roger was saying to stand in his place here. But I do encourage those who are interested enough to read his book. I remember it as an interesting read, and it has some other nuggets you might find equally fascinating.

Anyhow, the 'argument' as I understand it goes like this:

If the brain were a computer, its powers would be limited to what can be computed. Turing was a clever guy who looked in detail at the basic operations of computers in general. He examined the fundamental abilities of computers; he did not make assumptions about the power, speed or other performance attributes of computers, or about how they are made and powered, but rather considered the fundamental capabilities of a generic computer. Therefore, his results apply to any classical computer, whether in the past, the present or the distant future.

Turing showed that every possible computation can be precisely specified by a recipe consisting of a sequence of simple steps. This is analogous to the man in the Chinese room who is following instructions in a book. Such a sequence of steps is called an algorithm; all computer programmes are algorithms. Anything that can be accomplished with an algorithm can, in principle, eventually be accomplished by a computer. Anything that cannot be accomplished by an algorithm can never, in principle, be accomplished by a computer, no matter how powerful it is.
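
To make "a recipe of simple steps" concrete, here is a toy sketch of a rule-following machine - the rule table and symbols are just something I made up for illustration, not anything from Turing or Penrose:

Code:

# Minimal Turing-machine-style interpreter: the "man in the room" is this loop.
# It consults a rule table and shuffles symbols with no understanding of them.
# The rule table below (a unary increment machine) is an illustrative assumption.

RULES = {
    # (state, symbol) -> (new_symbol, head_move, new_state)
    ("scan", "1"): ("1", +1, "scan"),   # walk right over the existing 1s
    ("scan", " "): ("1", 0, "halt"),    # first blank: write a 1 and stop
}

def run(tape, state="scan", head=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else " "
        if head >= len(tape):
            tape.append(" ")            # extend the tape with blanks as needed
        new_symbol, move, state = RULES[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape)

print(run("111"))   # prints "1111" - purely mechanical symbol manipulation

The point is only that everything the machine does is looked up in the table; nowhere does "understanding" enter into it.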

There was another clever chap called Gödel who, like most famous mathematicians, had his own theorem; it is called Gödel's Incompleteness Theorem. I don't understand his theorem in much detail. It showed that no algorithm for proving mathematical truths can prove them all. This means that there are mathematical truths that are known to us but cannot be proven by a computer. We have arrived at these mathematical truths through human insight.
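
Roughly stated, and glossing over all the careful conditions, the theorem says something like this (my informal LaTeX shorthand, not Gödel's wording):

Code:

% Goedel's first incompleteness theorem, stated very informally:
% for any consistent, effectively axiomatised theory T that contains
% basic arithmetic, there is a sentence G_T (the "Goedel sentence") with
%   T \nvdash G_T  and  T \nvdash \neg G_T,
% yet, if T is consistent, G_T is true of the natural numbers.
\exists\, G_T :\quad T \nvdash G_T \;\wedge\; T \nvdash \neg G_T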

Penrose reasons that since there are mathematical truths we have discovered which we can prove are not discoverable by an algorithm, there are mathematical truths we can discover that computers cannot, regardless of how powerful they are. In other words, there are things we can do which computers cannot, and this is 'proven'. And so a computer cannot completely reproduce the capabilities of the human mind.

Penrose further offered a way out of this… that quantum physical processes may be able to go beyond what can be accomplished by a classical computer, beyond algorithmic computing. His book explores this a little further, but obviously without any proof, since science does not yet know in detail how the brain works.



Quote:

Originally Posted by Pano (https://www.diyaudio.com/forums/the-lounge/173198-chinese-experiment-post3290802.html#post3290802)
If a computer or robot could be taught to recognize itself in a mirror, would it be self-aware? That should not be too hard a task for good A.I.

I suspect it's quite easy to program a robot to recognize itself - as in, be able to set a flag in memory to indicate that an image it captures matches a stored reference.
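
Something along these lines would do it - the camera stand-in, the stored template and the threshold here are all invented for the sake of the example:

Code:

import numpy as np

# Stored reference image of the robot's own body (assumed to exist somewhere);
# here it is just random data so the sketch runs end to end.
SELF_TEMPLATE = np.random.rand(64, 64)
MATCH_THRESHOLD = 0.95          # arbitrary illustrative threshold

def capture_frame():
    """Stand-in for the robot's camera; pretend it is looking in a mirror."""
    return SELF_TEMPLATE + 0.01 * np.random.rand(64, 64)

def looks_like_me(frame, template):
    """Crude normalised correlation between the captured frame and the template."""
    f = (frame - frame.mean()) / (frame.std() + 1e-9)
    t = (template - template.mean()) / (template.std() + 1e-9)
    return float((f * t).mean())

# The whole of this robot's "self-recognition" is one boolean flag:
i_can_see_myself = looks_like_me(capture_frame(), SELF_TEMPLATE) > MATCH_THRESHOLD
print(i_can_see_myself)

Whether setting that flag amounts to anything like self-awareness is, of course, the whole question.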

wahab 19th December 2012 01:42 AM

Quote:

Originally Posted by Pano (https://www.diyaudio.com/forums/the-lounge/173198-chinese-experiment-post3290706.html#post3290706)
Is consciousness just a level of complexity?

Undoubtedly..

The principle of life is inherently an electromagnetically induced phenomenon;
hence only its complexity differentiates it from more common electromagnetic
processes, which statistically have orders of magnitude higher occurrence as the single
difference, the occurrence being inversely proportional (not linearly, of course) to complexity.

M Gregg 19th December 2012 08:14 AM

Quote:

Originally Posted by Bigun
I suspect it's quite easy to program a robot to recognize itself - as in, be able to set a flag in memory to indicate that an image it captures matches a stored reference.
Yes, compare an image.. however, when we look in a mirror while wearing a mask, we know it's us.. so what are we comparing? (Is this me? Who is "me"?) This is not my face but a reflection of me in a mask; yet is it my consciousness looking at me?

So is this a reflection of my mind or a reflection of my image? How do I know the reflection is not really "me".. i.e. that it is a reflection, not me looking at me?
Or a person who looks like me looking at me?
I would know that an exact copy is not me.. it is something else.. A mannequin is not a human.. etc.

When you look in a mirror, is the image what you expect to see? Yes, that sounds nuts, but we are in constant change.. a guy once said to me that when he looks in the mirror he thinks "who is that old man looking at me?" - in his mind he has a personal image of himself, and it's not old..

Regards
M. Gregg

