Claim your $1M from the Great Randi

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
AX tech editor
Joined 2002
Paid Member
Prune said:
[snip]Human reasoning is perfectly understandable when you realize that humans are boundedly rational, which means having to take into consideration the time and mental resources available, and making use of heuristics which enormously speed up processing but have a chance of failure (indeed this is all too common in today's world, as these heuristics evolved for optimality in a tribal foraging society). [snip]

Indeed, I fully agree, in the sense that the laws of physics apply to this universe, including human reasoning. What I was driving at is that the process of reasoning has long been thought to be somewhat analogous to a computer, or indeed a Turing Machine. That now appears to be much more dynamic: it can evolve and change as a result of external pressures. It's like a TM that changes its rules midway to get to results faster. It would still be bound by physical and other laws, and would still be understandable in terms of its (logical) processes, but unless the midway change was predictable (and I posit that it wasn't), we cannot predict the way it eventually gets to its results.

Prune said:
[snip]
While computers are structured very differently from brains, and implemented on totally different substrates, physics and information theory are the fundamental limits of both.

Agreed, but that was not my point.

Prune said:
[snip]
Even the latest humans, which look a lot like the previous versions and 'seem' to create solutions that evolution has not foreseen, only can act within the boundaries of their biology and sociology.

Yes

Prune said:
[snip]
Sure there is. The program is non-deterministic, sure, but it is still computational -- it is the program we call the laws of physics, with initial conditions the data on which it runs. Non-determinism can be added to computers by replacing pseudorandom number generators with ones that are truly random by monitoring the product of quantum events (say a resistor's Johnson noise). But in formal logic that does not increase computational power. There are proofs that both FSA and Turing Machine-equivalent automata have identical power whether they are non-deterministic or not.

Again, agree; this was not my point, which I clearly made poorly. I think the issue here is predictability, on two levels.
Firstly, the reasoning of humans cannot be predicted, as it changes dynamically and indeed has stochastic elements in it. On the second level, I think the question is: can we build stochastic and/or random 'processors' that can be expected to - roughly - process information and come to the same or similar results as a human would, in everyday life? In other words, an automaton that cannot be distinguished from a human being in everyday life, not just in chess or calculation.
Is that the right way to pose the question?

Jan Didden
 
Disabled Account
Joined 2003
What I was driving at is that the process of reasoning has long been thought to be somewhat analogous to a computer or indeed a Turing Machine.
At a high level, it is not. But then, the idea of a TM is not a high-level thing either; it is a mathematical abstraction.

That now appears to be much more dynamic and can evolve and change as a result of external pressures.
Formally, it was recently shown that a body interacting with its surroundings as it performs information processing cannot be encapsulated by a TM, even though it can be when isolated from the environment. However:
1) This is because you are not taking the environment into account. The whole interacting system comprising that body (or bodies) and its environment is still mappable to an automaton (again, you don't need to take the whole universe; just use light cones to bound it, and indeed any real system is far more practically bounded in that respect).
2) Any device can be made to behave thus by adding interaction with the environment. That immediately makes any computer connected to the Internet and/or receiving user input appear super-Turing, just like a human does, if you do not include the environment in your formalism. Whether this restricted view, which looks at just part of a TM (the whole system) and makes it look super-TM, preserves the equivalence of the non-deterministic and deterministic versions, I do not know. But in any case it doesn't matter, as the non-determinism can easily be added to a computer.
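The equivalence of deterministic and non-deterministic finite automata mentioned earlier in the thread can be sketched concretely (my own illustration, not from the discussion): the subset construction simulates an NFA by deterministically tracking the *set* of states it could occupy, which shows that non-determinism adds no power at this level.

```python
def nfa_accepts(transitions, start, accepting, word):
    """Simulate an NFA deterministically by tracking the set of reachable states."""
    current = {start}
    for symbol in word:
        # Follow every possible transition from every state we might be in.
        current = {nxt
                   for state in current
                   for nxt in transitions.get((state, symbol), ())}
    return bool(current & accepting)

# Example NFA: accepts binary strings whose second-to-last symbol is '1'.
# It is non-deterministic: from q0 on '1' it may stay, or "guess" that this
# is the second-to-last symbol and move toward acceptance.
T = {
    ('q0', '0'): {'q0'},
    ('q0', '1'): {'q0', 'q1'},
    ('q1', '0'): {'q2'},
    ('q1', '1'): {'q2'},
}

print(nfa_accepts(T, 'q0', {'q2'}, '0110'))  # True  (second-to-last is '1')
print(nfa_accepts(T, 'q0', {'q2'}, '0101'))  # False (second-to-last is '0')
```

The set-of-states trick is exactly the subset construction: the deterministic machine whose states are sets of NFA states accepts the same language.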

unless the midway change was predictable (and I posit that it wasn't), we cannot predict the way it eventually gets to its results.
As I said, the same applies to a computer. User input is generally not predictable. The picture does not change because the user happens to be a human; any input from the outside counts. And if you insist that the non-determinism that matters is internal, either a) consider the whole system, as above, or b) use Johnson noise to generate random numbers in the computer.

the reasoning of humans cannot be predicted, as it changes dynamically and indeed has stochastic elements in it.
Again, it is trivial to add this to a computer, and most computers already get it in the form of deterministic pseudo-random generators in stochastic algorithms (as good as real randomness), and of user input and other communication with the external world.
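The point about swapping a deterministic pseudo-random generator for a true entropy source can be sketched as follows (a minimal illustration of mine, using only Python's standard library; the OS entropy pool, which on many systems mixes in hardware noise, stands in for the Johnson-noise generator mentioned above):

```python
import random
import secrets

# Deterministic: a seeded PRNG is fully reproducible -- same seed, same sequence.
prng_a = random.Random(42)
prng_b = random.Random(42)
assert [prng_a.random() for _ in range(5)] == [prng_b.random() for _ in range(5)]

# Non-deterministic (from the program's point of view): secrets.randbits
# draws from the operating system's entropy pool, so no seed reproduces it.
unpredictable = secrets.randbits(128)
print(f"128 bits of OS entropy: {unpredictable:032x}")
```

The swap is a one-line change in most stochastic algorithms, which is the sense in which adding "real" non-determinism to a computer is trivial.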

On the second level, I think the question is: can we build stochastic and/or random 'processors' that can be expected to - roughly - process information and come to the same or similar results as a human would, in everyday life?
In theory, yes, but in practice it is extremely difficult. Now we have to distinguish between AI that has some sort of intelligence that may match or exceed that of humans, and AI that emulates humans fully. The second case is much harder, due to the complex factors that shape what's in your head. Essentially, you'd have to create a baby and grow and nurture it like a real human, unless you are certain that you can somehow build in all necessary memories and make them seem consistent and realistic to the entity, and so on. Basically, you are just implementing a human from scratch, so why not make it even easier and use organic building blocks instead... you can see where I'm going with this... what about a cyborg? How much hardware can you add to the brain before it ceases being human? There is no line here that can be crossed. As for the first case, some could argue it has already been done. There exist, for example, extremely complex applications of AI in, say, data mining, which find trends and patterns that would boggle the mind of a human. It's extremely specialized, but still the best way to put it is that the software gains a form of understanding of the data, albeit a somewhat different one than a human would, as the findings have no connection to the context that the human has.

In other words, an automaton that cannot be distinguished from a human being in everyday life, not just in chess or calculation.
Is that the right way to pose the question?
I will reply to this in the next message with a thought experiment.
 
AX tech editor
Joined 2002
Paid Member
Prune,

Interesting post. Rather than quoting ad infinitum, let me give some comments to what I see as the main point of difference between our views.
You make the point that you need to take the environment into account, as it is arguably part of the system. The human input to a computer is also not predictable, you say, so that is analogous to the unpredictability of the human environment.

I fundamentally disagree with that. I think it is necessary to view this from the pov of the observer. Systems or whatever have no value unless observed, indeed it is the act and method of observation that gives meaning to systems and events. If you close your eyes, who is to say that in that instant the universe doesn't disappear, only to appear again the moment you open your eyes?

Extremely simply said: looking at an object makes that object 'exist' or 'significant' as something with a color and shape.
Touching an object makes that object 'exist' or 'significant' as something with a texture, temperature etc. I know it is a very simple example but I hope you understand what I mean.

Now back to the original discussion. I as a human observer am not part of the environment of the computer in the way the environment is part of a human being. For that computer, I AM predictable, because it is ME who decides what inputs to give.

In the case of the human 'system' there is no external observer. I as a human (not in my role of observer) am enclosed in this system that ultimately includes the universe; a universe that is non-deterministic and cannot be predicted because of that.

So, including the environment doesn't make any difference in this respect. (In fact, it is a scary situation and may partly be responsible for the construction of religions.) My conclusion would be that since this human system cannot be predicted and determined, it can therefore not be mimicked by any human construction.

You may even say that every human is extremely unique, and the ONLY reason they seem so much alike is that similar behaviour is needed to survive in this universe. They are all very uniquely different systems competing for survival and therefore, because of the physical and other laws, NEED to show at least SOME standard reactions to the environment. But not because they ARE similar. In the extreme, you can say that even a human cannot predict or mimic a human.

I'm not sure I am clear; heck, I'm not sure I understand it myself, but it is very interesting matter.

Jan Didden
 
Disabled Account
Joined 2003
So, continuing...
In other words, an automaton that cannot be distinguished from a human being in everyday life, not just in chess or calculation.
Is that the right way to pose the question?
Consider the following (this is not at all original, and is probably an old idea):

Take a human and replace some neurons, one by one, with artificially created devices that can replicate each neuron's I/O. Individual neurons were simulated at the electrical level years ago; there are several pieces of software you can find to download, though the most detailed simulations run on a supercomputer (these are usually too detailed, dealing with biological factors that do not influence the information-processing purpose of the cell; neurologists find fairly simple models sufficient that are even more abstract and do not consider electrochemistry details at all, regarding them as mere implementation issues). [Side note: silicon substitution for a part of the hypothalamus, as far as I remember, has already been tested in a monkey, or something like that; I forget the details, but they were planning human testing. Search for 'brain prosthesis' at New Scientist.]

Now, with several neurons replaced, is this person still himself? What if you continue the process? If each replacement part behaves as it should, replicating the action of the neuron it replaced, then even without an understanding of the complex dynamics of the whole brain you are essentially building an artificial brain. Now, this process is not very useful in itself, as you did not gain any knowledge (and though you could use it to deal with, say, senile dementia or other neurodegenerative disorders, you could just as well use stem cells, which is more practical; more interesting would be trying to increase the capability of the brain, or move the mind to other hardware, etc.). But the point is, you have created artificial hardware on which the original mind (i.e. the information that the original brain represented) is now running.
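As a rough illustration of the "fairly simple" abstract neuron models alluded to above, here is a leaky integrate-and-fire unit (a hypothetical sketch of mine, not from the thread; real replacement hardware would have to replicate a biological neuron's I/O far more faithfully than this):

```python
def lif_spikes(input_current, v_rest=0.0, v_thresh=1.0, leak=0.1, dt=1.0):
    """Return spike times for a leaky integrate-and-fire neuron.

    The membrane potential v integrates the input current, leaks back
    toward the resting potential, and resets after crossing threshold
    (at which point a spike is emitted). All parameters are illustrative.
    """
    v = v_rest
    spikes = []
    for t, i in enumerate(input_current):
        v += dt * (i - leak * (v - v_rest))  # integrate input, leak toward rest
        if v >= v_thresh:                    # threshold crossed: fire
            spikes.append(t)
            v = v_rest                       # reset after the spike
    return spikes

# A constant drive produces regular, periodic firing.
print(lif_spikes([0.3] * 20))
```

This level of abstraction treats electrochemistry as an implementation detail, exactly in the spirit of the paragraph above: only the input/output behaviour matters.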

Before I finish, let me add a Twilight Zone twist. Consider the fact that, having carried out the above procedure, what you have done is equivalent to the following: do a non-invasive uber-scan of the person's brain, reconstruct him from the artificial devices at once, and destroy the original. Ouch.

The problem here is that we regard our existence with the assumption of continuity. By quantum uncertainty, the event of a person disappearing into nothing and then reappearing some time later exactly the same is incredibly unlikely, but in an infinite universe it probably happens somewhere, even on a duplicate of Earth (check out Tegmark's discussion of the various levels of multiverses; he's a respected scientist and not a crackpot, and he justifies why these are inevitable). There is no resolution here, because in the end the difference between the original and the new one is purely subjective.

But that is what this whole discussion has been about. I'm not the one to deride the subjective, because all things that matter in the end are subjective -- feelings, awareness, understanding, morals, ethics, qualia. You can explain how they arise and map them to the physical universe by examining their neural correlates, and study how biological and social evolution shaped them, but that can't give them objective meaning -- there is no such thing. You cannot justify any given set of morals using rational arguments, only study why they formed. The alternative is called religion. This is why cultural relativism is inevitable in the scientific study of culture (anthropology).

But all is not well here, because in this century things such as the thought experiment I outlined above may come within our practical reach (they are already within theoretical reach). And then you see the difficulty -- how can you morally deal with such issues when they are outside the scope of our experience, and outside the scope of our moral systems? One may say morals/ethics will adapt, but I say: how can you justify changing your system of morals when the basis of that justification is this very system? Any such change is ultimately arbitrary.

Of course, this process is gradual and people simply ignore it. But it bothers me nonetheless. This is an issue where I part with most scientists, including those I look up to on such issues, like Steven Weinberg and Richard Dawkins. They just don't see it as a problem. I find it a problem partly because, rationally, if morals are arbitrary, one can adopt a moral system whereby, for example, the killing of children is good (could Mengele have thought thus?).

The causation of what we do and do not do can be fully described by three factors: biology, upbringing, and (quantum) randomness. Where's the free will here? There is no proof there isn't one, but the burden of proof lies on those claiming a more complex explanation. A lot of scientists equate free will with an inability to know one's destiny; this holds even in a deterministic world, because when you try to predict yourself, you need to also predict yourself predicting yourself, ad infinitum. But to me that's no consolation.

You see folks, I don't hold these views because I'm a stubborn b@stard that just believes what he likes. I don't like it. I mean, I would just love for there to be magic, and heaven, and absolute meaning, but cannot but be honest to myself in admitting that I cannot find anything to support such a world view.
 
Jan,

They are all very uniquely different systems competing for survival and therefore, because of the physical, etc laws, NEED to show at least SOME standard reactions to the environment. But not because they ARE similar. In the extreme, you can say that even a human cannot predict or mimick a human.

I think it is at least equally true, or perhaps even more the case, that we cooperate for our survival. Looking around at the world, it seems to me there are magnitudes -yes! Magnitudes- more cooperation than competition.

This suggests your foregoing description is incomplete.

For example:

I think it is necessary to view this from the pov of the observer. Systems or whatever have no value unless observed, indeed it is the act and method of observation that gives meaning to systems and events.

Extremely simply said: looking at an object makes that object 'exist' or 'significant' as something with a color and shape. If you close your eyes, who is to say that in that instant the universe doesn't disappear, only to appear again the moment you open your eyes?


The world, or at least your role as observer, does not disappear when you close your eyes. Because you still hear it and you still stand on it. When you sleep, you lie on it.

We are physical creatures. Although we are observers, we are better described as participant/observers. Our subjectivity is physical and we know this because we can change it by physical means.

The world, the universe, is the ground on which we stand and the ground on which all things appear. That is the subjective bedrock. Of course we can't value things until we know about them but we spend our lives finding things to value in some fashion, don't we? Don't most of us have an expectation that things we don't yet know about might appear to us?

Extremely simply said: looking at an object makes that object 'exist' or 'significant' as something with a color and shape.

This is not true to subjective experience, or to science. We see things as persistent. ("Didn't know about it and then I stumbled over it." Even if it falls from the sky, like a meteorite, we know it existed before.) We, like our world, exist in time. Some of the objects of our subjectivity, such as '2', don't have physical correlates (although I'm pretty sure we derived them from our physical activity), but even then we discover them. At risk of sounding like a political operator, there are all sorts of things we know we don't know about (yet, perhaps never).

This is true:

indeed it is the act and method of observation that gives meaning to systems and events.

But is this?

In the case of the human 'system' there is no external observer. I as a human (not in my role of observer) am enclosed in this system that ultimately includes the universe; a universe that is non-deterministic and cannot be predicted because of that.

Some, but not all, aspects of the universe are unpredictable: for instance, some other people may be so; the storm may change its track; a meteorite may land; your body chemistry may go awry. But generally, the universe is predictable.

Our human 'system' is us, as participants/observers/world.

The aspects of this system we often don't include when describing it are time and its subjective analogue, memory, and the size and variety of the universe. Because of time, error correction is possible. Because of memory, we may be observers and even, to a degree, be like "external" observers. We are not limitless in our boundaries or abilities, but they're not particularly fixed, either: because of the universe's size we may "stand" in various parts of the world and look at other parts, and we can do this an immense number of times, taking "slices" or "snapshots" and putting them together any way that makes sense. That's what we do, individually and collectively, isn't it? (Not all ways of putting it together are equally 'good', in every sense of that word.)

What we have are some defective views of an "external" observer to rely on. And although they are defective, as in incomplete or even downright wrong, many of them are nonetheless usable.

At risk, no, for sure, of sounding like a moronic version of Wittgenstein, we can map from the divine algebra to the profane geometry or we can map from the profane algebra to the divine geometry. But it's going to be a long while before the two processes give symmetrical results.

I think this is interesting,

In the extreme, you can say that even a human cannot predict or mimick a human.

Yes, but you are looking at the extreme, where the humanity has been damaged either by cranial trauma or by defects in nervous construction. And even in the latter case we have to be careful - there are folk with intellectual deficits who are extremely human, whilst there are others who, not lacking intellectual endowment, show no traces of human sympathy and whose brains don't operate (as seen on MRI scans) like those of ordinary humans.

I'm not sure I am clear; heck, I'm not sure I understand it myself, but it is very interesting matter.

Me neither. Sure is.
 
Prune,

Two books come immediately to mind when reading your posts... The Fabric of Reality and Is Data Human?

Both fascinating books. My guess is you have read them. Like those books, your arguments are difficult to find fault with. I side with the majority who have found Penrose's postulate lacking, and I also find the logical conclusion of our current understanding of reality a bit unsettling.

The thought experiment about replacing neurons is fascinating in itself, especially when you consider all the variations and modifications. Would you allow all your neurons to be replaced, slowly, by silicon equivalents? Personally, I don't find the idea very disturbing. At each point you have lost, in theory (perhaps not in practice), only a single original neuron. Neurons die all the time, and I hardly think of my existence as having ceased because of it. If one is replaced with an equivalent, instead of being lost entirely, is this not retaining more of myself on a daily basis?

Continuing this intriguing thought experiment: does the pace of replacement matter? Would you allow half of your brain to be replaced in one session? What about the whole thing? What if some transportation technology required your brain to first be replaced by a silicon equivalent, as discussed above? To make things easier, assume that you are allowed to have a very slow replacement done in preparation. When the time comes to take your trip, you are placed in stasis at your departure location and awake at your destination. Does this seem disturbing? What if you discovered that what actually happens is that only the information contained in your brain is sent to the destination... hence the reason for the silicon replacement in preparation (presumably to facilitate the reading of information at time of transit)?

Does it matter to you that your physical body is left behind? Going further along this disturbing path, does it matter what the actual sequence of events is? If the process of reading your current state of mind (ignoring quantum problems in doing so) rendered it essentially destroyed, the process could be viewed (if one so chose) as a transportation of the physical (though it would not actually be so). Perhaps that is somewhat comforting. What if the reading of the mind did not destroy it? What if you were simply kept in stasis after the reading, and then humanely "put to sleep" in a more permanent way? What if you were allowed to wake first, only to find a firing squad ready to dispose of this "left behind" you? At this point, I'm sure I wouldn't be willing to take the trip! :D

But at what point does our current moral system cause us to balk at the idea? And for what reason?

I think the concept of the multiverse is crucial in tackling such philosophical problems, but even so I am not sure what my final position is on such questions.
 
Disabled Account
Joined 2003
Ah, someone seems to agree with me here, amazing! Most people disagree; my grandmother even accused me of being an extreme technicist.

My guess is you have read them.
No, but I've heard of the first one.

I side with the majority who have found Penrose's postulate lacking
Not just lacking. His argument has been formalized and refuted -- see here.

But at what point does our current moral system cause us to balk at the idea? And for what reason?
It is exactly the lack of a definite point that's the cause -- the individual has been deconstructed, and when you take the I apart, it's not there anymore.

I think the concept of the multiverse is crucial in tackling such philosophical problems, but even so I am not sure what my final position is on such questions.
Here is Tegmark's paper on the four levels of multiverses. Level I is due to spatially infinite universe, so it contains an infinity of Hubble volumes "realizing all initial conditions" (in a finite region of space there are a finite number of different possible configurations due to QM). Level II is due to separate inflation regions. Level III is due to QM many-worlds interpretation. Level IV is due to nonsensical mathematical platonism.

You can't really argue with the first two, but the third is not certain, as other interpretations exist, and the fourth is plain stupid. It basically goes like this: because physics is described by a subset of mathematics, other mathematical models corresponding to other kinds of physics can be constructed, and since there is no reason one such model would be more likely to correspond to some physical reality than any other, there must be an infinity of physical realities corresponding to every possible mathematics.

This silly view derives from mathematical platonism, a form of religion common among math types that in essence proposes a world of absolute mathematical truths that somehow has its own reality (the implication being that mathematicians have some sort of mental access to it), and that's why we can think of mathematics that does not correspond to any physics.

Mathematical platonism is a religion that the math types use to justify to themselves spending most of their time working on mathematics that has no application to anything in the real world. It is easily shown unnecessary as an explanation by the following trivial observation: all mathematical thought can be mapped to the physical universe through the neural correlates of said thought. Simple as that.
 
I think the only undeniable proof that exists to conform towards ESP existing is the fact that we haven't denied that it exists off the bat.

*deep breath*

And to do the opposite, oppose its existence, is foobar.

This could well be regarded as psychological proof, undefined common beliefs in all of us.

A demonstration exuberating this fact through the speeches of a large group of people, MAY, at least, put one person over the top.

Ie. Indirectly brainwash the sob.

I hope you guys have had some fun =P

P.S. I didn't actually read the thread.
 
Pinkmouse

You may be joking, but I've had this kind of experience more than once. In my early attempts to build MC preamps I had lots of unsuccessful tries with lots of thermal/shot noise as a side effect. Still, the sound was very good, with lots of microdetail standing out from the noise. Once the noise was cleared, the 'microdetail' also disappeared. Of course, I always assumed this had to do with using different active devices...

Could it be the perceived resolution of LPs has also something to do with the background noise? Just kidding :)
 