
11 July 2009

Comments

kad

George,

The Turing test is indeed a reasonable operational definition of intelligence. And Dr. Ayesh gets it completely backwards when he suggests that it is a measure of consciousness. However, I think his otherwise unfortunate attempt to de-emphasize the test is on the right track.

The importance of the Turing test has been exaggerated beyond all measure by the Singularity believers, who typically conflate it with machine consciousness and the transcendence of biology by machine. Case in point: Kurzweil (The Singularity Is Near, 297), who leaps from the passing of the Turing test to "[intelligence] non-biological - capable of comprehending and leveraging its own powers."

A passer of the Turing test could presumably "leverage" its own powers. And I take it as a given that silicon-based computers will pass the Turing test and surpass human inference in many areas, but that has nothing to do with "comprehending" its own powers. Comprehension in this case is conscious comprehension, and indeed machine consciousness, rather than mere intelligence, is required to say that machines have generally surpassed biology (or else they have not surpassed it in this area). Certainly the meaningful immortality that Kurzweil fervently prays will occur through the downloading of human minds into hardware requires consciousness in hardware (is there any point in your mind living on if it does not consciously experience doing so?).

So the interesting question is not the Turing test, but rather machine consciousness, about which the test says little to nothing. Indeed, machine consciousness may well be impossible to test - I do not even know for sure that any human other than me is conscious - but that does not make it less interesting. And Occam's razor tells us not to ascribe consciousness to the passer of a test that merely requires intelligence. The vague hope that consciousness will arrive as a systems effect of enough intelligence is about as rigorous as your randomly chosen religion.

In fact it seems that consciousness and intelligence may be close to orthogonal. Certainly the achievements of current AI suggest that the latter can occur without the former - common sense (though not testable, due to the lack of any test) says that AI has made no, as in not one iota of, progress towards consciousness despite some nice moves towards intelligence. Similarly, deep meditative states can be profoundly alive with consciousness yet absolutely devoid of thought or any externally apparent environmental processing.

IMHO, consciousness in solid state devices is a pipe dream not dissimilar to those of the cognitive scientists of a century ago who believed that the mind could be modeled with a hydraulic metaphor, as that was their reigning technology.

This is not to say that humans will not create conscious entities. But I would bet dimes to dollars that this will happen only after our information processing paradigm has shifted towards biological systems at the least, and probably towards quantum effects within such. The singularity of the Kurzweil dream - of the total superiority of created being to human being, of potential immortality - is not even close, and when it comes it will be more like us creating enhanced versions of ourselves than us replacing ourselves with solid state machines. Singular beings will look on us as we look on other mammals, not as a totally different form of life.

George Rebane

Agreed. Great comment, Keith.

On the orthogonality of intelligence and consciousness, specifically as it relates to the TT, do you believe that an intelligent human interrogator can devise a line of questioning that would reveal the existence or absence of the respondent’s consciousness? (Until a better one comes along, I am still using Julian Jaynes’ definition of consciousness. http://en.wikipedia.org/wiki/Julian_Jaynes)

In short, I’m not struggling with the notion that a critter can be conscious and yet not be intelligent in any knowledge-domain-specific way (there are already several domains in which machines are more intelligent than any human). But I am not sure that, within the TT-specified process, the successful machine revealing its intelligence in the guise of appearing human would not also convince the interrogator of its attendant consciousness (and therefore, presumably, of its being conscious).

Since we can never directly experience another’s consciousness, we are reduced to assessing the reliability of our test(s) of another’s consciousness – i.e. our belief. This proposition seems to hold until we can find an operational foundation for consciousness such as, perhaps, the one suggested by Penrose in his description of the quantum events which take place in the brain’s inter-neuronal synapses. That would be a different basis than assuming that the rise of consciousness needs only a sufficiently complex substrate.

Thoughts?

kad

George, I tried to post this yesterday, but the system refused to allow it.

The machine passing the TT would by definition force the interrogator to admit that s/he does not have sufficient data to determine which of the conversation partners is conscious and which is not. The human conclusion, however, bears no necessary relationship to reality.

My conjecture is that with enough intelligence and data it should be quite possible to simulate all externally verifiable aspects of consciousness without actually possessing it.

This of course means that the status of consciousness itself is not an empirically testable fact, so yes, we are reduced to assessing the imperfect reliability of our tests. Perhaps at some point something along the lines of what Penrose suggests, or something related to observer-collapsed wave functions, or some technology (or self-training regime) that actually lets us experience the consciousness of another will save the day.

Some will conclude that we should therefore accept the presence of consciousness based on the best imperfect test we can devise, or that consciousness is but an epiphenomenon. I am not in that camp, because every test I can devise is so far from being conclusive that it cannot begin to overcome the priors. Nor do I think that, in the end, Kurzweil will be in that camp after he has downloaded his thought patterns into a solid state machine to gain immortality, only to find (though he would not be aware of it) that he has effectively negated himself because his thought patterns do not know they exist (even though perhaps no external observer would know this).
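To put numbers on the point about priors, here is a minimal Bayesian sketch (all figures are illustrative assumptions, not measurements): if intelligence alone can simulate the external signs of consciousness, then a behavioral test passes non-conscious machines almost as often as conscious ones, and its verdict barely moves a strong prior.

```python
# Minimal sketch: Bayesian update from an imperfect consciousness test.
# All probabilities below are illustrative assumptions.

def posterior(prior, p_pass_if_conscious, p_pass_if_not):
    """P(conscious | test passed) via Bayes' rule."""
    p_pass = (p_pass_if_conscious * prior
              + p_pass_if_not * (1.0 - prior))
    return p_pass_if_conscious * prior / p_pass

prior = 0.01                # strong prior against machine consciousness
p_pass_if_conscious = 0.90  # the test usually detects the real thing
p_pass_if_not = 0.80        # but intelligence alone can fake the signs

print(round(posterior(prior, p_pass_if_conscious, p_pass_if_not), 4))
# 0.0112 - the posterior barely budges from the 0.01 prior
```

With the false-positive rate nearly as high as the true-positive rate, the likelihood ratio is close to 1, which is exactly the sense in which such a test "cannot begin to overcome the priors".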

George Rebane

Keith, given that a means of testing for the existence of another entity’s consciousness does not (yet) exist, why would “the machine passing the TT force the interrogator to admit” anything about the machine being sentient/conscious? Since the TT was designed only as a probabilistic (i.e. unreliable) detector of sapience/intelligence – with the constraint that behind one of the ‘doors’ was a human and behind the other, a machine – the interrogator can only correctly ascribe sentience to within a probability. And this only with the presumption that at least the human is sentient. But in any event, I do agree that “the human conclusion, however, bears no necessary relationship to reality.”
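The “to within a probability” point can be made concrete with a small sketch. The 30% figure comes from Turing’s 1950 prediction that an average interrogator would have no better than a 70% chance of a correct identification after five minutes; the interrogator’s hit rate on the human is an assumption for illustration.

```python
# Minimal sketch: the TT verdict as a probabilistic detector.
# One 'door' hides a human, the other a machine, so the base rate is 0.5.

def p_human_given_says_human(hit_rate, fooled_rate, base_rate=0.5):
    """P(respondent is human | interrogator's verdict is 'human')."""
    p_says_human = (hit_rate * base_rate
                    + fooled_rate * (1.0 - base_rate))
    return hit_rate * base_rate / p_says_human

hit_rate = 0.80     # assumed: interrogator correctly labels the human
fooled_rate = 0.30  # Turing's 1950 benchmark for the machine fooling him

print(round(p_human_given_says_human(hit_rate, fooled_rate), 3))
# 0.727 - a 'human' verdict supports only ~73% confidence
```

Even a ‘human’ verdict, then, yields only a probabilistic claim of humanness; the further step to sentience rides entirely on the presumption that the human behind the door is sentient.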

Agree with the remainder of your comments.

Apropos of your last assertion re Kurzweil, I have for years held the belief that a perfect-fidelity xyzt-space duplicate of a person is not also a duplicate in the sentience sense. Subjectively, you/I will rebel against such a notion when it is considered with the help of my ‘357’ thought experiment, wherein you/I as person A, holding a .357 magnum, are left alone in a room with the above-described duplicate B. For a number of obvious reasons, only A or B should leave the room alive (assume a reliable colleague will undetectably dispose of the remains). After some, perhaps extended, conversation, A is satisfied that B is a perfect duplicate of himself and would not be detected as a copy by A’s closest relatives and friends should B be the survivor of this experiment.

I maintain that A would never consider letting B survive their encounter, and will make it so. Thereby A will always reject the belief that ‘he’ would survive in the embodiment of B – i.e. A will hold that his essential sentience has not been transferred in an experiential sense, even if B were also sentient. This was presumably confirmed during their private meeting, since A could experience the world only through his normal ‘self’.

Now if A were to be morphed into the duplicate B, or into another embodiment, piece by piece, so that there would occur no predictable discontinuity of sentience, then this problem might be avoided. However, one would never know whether one of the small morphing steps would cause A to ‘blink out’. If the sapience transfer continued, would we be able to detect that the morphing being no longer possessed the sentience of A?

(BTW, this discussion thread could continue under the 20jul09 ‘Singularity in Ten Years?’ post, which deals specifically with questions of machine sentience.)

