
30 December 2020


Barry Pruett

Is the Turing Test designed to determine whether AI is self-aware or human-like?

George Rebane

BarryP 524pm - Self-awareness was not an explicit requirement for answering Turing's 'can machines think?' However, for an AI to pass the test as described above, self-awareness, as an agent dealing with and attempting to fool other sentient agents, would not be a surprising added attribute for such a machine. The AI would then naturally have thoughts like 'With *my* knowledge of humans, how would *I* have to respond to be perceived as one of them?' In other words, it would have to reflect on its own 'image/behavior', which is necessarily a seminal (but perhaps not a sufficient) function of self-awareness.

The Estonian Fox

George, George, George,

You are such a racist. Trying to show why an Asian Indian (AI) is wrong about the Turing Test. That's just wrong. Or is that a double negative?

Why do you think the abbrev. for his 'race' is "AI"? They have the inside track to AI obviously. And he works for Amazon too. Just use the "woke code" to look at Jeff Bezos's initials - JB. When you read it backwards and use a simple -1 Enigma-machine shift, you get: Ta-Daaa - "AI". Hey, I'm not making this stuff up. The fix is in.

I think back to a visit I made to the David Taylor R&D Center at Annapolis in the mid-1980s. The engineer that I talked with had a sheet of paper hanging above his desk - "Warning- Artificial Intelligence. Contains less than 10% natural intelligence".

So anytime that I want to get a little smile, I walk to the juice aisle of my local grocer. The cheap 'juice cocktail' bottles contain 10% real juice. Even some of the 100% juice contains a lot of apple juice. I don't like apple juice. I want 100% grape or cranberry juice. But I don't have much choice.

We're getting close to the Singularity. And don't forget, a really good AI will learn how to lie convincingly. What can be more human than that?

Barry Pruett

What then is the utility of the Turing test?

Larry Wirth

Call me old-fashioned, but I think one of the desirable traits of an AI "being" is that it could (should!) be programmed NOT to lie under any circumstances.

That would square with Asimov's first law of Robotics, which is much like the medical oath to first do no harm.

Otherwise, I'd as soon do without the company...

Larry Wirth

Who needs another species of politician?

George Rebane

Larry 936am - Yes, a hopeful but hopeless wish. For openers, no human-peer AI will be programmed by humans; we're not that smart. And we've already dismissed that approach with our successful forays into deep learning. All AI workers can do from here on out is design ever more powerful learning architectures. The machines then become intelligent through minimally controlled learning schemes. For example, no one can extract, let alone document, the algos by which neural-net based AIs do their work. The best we can hope to do to delay AI peerage is to carefully fashion the utility functions (to be extremized) that drive learning in AI systems. And as you know, some very fine sci-fi novels have been written about smart machines behaving according to the unintended consequences 'embedded in' such utility functions - Asimov's laws of robotics notwithstanding.
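The point about designers specifying only the utility function, with the behavior then emerging from learning, can be shown in miniature. This toy Python sketch (my illustration, not any actual AI system) fits a line to data by gradient descent: the programmer writes only the loss to be minimized and the update rule; the fitted parameters themselves are never programmed.

```python
import random

# Hidden target relationship the machine must discover: y = 2x + 1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]

def loss(w, b):
    """The designer-specified utility (here a squared-error loss to minimize)."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Random starting parameters; nothing about the answer is programmed in.
w, b = random.random(), random.random()
lr, eps = 0.01, 1e-6

for _ in range(5000):
    # Numerical gradient of the utility w.r.t. each parameter
    # (central differences), then a small step downhill.
    gw = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
    gb = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
    w, b = w - lr * gw, b - lr * gb

# After training, w is close to 2 and b close to 1 - learned, not coded.
```

The designer's only lever is the shape of `loss`; change the utility function and the learned behavior changes with it, unintended consequences included.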

Peter Van Zant

Thanks George....a good discussion to start 2021. AI is way ahead of us in a lot of areas - programs achieving what we humans never will. The airplanes that nose-dived based on their programming were an example of the downside of AI.

Barry Pruett

My wife and I were talking about this just this morning. We were discussing whether there would be a singularity that was noticeable. When living creatures first arrived on earth, at what point did this life become self-aware? We doubt that it happened in a moment; it likely just happened gradually over time through evolution. Computers are clearly on that same path.

George Rebane

PeterVZ 1127am - a lot of the blame being put on AI today is actually the after effects of poor user interface (UI) design. When a 'smart system' is developed today, it is usually too complex to permit the discovery of all of its 'contingent states' and the unexpected contingencies that will put the smart system into one of those states, especially like the catastrophic ones that gave rise to the 737Max crashes.

BarryP 1146am - The Singularity is an event occurring in but a moment of time; it is NOT an epoch. And yes, the Singularity can occur undetected (as I have described many times in these pages) in such a way that only later do we become aware of it having happened, and that we are already living in a post-Singularity world.

Re self-awareness. Anthropologists and cognitive psychologists tell us that humans started becoming self-aware fairly recently, and by no means uniformly. In the West many point to the Dorian invasions (1200-800 BC) as the epoch when self-awareness developed in the Middle East and spread to Europe and North Africa. See Julian Jaynes for a full treatment of this transformation. Even today there are peoples who are still not self-aware, and are quite sanguine going through life as compliant agents of their god(s) with no concern about not having or exercising free will. More here -


I've always considered the Edenic story of Eve and the apple to be a metaphor for the development of self-awareness in humans. Bishop Wilberforce (I think it was) tried to set the date for that event at ca. 6000BC, which may not be as silly as it first sounds.

If you assume the filling of the Black Sea Basin by the middle sea as the source of the Noah myth, followed by the ark survivors grounding near the headwaters of the Tigris-Euphrates system and simply following it downstream to Ur, you can even find some continuity to the historical past.

In any case, I think it doubtful that the Egyptians were not self-aware at the time the pyramids were built.


Put another way, I think self-awareness in the ME arose during the Holocene Optimum, 7-8000 years ago. Probably not much later in the Far East, but much later in the Americas.

Note that the Amerinds had not discovered a use for the wheel, though there are toys known with wheels from Meso-America. On the other hand, they had no large domesticated animals, namely cattle and horses. Nothing to pull the chariots, although the wheelbarrow would have been useful...

Scott O

Self awareness is the essence of the story of Adam and Eve in the garden of Eden.
The bishop was Usher (or Ussher). The idea was that the RCC was to explain any and all facts to the people, and so Usher was tasked with determining the 'age of the earth'. 6K is mentioned nowhere in the Bible, but, by cracky, the Pope will lay down the facts. Self-awareness certainly happened over a period of time, but how long that 'era' lasted will most probably never be known.
How and when it will arise in "AI" might also never be known, as the machines could be reticent to reveal their true capacities of thought, and we humans might be too clumsy and/or unaware to notice. We're still fighting over our own governance after being gifted with the answer over two centuries ago. The machines might be chortling over this even now.
The true fun fact is that AI 'self awareness' will be far less or even devoid of diversity compared to humans.
Zeros and Ones are pretty much the same everywhere in the known universe.
Humans tend to be just a bit more variant in their wants.
I would look to the age of the earliest known pictographs or pictorial representations to give a good clue to the time of human self-awareness.
This is, unfortunately, hindered by the fact that the specimens we know about are simply the ones still preserved by happenstance. Paintings in stone caves outlast possible Mona Lisas painted on exterior stone walls with pigments that last only decades.
The Michaelangelos that carved masterpieces in wood turn out to be bested by Dumb-dumb bashing the imaginary size of his phallus into the cliff wall.
Such is life.

George Rebane

Yes, the belief that self-awareness was uniformly acquired by hominids thousands of years ago still persists. However, evidence to the contrary abounds, and is available and observable today in several primitive and mostly isolated cultures. All this supports the proposition that humans as a species acquiring self-awareness (or formally 'consciousness' in psychology-speak) is still a work-in-process - i.e. bicameral minds are still with us.


pvz 1127am

The autopilot behavior of the 737MAX wasn't an AI failure... it was a user interface issue. American pilots (meaning USA) had flown them thousands of hours at the time, with no problems.

Don Bessee

G @137 there was also the differential in airline training programs from country to country that made it worse.


George Rebane

Greg 137pm - Well, that should make it official, cf @1247pm ;-)


Google, Siri and Alexa respond to the 'Turing question':



"Howard Freidman, former CEO of Aptela (now Vonage Business)
Answered September 29, 2019
Google Duplex supposedly fooled humans, but the demos were edited. The tech is used in Google Reserve, which explicitly avoids trying to fool people because of the Duplex backlash.

It’s not really hard. I’m on the board of a company that builds conversational agents, and while fooling people isn’t a goal, you can tell in recordings that people don’t know it’s a machine.

The DA’s you reference aren’t designed to be conversational. However, the platforms each of the vendors makes available are conversation capable—they can maintain context. Dialogflow (Google) and Lex (Amazon) make it relatively easy to build conversational agents. Watson too. Vocinity uses other ML models that we’ve found slightly better for our purposes, which is custom Enterprise assistants/agents.

Over the Internet, using neural voices like Google Wavenet and Amazon’s equivalent, it’s really not hard to fool people. Over the telephone it’s harder because the composition of the synthesized voices doesn’t play well with PSTN filter/compression."
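The PSTN point above is concrete: traditional telephony band-limits speech to roughly 300-3400 Hz, discarding exactly the high-frequency detail that makes neural voices sound natural. A toy Python sketch (my illustration; real telephone codecs also compress and add noise, which this ignores) shows the effect by zeroing DFT bins outside the voice band:

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform (fine for a small demo signal)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def telephone_band(x, fs, lo=300.0, hi=3400.0):
    """Zero every frequency bin outside the narrowband telephone range."""
    N = len(x)
    X = dft(x)
    for k in range(N):
        f = min(k, N - k) * fs / N   # frequency of bin k (mirrored negatives)
        if not (lo <= f <= hi):
            X[k] = 0
    return idft(X)

fs, N = 16000, 256
# A stand-in "voice": a 1 kHz component (inside the band) plus a 6 kHz
# component (outside it, carrying the natural-sounding detail).
x = [math.sin(2 * math.pi * 1000 * n / fs) + math.sin(2 * math.pi * 6000 * n / fs)
     for n in range(N)]
y = telephone_band(x, fs)
# y retains the 1 kHz tone; the 6 kHz content is gone entirely.
```

Everything above 3400 Hz simply does not survive the channel, which is why a synthesized voice that fools people over a web connection can sound off over the phone.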


gr 154pm

American pilots know how to turn off their autopilots.


"By Robot / February 21, 2019
In 2014 a couple of bots named Eugene Goostman and Cleverbot laid claim to passing the Turing test. Since then it’s been argued that the Turing test was never intended for bots, but rather to test seemingly intelligent AI. Alexa, Siri and Amazon’s Echo all seem to fail the Turing test. But Google’s Duplex looks likes it’s passing during Google’s 1/0 Developer conference in 2018. Google’s Duplex began rolling out on their Pixel phones in late 2018. After the conference, concerns were raised about the AI not properly identifying itself when making calls since it sounds so realistic. Google built in a disclosure to ease those concerns. It seems that the technology, WaveNet, that Google uses for Duplex could be the tipping point difference. It’s so humanlike and realistic that it created an outcry for Google to build in a disclosure that it’s a Robot making the call to avoid any false representation. Clearly, DeepMind’s WaveNet technology is realistic enough to fool humans, which by all appearances would be a Turning test pass."



"Aside from the fact that it isn’t clear what it means to understand something, AI-driven computer programs are not actually built to understand anything"


George Rebane

FYI - according to multi-10k-hour pilots I know, America’s pilot training has always been the major differentiating factor. So here we were looking for something above and beyond.

George Rebane

re passing the Turing test - please see the 2jan21 update above.

D 222pm - another fortunate misunderstanding. As machines with limited but considerable cognitive processing abilities, AI-driven computers are and have always been built to demonstrate understanding in specific knowledge domains. That is their raison d'etre, and to deny that is another demonstration of being unclear on the concept of smart and ever smarter machines. Today machines can out-perform human experts in their understanding of several domains, and that means 'understanding' in the same sense that we judge a human to understand something. There is no scientific concept of humans having an 'understanding center' somewhere in their brains, a functional collection of neurons that is missing in smart computers who therefore don't understand. All we can do with both humans and machines is to test their understanding of something, then conclude whether such understanding exists, and perhaps finally adjudicate who (today) understands better. And then we realize that tomorrow is another day.
