George Rebane
‘The Turing Test is obsolete. It’s time to build a new barometer for AI.’ On the FastCompany website we read, “The head scientist for (Amazon's) Alexa thinks the old benchmark for computing is no longer relevant for today’s AI era.” And why does that head scientist, Rohit Prasad, have such a thought? Well, according to the report, he claims that an AI will give itself away by being able to instantly answer questions like ‘What’s the square root of 3434756?’ or ‘What’s the distance from Seattle to Boston?’ What Mr Prasad overlooks is the definition of the test as Alan Turing posed it. An AI will pass the Turing Test if, in a one-on-one comparison with a human conducted through a non-disclosing interface, it can fool at least half of the human testers, each of whom may put ANY series of questions to both the hidden AI and the hidden human, into concluding that the AI is the human.
The test is NOT to see which contestant can answer detailed factual or computational questions to which humans cannot provide the correct answer. Being able to answer the above questions instantly would, of course, give away the AI. But that is an AI that is still not smart enough to know that it must also fool the humans testing it, and therefore it would fail the test. An AI that knows this, and couches its answers accordingly, will have a chance to pass the Turing Test. Mr Prasad is apparently not aware of this extremely high bar that Dr Turing set for the machine.
Here again is an example of ‘science’ not speaking with a single voice. When you consider what a qualifying AI must do to pass, it should be clear that the test very much remains relevant for this or any era. Passing the Turing Test will be a confirmation that the Singularity is then behind us.
[31dec20 update] And then reader BarryP @ 1045pm asks, “What then is the utility of the Turing test?” – an excellent question.
No one knows how the Singularity will come about. There is a group of AI workers who continue to hold out the naïve belief that peer AIs will be purposely programmed, activated, and controlled much like we do our workhorse computers today. However, the field of cognitive science is not even close to supporting such a hope – e.g. we don’t know well enough what ‘emotion’ or ‘envy’ or ‘shame’ or ‘perfidy’ or ‘pride’ or … are, let alone how to program any of these into a machine. The closest we have come to making intelligent machines is through application of the approach developed by the renowned behaviorist BF Skinner – reinforcement learning.
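To make that last point concrete, here is a minimal sketch of Skinner-style reinforcement at work in code: an agent with no built-in knowledge learns which of two ‘levers’ to press purely from the rewards it receives. Everything in it (the payoff probabilities, the learning rate, the lever metaphor) is an illustrative assumption of mine, not anything taken from the cited article.

    # A toy of reinforcement learning: behavior shaped only by the rewards received.
    import random

    q = [0.0, 0.0]             # the agent's estimated value of lever 0 and lever 1
    alpha, epsilon = 0.1, 0.1  # learning rate and exploration rate (assumed values)

    def reward(action):
        # The 'environment': lever 1 pays off 80% of the time, lever 0 only 20%.
        return 1.0 if random.random() < (0.8 if action == 1 else 0.2) else 0.0

    for _ in range(5000):
        # Mostly press the lever that currently looks better; occasionally explore.
        a = random.randrange(2) if random.random() < epsilon else q.index(max(q))
        q[a] += alpha * (reward(a) - q[a])   # nudge the estimate toward what was observed

    print(q)   # ends up roughly [0.2, 0.8]: the rewarded behavior has been 'reinforced'

Nothing in that loop ‘understands’ anything; it merely repeats what was rewarded, which is both the power and the present limit of the approach.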
But we already have a good idea that, given a sufficiently rich computational and sensing environment, intelligence can arise spontaneously along any of a number of pathways. It most certainly did in us and countless other critters. As I have contemplated elsewhere in these pages, the internet, when considered in all of its connectedness to countless other known and unknown networks, is the most complex ‘organism’ on earth, one that by several assessments has already surpassed the complexity of a human brain. No one today can even draw the schematic of this dynamically evolving beast.
So, while corporations and governments are nurturing nascent neural nets and other learning architectures to become sentient, it may already be happening (has happened?) in the bowels of the internet, with or without surreptitious human cooperation. No one should be surprised if sometime in the next 50 years a sentient (and sapient) AI announces ‘I am here.’ This may be a public announcement, a secret disclosure to one or more selected humans, or even the AI introducing itself through a Turing-like test in, say, an academic setting, having coopted or replaced the human-designed system under test. In the latter case, being able to pass the Turing test would be a very subtle way for the AI to announce its advent while pretending that it is still under control in its ‘laboratory’ environment. In the meantime, such testing does give us a metric of how much progress our purposive programs to achieve peerage have made.
I must admit that the above scenario would be the scariest and most world-shaking thing humans could encounter – a true Singularity, marking an event beyond which no one can fashion a usefully likely future for mankind. There already exist a number of academic and governmental panels commissioned to propose anticipative public policies for dealing with such AIs. I doubt that any of these draft policies will be useful in a post-Singularity world, but their labors at least acknowledge that we are already aware that such possibilities may arrive in the near future.
[2jan21 update] Some comments below seek to cite proof that computers have already passed the Turing test; these misapprehensions come from people who demonstrably are not familiar with what Alan Turing proposed. He did NOT describe a domain-specific dialogue with unsuspecting humans in which an interactive computer could fool a human in a short conversation. Were that the criterion, then we could celebrate Eliza, a limited but cleverly designed chatbot, a version of which even fooled the supervisor of a development team at Bolt, Beranek, and Newman (BBN) into thinking that he was talking with the lead developer over a teletype link as he sought to demonstrate the system to visitors at the BBN facility one weekend in the mid-1970s. The truth was that he was conversing with Eliza, which had been left online for the weekend. The machine kept up a totally realistic, but very frustrating, exchange until the supervisor picked up the phone and called the developer at home, who instantly resolved the situation. This, of course, then turned out to be a wonderful demonstration of BBN’s technology that impressed everyone there. But did the computer pass the Turing test? Not by a long shot.
What laymen here and elsewhere miss about Turing’s prescription for peer ‘thinking machines’ is that the test needs to involve unlimited conversations with multiple human testers, each of whom knows 1) that they too are being tested, 2) that the two hidden conversationalists taking part are a machine and another human, and 3) that at some endpoint of their own choosing they will be required to identify which is the human and which is the machine. The conversations are not to be limited in any sense, and they will be repeated with a large sample of human testers. At the end, the fraction of correct identifications will determine whether the Turing test has been passed. Any human/machine exchange short of that will not qualify. Hopefully, this requirement will now be accessible to our readers.
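For readers who prefer the criterion stated concretely, the tally below is a minimal sketch of the scoring under my own simplifying assumptions (a fixed panel of testers, one verdict each, and a pass threshold of the testers doing no better than chance); the verdict list is made-up illustrative data, not a real experiment.

    # Toy scoring of the Turing test as described above: each tester, after an
    # unrestricted conversation with a hidden human and a hidden machine, names
    # which is which; the machine passes if no more than half identify it correctly.
    def turing_test_passed(correct_identifications):
        fraction_correct = sum(correct_identifications) / len(correct_identifications)
        return fraction_correct <= 0.5   # the testers do no better than a coin flip

    verdicts = [True, False, False, True, False, True, False, False, True, False]
    print(turing_test_passed(verdicts))   # True: only 4 of 10 testers caught the machine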
Demise of Capitalism?
George Rebane
The world’s population is stabilizing, if not starting to ‘stagnate’, and will soon start declining. This argument and its impacts on humanity are presented in two newly published books – The Human Tide: How Population Shaped the Modern World (2019) by Paul Morland, and Empty Planet: The Shock of Global Population Decline (2019) by Darrell Bricker and John Ibbitson. As a result, capitalism as we know it will disappear, explains Zachary Karabell, who reviews the books in the Sep/Oct issue of Foreign Affairs – ‘The Population Bust – Demographic Decline and the End of Capitalism as We Know It’.
The crux of that striking conclusion rests on the theory that capitalism remains viable only in an environment of eternal growth; once growth stops or reverses, then capitalism, as we know it, will collapse. “If global population stops expanding and then contracts, capitalism - a system implicitly predicated on ever-burgeoning numbers of people - will likely not be able to thrive in its current form.” But I have no idea where this ominous ‘predicate’ comes from.
Capitalism is fundamentally based on human nature organizing economies in an environment of open markets in which prices effectively communicate the supply of and demand for goods and services. Therefore, if demand slows down and then contracts, capitalists will adjust their enterprises accordingly, so long as open markets are allowed to operate. The only aspect of such a contraction in the available services and things for sale that merits concern is its impact on the value of money.
The value of any given supply of money depends on the amount of available goods and services it can command in transactions. Its value, as reflected in open-market discount (i.e. interest) rates, will track the size of the economies in which it is used as the unit of account, medium of exchange, and store of value. For example, the value of a lot of fiat dollars will decline if there is less to buy or build with them. But again, such values will adjust accordingly as central banks ply their wily ways of taking money out of, and injecting it into, economies. Granted, if human perceptions about markets change rapidly, then there will be financial shocks as the value of monies held by governments and the public also changes rapidly – after all, we must always remember that fiat money by its very nature is strongly faith-based.
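One standard way to formalize that relationship (my gloss, not anything from Karabell’s review) is the textbook equation of exchange:

    M × V = P × Q,   hence   P = (M × V) / Q

where M is the money supply, V its velocity, P the price level, and Q the real quantity of goods and services traded. With M and V held fixed, a shrinking Q pushes P up, i.e. each dollar commands less, which is exactly the sense in which the value of a lot of fiat dollars declines when there is less to buy or build with them. Central banks, in turn, adjust M to lean against such swings.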
So here we are in the pre-Singularity years as birth rates all over the world are dropping and populations are aging (older folks don’t consume as much as younger ones). Advances in healthcare and productivity promoted through capitalism are making poorer countries richer and encouraging the richer countries to maintain growth through immigration. The current world population of approximately 7.6B will probably not exceed 10B by mid-century before it starts pulling back. And given the impact of accelerating technology on systemic unemployment and income inequality, global population pullback is not something to be feared, but something to be managed through judicious wealth distribution so as to maintain civil societies that continue to provide an increasing quality of life. And once the Singularity does occur, all bets are off, as most then alive scramble to become trans-humans (q.v.). For an entertaining and stimulating treatment of such a plausible future, I recommend Childhood’s End (1953) by A.C. Clarke.