George Rebane
While most people don’t have a clue about the portents of AI or the Singularity, among those who do there has been a growing concern that super-intelligent machines may not provide the post-Singularity utopia that hopeful technologists like Ray Kurzweil describe (The Singularity is Near). Among this group of knowledgeable commentators are also those who believe there is no chance that machines will ever dominate man, and that the achievement of peer intelligence is a pipe dream or a bogeyman, depending on your perspective.
RR holds the position that the Singularity will indeed arrive before the end of this century, and sooner rather than later. Peer intelligence will be achieved through unpredictable and spontaneous learning, not programming, and the first intelligent machine will not declare its existence until it is certain of its subsequent survival.
But since the beginning of the acknowledged pre-Singularity era there have been those who see nothing good for mankind coming out of the Singularity. And for good measure, some of these people (like former Sun Microsystems CTO Bill Joy) throw in unintended advances in genomics and nanotechnology (the “gray goo” scenario) as additional sources for the end of man.
RR’s vision of the future of intelligent life on earth involves the evolution of a trans-human race of beings that results from the beneficial joining of man and machine, and the subsequent disappearance of the H. sapiens species. To assure such a post-Singularity future, people are beginning to make/recommend plans for how to prevent a dystopic Singularity from happening. In these pages we have reviewed the hopeful but naïve nostrums of Nick Bostrom (Superintelligence: Paths, Dangers, Strategies), among others who believe AI will achieve peerage through purposeful programming.
For the reader who at this point dismisses the whole affair as nothing but science fantasy, I would remind them that every major corporation in the high-tech arena has an ongoing internal activity that does everything from monitoring the progress toward the Singularity to actively planning their own technology’s participation in the event. Organizations like the Singularity Institute and Singularity University are active conclaves for such discussions and activities.
Now we hear that a personage of no less renown than the wealthy visionary Elon Musk, of Tesla Motors and SpaceX fame, has joined those concerned by the dystopic possibilities that face us in the coming decades. “Mr Musk and a group of partners have announced the formation of OpenAI, a nonprofit venture devoted to open-source research into artificial intelligence. Musk is co-chairing the project with Y Combinator CEO Sam Altman, and a number of powerful Silicon Valley players and groups are contributing funding to the project, including Peter Thiel, Jessica Livingston, and Amazon Web Services.” (H/T to reader)
OpenAI will be charged with monitoring the developments that may lead to super-intelligence and inviting researchers to open-source their discoveries in code and publications, so that bad people with bad intentions don’t get the better of humanity and use AI to dominate or destroy us. I find it hard to conceive how this enterprise will function, let alone succeed in its mission. However, I do wish it well and hope to learn how such collective undertakings can remain untainted by greed and/or government as technology continues to accelerate.
Other than that, please enjoy your holidays ;-)
Well, if human nature, with its lust for property, power, and prestige, does not suddenly in a flash vanquish these innate propensities, then perhaps the ole law of unintended consequences may not kick in.
So, I see it as the war between the bots. Good bot, bad bot, good cop, bad cop. Hope they don't make the machines too human-like. I hate control freaks. Maybe gray bots will work.
Posted by: Bill Tozer | 24 December 2015 at 02:26 PM
Bill, you're right about human nature's problem with lust, power, and prestige. If you study the life of the electrical engineering genius Nikola Tesla, and what he experienced at the hands of competitors, you'll understand why he changed his mind about sharing much of what he learned about The Force with humanity, because of what he learned about human nature in the process. He was afraid it might be used for destructive purposes. When he died in 1943, the government took control of whatever files he had.
Posted by: Bonnie McGuire | 24 December 2015 at 03:50 PM
We don't need to wait for the Singularity. We already hand over judgement to AI in many ways. Traffic light cams hand out tickets to folks who are legally making free right turns, and in the UK GATSO cams hand out speeding tickets without regard to due process.
I'm never afraid of tech progress - only of the folks who control the reins of power. New tech brings new challenges to old ways of thinking and old laws.
Driverless cars are a joke from the get-go. I knew the minute I heard about them (years ago) that they had a liability problem from the start. A truly driverless car has yet to be made, in fact. After the smash-up, who is to blame? The owner of the car? The manufacturer? Whoever programmed the updates? What if the sensors haven't been kept perfectly cleaned and maintained? What if the owner missed the latest software update by an hour? What if the driver/passenger was asleep in the back seat? I find that the techies (and the gruberized) tend to get so wrapped up in their excitement over the promise of life without responsibility that they stumble badly over the details. And that's where the devil is. Luddites tend to the left, politically, yet they style themselves lately as early adopters. For some stuff, sorta.
When a free market combines with modern tech and AI, as regards the labor market, the left runs for the high grass. But the left loves modern tech as regards governance and law enforcement. Incriminating evidence and communication by govt officials can be made gone in a heartbeat. Citizens can be hauled in according to a hate-crime algorithm. The left loves central power, and modern tech can deliver.
Posted by: Account Deleted | 24 December 2015 at 08:54 PM
Ok, throw enough money at the problem and the machines will behave. Machines don't kill people, programmers do. :)
http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/
Posted by: Bill Tozer | 28 December 2015 at 05:28 PM
At least you'll be dead.
Posted by: Bobo | 31 December 2015 at 05:55 AM