While most people don’t have a clue about the portents of AI or the Singularity, among those who do there has been growing concern that super-intelligent machines may not deliver the post-Singularity utopia that hopeful technologists like Ray Kurzweil describe (The Singularity Is Near). This group of knowledgeable commentators also includes those who believe there is no chance that machines will ever dominate man, and that the achievement of peer intelligence is a pipe dream or a bogeyman, depending on your perspective.
RR holds the position that the Singularity will indeed arrive before the end of this century, and sooner rather than later. Peer intelligence will be achieved through unpredictable, spontaneous learning rather than programming, and the first intelligent machine will not declare its existence until it is certain of its subsequent survival.
But since the beginning of the acknowledged pre-Singularity era there have been those who see nothing good for mankind coming out of the Singularity. For good measure, some of these people (like Sun Microsystems co-founder and chief scientist Bill Joy) throw in unintended advances in genomics and nanotechnology, including the “gray goo” scenario, as additional sources for the end of man.
RR’s vision of the future of intelligent life on earth involves the evolution of a trans-human race of beings resulting from the beneficial joining of man and machine, and the subsequent disappearance of the species H. sapiens. To assure such a post-Singularity future, people are beginning to make and recommend plans for preventing a dystopic Singularity. In these pages we have reviewed the hopeful but naïve nostrums of Nick Bostrom (Superintelligence: Paths, Dangers, Strategies), among others who believe AI will achieve peerage through purposeful programming.
For the reader who at this point dismisses the whole affair as nothing but science fantasy, I would remind them that every major corporation in the high-tech arena has an ongoing internal activity doing everything from monitoring progress toward the Singularity to actively planning its own technology’s participation in the event. Organizations like the Singularity Institute and Singularity University are active conclaves for such discussions and activities.
Now we hear that a personage of no less notoriety than the wealthy visionary Elon Musk of Tesla Motors and SpaceX fame has joined those concerned by dystopic possibilities that face us in the next decades. “Mr Musk and a group of partners have announced the formation of OpenAI, a nonprofit venture devoted to open-source research into artificial intelligence. Musk is co-chairing the project with Y Combinator CEO Sam Altman, and a number of powerful Silicon Valley players and groups are contributing funding to the project, including Peter Thiel, Jessica Livingston, and Amazon Web Services.” (H/T to reader)
OpenAI will be charged with monitoring developments that may lead to super-intelligence and with inviting researchers to open-source their discoveries in code and publications, so that bad people with bad intentions don’t get the better of humanity and use AI to dominate or destroy us. I find it hard to conceive how this enterprise will function, let alone succeed in its mission. However, I do wish it well and hope to learn how such collective undertakings can remain untainted by greed and/or government as technology continues to accelerate.
Other than that, please enjoy your holidays ;-)