George Rebane
Technical, industrial, and political luminaries such as Elon Musk, Yoshua Bengio, and Steve Wozniak have published “Pause Giant AI Experiments: An Open Letter” as a warning to civilization about the potential harmful impact of recently released AI developments. (more here) Specifically, the signatories “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
AI research company OpenAI recently released ChatGPT, a large language model built on a very sophisticated neural network that can understand written natural language, draw on the statistical patterns learned from its enormous training dataset, and compose a well-formed response that, while not always factually accurate, is sufficiently compelling to influence its human interrogator. ChatGPT, whose latest version is based on the GPT-4 LLM, has now been embraced by a number of large and small firms that are studying and modifying it to see how they may integrate it into their daily operations.
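For firms doing that kind of integration, the usual entry point is OpenAI's API rather than the model itself. The following is a minimal sketch in Python using the openai package as it existed in early 2023; the model name, prompt, and temperature setting are illustrative choices, not a recommendation.

```python
# Minimal sketch of querying the ChatGPT API via OpenAI's Python package
# (pre-1.0 interface, circa early 2023). Assumes `pip install openai` and
# an OPENAI_API_KEY environment variable; the prompt here is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-3.5-turbo", the model behind the original ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the open letter 'Pause Giant AI Experiments'."},
    ],
    temperature=0.7,  # controls the randomness of the generated response
)

print(response.choices[0].message.content)
```

Note that nothing in this exchange involves the model "looking up" an answer; each reply is generated token by token from learned weights, which is why fluent but factually wrong output is an ever-present possibility.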
In the interim, OpenAI is developing the next advance, called GPT-5, and other companies like Google are busy bringing out their own LLM-based general artificial intelligence systems. There is a lot of money to be made and power to be brokered by people who will command important knowledge and process domains in sectors like healthcare, pharmaceuticals, genomics, law, energy, quantum computing, algorithmics, and, yes, even physics in general. And in one form or another, LLMs and their descendants will ensconce systemic unemployment – legions of the terminally disgruntled – in the workforces of all developed countries.
At this point some may ask whether the Singularity has already arrived, since we now have existential reasons to fear that the seemingly ‘intelligent critters’ that have already crawled out of their cribs possess peer intelligence with us. Moreover, these systems have demonstrated that they perform better than the best humans in an ever-growing list of endeavors that were formerly dominated by human domain experts. Today many of these LLMs can even fool humans to the extent of passing the Turing Test (q.v.).
So, as we integrate these AIs into the many systems (energy, transportation, education, law, governance, …) that make up our civilization’s grid, we are concerned about their being able to formulate and achieve performance metrics that are dystopic to humankind – in short, they may decide to treat us as vermin and use their grid-embedded powers to be rid of us. We have reviewed many such scenarios in this ‘Singularity Signposts’ category of RR over the last 16 years, and will continue to do so as the pace of AI development accelerates and systems yet to be imagined come to the fore.
In the meantime, the well-intended cautionary letter released yesterday will not slow the pace of research one whit. The genie is out of the bottle, too many agents and agencies have easy access to the technology, and the rewards are too great to just bring things to a halt for a while as we puzzle over notions like machine consciousness, morals, generalized intelligence, ethics, and so on. From my perch, I don’t see how, from this point on, we will even understand the fundamental paradigm that makes them come alive. Yes, alive – here I am taking the Skinnerian behaviorist definition of life – they are alive to the extent of what they will do on their own initiative. Exciting times ahead.
Sandbox – 22mar23
[Happy Spring! If Trump is indicted and his trial starts going badly toward conviction, would that induce other Repubs not to throw their hats in the ring? If so, Trump would be the hands-down favorite for the Republican nomination. So if the Dems really want to run against Trump, would they not, as a tactic, want to throw their case against Trump under the bus? gjr]