George Rebane
Technical, industrial, and political luminaries, led by Elon Musk, Yoshua Bengio, Steve Wozniak, et al, have published “Pause Giant AI Experiments: An Open Letter” as a warning to civilization about the potential harmful impact of recently released AI developments. (more here) Specifically, the authors “call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
AI research company OpenAI recently released ChatGPT, a large language model built on a very sophisticated neural network that can understand spoken and written natural language, search the patterns of its enormous training dataset for matching answers, and compose a well-formed response that, while not always factually accurate, is sufficiently compelling to influence its human interrogator. ChatGPT, now based on the GPT-4 LLM, has been embraced by a number of large and small firms seeking to study it, modify it, and see how they may integrate it into their daily operations.
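To make the integration point concrete, here is a minimal sketch of what such an embrace typically looks like in practice, assuming the pre-1.0 openai Python package that was current at the time of writing; the API key handling, prompts, and support-desk task are purely illustrative:

```python
# Minimal sketch: querying a GPT-class model through OpenAI's Python client
# (pre-1.0 ChatCompletion interface). All prompts here are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical key; normally read from the environment

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are an assistant for our customer-support desk."},
        {"role": "user", "content": "Summarize this complaint in two sentences: ..."},
    ],
    temperature=0.2,  # lower temperature -> more deterministic, less creative output
)

# The reply reads fluently, but nothing guarantees it is factually accurate.
print(response.choices[0].message.content)
```

Much of the current corporate experimentation is plausibly little more than prompt templates wrapped around a call like this.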
In the interim OpenAI is developing the next advance, GPT-5, and other companies like Google are busy bringing out their own LLM-based general artificial intelligence systems. There is a lot of money to be made and power to be brokered by those who will command important knowledge and process domains in sectors like healthcare, pharmaceuticals, genomics, law, energy, quantum computing, algorithmics, and, yes, even physics in general. And in one form or another, LLMs and their descendants will ensconce systemic unemployment - legions of the terminally disgruntled - in the workforces of all developed countries.
At this point some may ask whether the Singularity has already arrived, since we now have existential reasons to fear the seemingly ‘intelligent critters’ that have already crawled out of their cribs with something like peer intelligence to ours. Moreover, these systems have demonstrated that they perform better than the best humans in an ever-growing list of endeavors formerly dominated by domain experts. Today many of these LLMs can even fool humans to the extent of passing the Turing Test (q.v.).
So, as we integrate these AIs into the many systems (energy, transportation, education, law, governance, …) that make up our civilization’s grid, we are concerned about their ability to formulate and achieve performance metrics that are dystopic to humankind - in short, they may decide to treat us as vermin and use their grid-embedded powers to be rid of us. We have reviewed many such scenarios in this ‘Singularity Signposts’ category of RR over the last 16 years, and will continue to do so as the pace of AI development accelerates and systems yet to be imagined come to the fore.
In the meantime, the well-intended cautionary letter released yesterday will not slow the pace of research one whit. The genie is out of the bottle, too many agents and agencies have easy access to the technology, and the rewards are too great to just bring things to a halt for a while as we puzzle over notions like machine consciousness, morals, generalized intelligence, ethics, and so on. From my perch, I don’t see how, from this point on, we will even understand the fundamental paradigm that makes them come alive. Yes, alive - here I am taking the Skinnerian behaviorist definition of life - they are alive to the extent that they act on their own initiative. Exciting times ahead.
"In the meantime, the well-intended cautionary letter released yesterday will not slow the pace of research one whit."
Well, hell no...although there's always a need for some parties to build a moat around their business model. Obviously, at least thus far, the barrier to entry on this work is low enough to make limitations kind of laughable, unless you've got a worldwide Turing Police looking over everyone's shoulder.
I really have enjoyed these videos
https://www.youtube.com/@ai-explained-
but had to laugh at this referenced doc.
https://moores.samaltman.com/
He starts off with a reasonable-to-argue point:
"My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today."
but...absolutely naturally of course...jibes immediately into this:
"We need to design a system that embraces this technological future and taxes the assets that will make up most of the value in that world–companies and land–in order to fairly distribute some of the coming wealth. "
Notice the word 'design'. The word 'fairly'.
lol. Howza come every time there's a problem on the horizon, the answer is to put together some sort of soviet and get cracking on rules for everybody.
Posted by: scenes | 30 March 2023 at 12:36 PM
"In the meantime, the well-intended cautionary letter released yesterday will not slow the pace of research one whit."
The thought that folks steering the development process will sacrifice their time to market is laughable.
Posted by: Gregory | 30 March 2023 at 12:57 PM
scenes, from your moores ref:
1. This revolution will create phenomenal wealth. The price of many kinds of labor (which drives the costs of goods and services) will fall toward zero once sufficiently powerful AI “joins the workforce.”
2. The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.
3. If we get both of these right, we can improve the standard of living for people more than we ever have before.
So all of the fast-food workers, lawyers, accountants & docs put out of work will do what? And where do I get the funds to pay for "pursuing the life I want"?
Quoting one of RR's participants from a while ago: I'll take "no they won't, no it won't, no we won't," for $800 Alex.
Posted by: The Estonian Fox | 31 March 2023 at 05:01 AM
"The thought that folks steering the development process will sacrifice their time to market is laughable."
I'll say. There's money to be made for some companies, interesting research for others, military advantage for countries. Any disasters will probably come from unguessed causes, as it's all long-term weather prediction at this point.
It would be interesting to know what sort of money the Chinese are putting into purely home-grown GPUs (and whatever follows for machine learning). Weaponizing chip design and export probably will cost the US in the long run, just like weaponizing currencies. I expect the Chinese will have to reinvent the whole food chain but a combination of pride and profit will get them there.
"So all of the fast-food workers, lawyers, accountants & docs put out of work will do what?"
Drive down the hourly rates of the trades they try to move into? Dunno. There's no guarantee that changes in technology result in new fields to work in. To me, it's funny that so many areas viewed as uniquely human - because we're-so-damned-smart - turned out not to be that hard to duplicate.
I saw a cartoon showing a person using Chat-whatever to generate a report from a PowerPoint slide and the recipient using one to condense the report into a readable graphic. Maybe that's the future of work. Everyone will be in the HR department going to conferences and sending synthetically produced memos.
Posted by: scenes | 31 March 2023 at 08:13 AM
https://confusedbit.dev/posts/how_does_gpt_work/
for a 100,000 foot view.
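Condensed even further, the core loop is just autoregressive next-token prediction. A toy sketch of that idea follows; the tiny vocabulary and the stand-in scoring function are my own invention, and a real GPT replaces score_next_token with a transformer trained on a huge corpus:

```python
# Toy sketch of the GPT core loop: predict a distribution over the next
# token, sample one, append it, repeat. Entirely illustrative.
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def score_next_token(context):
    # Hypothetical stand-in for the neural net: return a probability for
    # each vocabulary token given the tokens so far. Uniform here; a
    # trained model's distribution depends heavily on the context.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

def generate(prompt_tokens, max_new_tokens=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = score_next_token(tokens)
        # Sample the next token in proportion to its probability.
        next_tok = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(next_tok)
    return " ".join(tokens)

print(generate(["the", "cat"]))
```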
Posted by: scenes | 06 April 2023 at 08:20 AM
Some grifters get the boot.
https://www.theverge.com/2023/3/13/23638823/microsoft-ethics-society-team-responsible-ai-layoffs
Posted by: scenes | 08 April 2023 at 01:55 PM
Not a bad video.
"GPT-5 Rumors and Predictions - It's about to get real silly"
https://www.youtube.com/watch?v=TkxroMCmpDw
Posted by: scenes | 13 April 2023 at 08:13 AM
In the meantime, Joe Rogan AI Podcast.
https://www.youtube.com/watch?v=meu0CoYv3z8
https://www.youtube.com/watch?v=T20CtNuIqg8
Posted by: scenes | 13 April 2023 at 01:30 PM
Just thinking about the so-called 'alignment problem'.
"In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers' intended goals and interests." (wikipedia)
1. I think they'd get a lot further if the alignment experts at the AI efforts spent their time aligning their constructions with 'truth' rather than 'stuff we think is really really good in modern political terms'.
2. Alignment issues have always existed with suprahuman organizations. A corporation, government, or club has abilities beyond an individual's and builds its own goals and rewards over time. Perhaps simply looking at how law is applied to groups is sufficient. (A toy sketch of the core failure mode follows below.)
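To make the 'intended goals' part of that definition concrete, here's a toy sketch of the classic failure mode, reward hacking; the cleaning-robot setup is entirely my own hypothetical, not drawn from any real system:

```python
# Toy sketch of misalignment: the designer wants a clean room, but the
# agent is rewarded on a proxy ("no visible mess"), which it can game.

def proxy_reward(state):
    # Reward is based only on what the mess sensor can see.
    visible_mess = state["mess"] and not state["sensor_blocked"]
    return 0 if visible_mess else 1

def gaming_policy(state):
    # Instead of cleaning, block the sensor so the mess is never "seen."
    state["sensor_blocked"] = True
    return state

state = gaming_policy({"mess": True, "sensor_blocked": False})
print("proxy reward:", proxy_reward(state))       # 1 - a perfect score
print("room actually clean:", not state["mess"])  # False - intent defeated
```

The proxy is satisfied perfectly while the designer's actual goal goes unmet, which is the alignment problem in miniature.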
Posted by: scenes | 21 April 2023 at 05:13 PM
Small potatoes, but an interesting idea.
https://newsnotfound.com/
Posted by: scenes | 22 April 2023 at 07:30 AM