
29 March 2023

Comments

scenes

"In the meantime, the well-intended cautionary letter released yesterday will not slow the pace of research one whit."

Well, hell no...although there's always a need for some parties to build a moat around their business model. Obviously, at least thus far, the barrier to entry on this work is low enough to make limitations kind of laughable, unless you've got a worldwide Turing Police looking over everyone's shoulder.

I really have enjoyed these videos

https://www.youtube.com/@ai-explained-

but had to laugh at this referenced doc.

https://moores.samaltman.com/

He starts off with a reasonable-to-argue point:

"My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today."

but...absolutely naturally of course...jibes immediately into this:

"We need to design a system that embraces this technological future and taxes the assets that will make up most of the value in that world–companies and land–in order to fairly distribute some of the coming wealth. "

Notice the word 'design'. The word 'fairly'.

lol. Howza come every time there's a problem on the horizon, the answer is to put together some sort of soviet and get cracking on rules for everybody.

Gregory

"In the meantime, the well-intended cautionary letter released yesterday will not slow the pace of research one whit."

The thought that folks steering the development process will sacrifice their time to market is laughable.

The Estonian Fox

scenes, from your moores ref:

1. This revolution will create phenomenal wealth. The price of many kinds of labor (which drives the costs of goods and services) will fall toward zero once sufficiently powerful AI “joins the workforce.”

2. The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.

3. If we get both of these right, we can improve the standard of living for people more than we ever have before.

So all of the fast-food workers, lawyers, accountants & docs put out of work will do what? And where do I get the funds to pay for "pursuing the life I want"?

Quoting one of RR's participants from a while ago: I'll take "no they won't, no it won't, no we won't" for $800, Alex.

scenes

"The thought that folks steering the development process will sacrifice their time to market is laughable."

I'll say. There's money to be made for some companies, interesting research for others, military advantage for countries. Any disasters will probably come from unguessed causes, as it's all long-term weather prediction at this point.

It would be interesting to know what sort of money the Chinese are putting into purely home-grown GPUs (and whatever follows for machine learning). Weaponizing chip design and export probably will cost the US in the long run, just like weaponizing currencies. I expect the Chinese will have to reinvent the whole food chain but a combination of pride and profit will get them there.

"So all of the fast-food workers, lawyers, accountants & docs put out of work will do what?"

Drive down the hourly rates of the trades they try to move into? Dunno. There's no guarantee that changes in technology result in new fields to work in. To me, it's funny that so many areas viewed as uniquely human because we're-so-damned-smart turned out to not be that hard to duplicate.

I saw a cartoon showing a person using Chat-whatever to generate a report from a PowerPoint slide and the recipient using one to condense the report into a readable graphic. Maybe that's the future of work. Everyone will be in the HR department going to conferences and sending synthetically produced memos.

scenes

https://confusedbit.dev/posts/how_does_gpt_work/

for a 100,000-foot view.

scenes

Some grifters get the boot.

https://www.theverge.com/2023/3/13/23638823/microsoft-ethics-society-team-responsible-ai-layoffs

scenes

Not a bad video.

"GPT-5 Rumors and Predictions - It's about to get real silly"
https://www.youtube.com/watch?v=TkxroMCmpDw

scenes

In the meantime, Joe Rogan AI Podcast.

https://www.youtube.com/watch?v=meu0CoYv3z8

https://www.youtube.com/watch?v=T20CtNuIqg8

scenes

Just thinking about the so-called 'alignment problem'.

"In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers' intended goals and interests." (wikipedia)

1. I think they'd get a lot further if the alignment experts at the AI efforts spent their time aligning their construction with 'truth' rather than 'stuff we think is really really good in modern political terms'.

2. Alignment issues have always existed with suprahuman organization. A corporation, government, or club has abilities beyond an individual and builds its own goals and rewards over time. Perhaps simply looking at law as it's applied to groups is sufficient.

scenes

Small potatoes, but an interesting idea.

https://newsnotfound.com/
