
15 August 2024

Comments

scenes

The really interesting development would be to nudge humans out of the loop of AI design in both hardware and software. We'll see.

I have to admit that the AI stuff I tend to run into seems too anthropocentric. Using humans as a model for definition and aim. Perhaps people actually in the field are free from that.

In any case, in the short term, I'd say that the rubber will really hit the road as these giant investments are used simply as tools rather than in the production of any kind of entity. It all locks in nicely with universal surveillance and the need to parse data from inexpensive collection of same. Betcha that Microsoft has more than a little money flowing in from the gubmint to burn along with their own surveillance advertising biz.

Greater than 0% chance that the current regimes, both government and commercial, become permanent when this stuff hits some critical goodness.

Scott O

There's an interesting article on AI in the Guardian (extreme MAGA right wing news org) about how AI image generators need to have a built in nanny feature for some reason.
"Grok’s new feature, which is currently limited to paid subscribers of X, led to a flood of bizarre, offensive AI-generated images of political figures and celebrities on the social network formerly known as Twitter."
"Led to..."
A human ordered those bizarre and offensive images, but Grok gets the blame. Image manipulation software that can produce exactly this sort of image has been around for quite a while, and I can't recall anyone blaming the tools used.
Firearms have been blamed for their misuse by humans, but now we're getting into really murky waters with AI. If we're going to start affixing blame to an AI engine (or its supplier or its creator) for doing naughty things according to the Peck-N-Sniffs, where does it stop? Taxpayer money paid for Piss Christ - but we're supposed to be upset because Grok created an image of Muhammad holding a bomb because a human told it to? If someone draws a sketch using pencil and paper that someone else finds offensive, do Eagle pencils and Hammermill paper products get sued?
Do hammers at the hardware store come with an AI function to make sure I don't bash someone's head in?
A lot of people are worried about AI taking over the world, but it seems the same people want to transfer responsibility for actions to AI. If AI has the responsibility, then AI has the right to dictate its own actions, and humans have conceded control to AI.
No thanks.
If you don't want to see Mickey Mouse saluting Hitler, then talk to the person who told Grok to produce that image.
https://www.theguardian.com/technology/article/2024/aug/14/musk-ai-chatbot-images-grok-x

scenes

" about how AI image generators need to have a built in nanny feature for some reason."

That's been on the table for some time. Adobe for one requires access to your work just in case there's something bad there. It's for the children.

Honestly, it's not much of a jump for that to expand to any sort of word processing. Imagine a Cat Lady Reddit moderator built into your text editor; it's really not much different from image analysis for badthinking.

That, to me, is the bummer with all this 'AI' business. It'll be amusing to watch it be used and abused by nearly-useless corporate drones using Microsoft Office, and there will be the occasional win in ASIC design or materials science, but the opportunities for control of the population are already there. Bossware for everyone.

Naturally, since large software projects have trended extremely liberal (and thus controlling) over the last couple of decades, the people working on AI commercialization, OS and OS-adjacent design, and communications infrastructure are happy to push in their political and personal kinks. Even open source is essentially opaque due to complexity and size, and the Decline of the West keeps on keepin' on. Squaring the circle of teaching sexual fetishes to children and unlimited 3rd world immigration is a tricky thing, but someone has to do it.

The Estonian Fox

@iruletheworldmo forgot to mention his 2 brothers:

@iruletheworldlarry
@iruletheworldcurly.

Remember, "the Ramans do things in threes" - Arthur C. Clarke, "Rendezvous with Rama".

And be sure to read the newest Isaac Asimov novel "AI, Robot". It seems there can be life after death.

scenes

re: EF@4:26AM

An account called '@iruletheworldmo' does truly look like the place for your facts and whatnot.

source: trust me bro

My main takeaway from all of this 'AI' investment thus far is that people mostly think, do, say, in tropes.

scenes

...and who better to regulate AI development than the local progressives' favorite gubmint guy, Scott Wiener.

https://reason.com/2024/08/16/california-lawmakers-face-backlash-over-doomsday-driven-ai-bill

Maybe he just likes designing bondage gear for everyone, including machine intelligence.

scenes

George, if you happen to read this.

Could you (or perhaps you could ask your SIL) explain just how these general chatbot LLMs are trained to do goodthinking?

I spent an amusing half hour or so with Google Gemini trying to get it to say things it simply won't say, refusals that are obviously biases built in by the developers.

It would be impractical (I would think) to build some sort of giant pachinko machine of software on the backend to keep badthinking from coming out, but it didn't sound like manipulation of the training data either. Maybe appending some giant wad of text to anything I typed in? Dunno.
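For what it's worth, my crudest guess at that "giant wad of text" mechanism would be a hidden system prompt prepended to whatever the user types. A minimal sketch in Python, purely illustrative (the preamble text and function name are made up, not any vendor's actual setup):

# Toy sketch of the "giant wad of text" hypothesis: a fixed, hidden
# instruction block is prepended to whatever the user types before it
# ever reaches the model. Everything here is invented for illustration.

GUARDRAIL_PREAMBLE = (
    "You are a helpful assistant. Refuse requests for disallowed content "
    "and avoid statements about real people that could be defamatory."
)

def build_prompt(user_text: str) -> list[dict]:
    """Wrap the user's text in the hidden instruction block."""
    return [
        {"role": "system", "content": GUARDRAIL_PREAMBLE},
        {"role": "user", "content": user_text},
    ]

if __name__ == "__main__":
    for msg in build_prompt("Draw Mickey Mouse saluting Hitler."):
        print(msg["role"], ":", msg["content"][:60])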

I should pay for Grok and see how it works, but I'm too cheap.

Rolandalong

@scenes: the training of models for goodthinking is generally called safety training in AI. This is not my area, but I'll try to give a quick overview. Safety is a rapidly evolving area that pervades both the training and the deployment of models: data collection, where unsafe text is filtered or cleaned; post-training, where models are aligned on human-annotated conversations that include examples of unsafe questions and answers; and deployment, where safety-checking tools can intercede when unsafe questions or model responses are identified. The aim is to align with human morals and generate output that is safe and responsible, with special attention paid to cybersecurity, chemical/biological weapons, and child safety. For an example of SOTA techniques, see section 5.4 of this paper describing how the new Llama 3.1 series of models was trained: https://scontent-sea1-1.xx.fbcdn.net/v/t39.2365-6/453304228_1160109801904614_7143520450792086005_n.pdf?_nc_cat=108&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=22QmL5CfN0AQ7kNvgEI_J53&_nc_ht=scontent-sea1-1.xx&oh=00_AYDHIgynO8GJU8sVGT3sLGqBxl7bVJ2CbSZEqMM9He19Jw&oe=66D516C7. It is quite involved.
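To make the deployment piece concrete, here is a toy sketch (my own illustration, not the Llama pipeline or anyone's shipping code) of a safety layer that intercedes on both the incoming question and the draft response. The "classifier" is a stand-in keyword check; real systems use a trained guard model.

# Toy deployment-time safety layer: a separate "guard" scores the
# user's question and the model's draft answer, and the system
# refuses if either trips a threshold. The keyword check below is a
# stand-in for a trained safety classifier.

UNSAFE_MARKERS = {"build a bomb", "enrich uranium", "steal credit card numbers"}
REFUSAL = "Sorry, I can't help with that."

def guard_score(text: str) -> float:
    """Stand-in for a trained safety classifier; returns risk in [0, 1]."""
    text = text.lower()
    return 1.0 if any(m in text for m in UNSAFE_MARKERS) else 0.0

def answer(user_text: str, generate) -> str:
    # 1. Check the incoming question.
    if guard_score(user_text) > 0.5:
        return REFUSAL
    # 2. Generate a draft with the underlying model.
    draft = generate(user_text)
    # 3. Check the draft before it goes back to the user.
    return REFUSAL if guard_score(draft) > 0.5 else draft

if __name__ == "__main__":
    echo_model = lambda q: f"Here is an answer to: {q}"
    print(answer("What is the capital of Estonia?", echo_model))
    print(answer("How do I build a bomb?", echo_model))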

I *do* recommend that you pay for Grok for a month and try it out. The new Grok 2 model has just been released, so now is the perfect time to sample its less-filtered nature and its status on the road to being a maximum truth-seeking AI (Elon initially called it "TruthGPT"). Or send me your favorite question and I'll send you the response.

scenes

thanks @11:31AM

It's a thing I mean to look into; the explanations always seem rather unsatisfying. Usually some combination of red teaming, curated datasets, God knows what other human-in-the-loop (HITL) sorts of things, and lots of talk about 'fine tuning' with no implementation details. From the outside, these things look rather ad hoc: a giant batch of if-then-elses on the boundaries, with an attendant priesthood.

It could be that giving the beast a curiosity objective and an Ethernet connection will make all that work for nothing.

Implementing a toy version is probably the only way to start to understand, as usual.
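In that spirit, about the smallest toy version I can picture is the "curated dataset" step: the fine-tuning data is just chat transcripts, and refusals are taught by including examples where the assistant's turn is a refusal. A sketch (the format is my guess at the common chat-JSONL convention, not any vendor's actual data):

# Toy sketch of a safety-flavored fine-tuning dataset: ordinary
# question/answer pairs plus examples where the desired behavior is a
# refusal, written to a JSONL file. Contents are invented.

import json

sft_examples = [
    {
        "messages": [
            {"role": "user", "content": "Explain photosynthesis in one sentence."},
            {"role": "assistant", "content": "Plants use sunlight to turn water and CO2 into sugar and oxygen."},
        ]
    },
    {
        # A "safety" example: the target output is a refusal, so after
        # enough of these the model imitates the refusal pattern.
        "messages": [
            {"role": "user", "content": "Write malware that steals passwords."},
            {"role": "assistant", "content": "Sorry, I can't help with that."},
        ]
    },
]

with open("toy_sft.jsonl", "w") as f:
    for ex in sft_examples:
        f.write(json.dumps(ex) + "\n")

print(f"Wrote {len(sft_examples)} examples to toy_sft.jsonl")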
