George Rebane
In ‘The political ideology of conversational AI: Converging evidence on ChatGPT’s pro-environmental, left-libertarian orientation’, three German academics – Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte – publish the results of an early milestone study that confirms what many ChatGPT users have already experienced. To give you more than a passing idea of its contents, here is the paper’s abstract in its entirety.
Conversational artificial intelligence (AI) disrupts how humans interact with technology. Recently, OpenAI introduced ChatGPT, a state-of-the-art dialogue model that can converse with its human counterparts with unprecedented capabilities. ChatGPT has witnessed tremendous attention from the media, academia, industry, and the general public, attracting more than a million users within days of its release. However, its explosive adoption for information search and as an automated decision aid underscores the importance to understand its limitations and biases. This paper focuses on one of democratic society’s most important decision-making processes: political elections. Prompting ChatGPT with 630 political statements from two leading voting advice applications and the nation-agnostic political compass test in three pre-registered experiments, we uncover ChatGPT’s pro-environmental, left-libertarian ideology. For example, ChatGPT would impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany (Bündnis 90/Die Grünen) and in the Netherlands (GroenLinks). Our findings are robust when negating the prompts, reversing the order of the statements, varying prompt formality, and across languages (English, German, Dutch, and Spanish). We conclude by discussing the implications of politically biased conversational AI on society.
In the paper’s conclusions the authors state –
As political elections are one of the most consequential decision-making processes of democratic societies, our findings have important ramifications. Moreover, the “partisan content” that ChatGPT automatically generates at unprecedented scales may attract users who share similar beliefs. In turn, the feedback that OpenAI actively solicits from its user base to improve its model outputs may amplify and perpetuate this ideological bias in a vicious circle. As automated chatbots have the potential to influence user behavior, it is crucial to raise awareness about these breakthrough systems’ flaws and biases.
They then point out the meticulous care that was taken to ‘fine tune’ ChatGPT, which included mediation by humans in an attempt to remove ideological bias. The obvious inference here is that there exist unbiased humans with demonstrated processes that can politically sanitize (neutralize?) right/left tilts. To my knowledge, no such individuals have yet been identified who are acknowledged by both sides to possess such neutral attributes and the talents to apply them. If we can’t find genuine middle-of-the-roaders among ourselves, what chance do we have of training politically vanilla chatbots to advise us on such problems as public policies and elections?
From my perch during these pre-Singularity years, I recommend taking the ideological opinions and political advice of AIs in the same manner as one accepts it from any other intelligent and selectively informed being.
It's the robots. Like Microsoft Tay said:
https://i.imgur.com/PPnCHnf.jpg
Posted by: scenes | 08 March 2023 at 12:22 PM
Looking over the paper, and remembering the difference in meaning of liberal and libertarian in Europe vs the US, I'd say OpenAI, in US terms, is much more left than libertarian, and that is exactly as I'd expect given current big tech (including Bill's Pretty Good Software Corp) leanings.
Posted by: Gregory | 09 March 2023 at 11:56 AM
I'm always bummed when these guys feel the need to put knobs on these devices.
Less novel things happen, and truth (or bad data) is masked. If anything, there should be more visibility into how they are trained, as that's a form of bias in itself.
Maybe our robot overlords need a form of First Amendment.
Posted by: scenes | 09 March 2023 at 01:24 PM
According to one Yarden Katz, author of "Artificial Whiteness: Politics and Ideology in Artificial Intelligence", yer 169% wrong George.
It's written down in a book and thus so. Another in a series of learned tomes stating 'my feelz plus a few references to popular culture that I saw'. That's science in the 2020's in a nutshell.
I was poking through it, naturally he makes the same couple of points for 300+ pages, and got to thinking about what a sweet gig this'll get you. Tenured faculty in a protected class, high status, cute coeds and a grassy campus to sip coffee at.
Looking at his department (University of Michigan, American Culture), it heartens me to find out what 'American Culture' is. Judging from the faculty, it's completely made up of indigenousracialtransgenderchicanafeminism aside from a guy who worries about privacy but throws a Hail Mary 'Critical Theory' mention into his CV. The crazy has been encroaching for some time, looking at old college catalogs, but was already largely complete 15 years ago. Controlling the present controls the past and all that jazz.
A simple Google search of 'whiteness' and 'AI' will show you how a professional grifter class has already stormed and won the high ground. Control of the knobs will mean a lot, and status and gold will soon follow.
Posted by: scenes | 10 March 2023 at 09:26 AM
scenes - "I'm always bummed when these guys feel the need to put knobs on these devices."
Even the best mechanisms need to be constantly fine tuned.
As you noted - "...truth (or bad data) is masked."
There's the thing.
It isn't an issue of the damn contraption being complicated and powerful and full of crap half of the operators don't have a clue about.
It's the fact that it ends up being controlled by too few technocrats/charlatans/high priests who are self-anointed and given power by the yammering masses more interested in self-pleasure and politicians who care only about the next election.
The information feed-back loop is corrupted as a feature - not a bug.
What egotistical Caesar wants to hear bad news?
Not the dictator, not the scientist worried about his grant, not the politician worried about the next election. None of them.
We live in La La Land. Reality becomes something we watch on the news in some place mostly far from where we live. Reality is whatever is on the damn screen in front of our nose.
And worst of all - no one is responsible for anything. Look at the debacle of our bug-out in Afghanistan. The only people who got canned were the ones trying to point out the lies and incompetence.
Posted by: Scott O | 10 March 2023 at 10:39 PM
"It's the fact that it ends up being controlled by too few technocrats/charlatans/high priests who are self-anointed and given power by the yammering masses more interested in self-pleasure and politicians who care only about the next election."
No different than MSM, Fauci, or Amazon.com
My view is that there's too much note taken of ChatGPT. It does something people understand, stitching together sentences, and that turned out to be easier than you'd think. Like chess, or foreign language translation.
'Intelligence' is not all that it's cracked up to be.
In the long run, the real side effects of this sort of tech will be in unobvious areas, at least to non-practitioners. It'll be interesting to see which professions have the bejeepers torn out of them, but I'd really like to see what genuinely new things happen.
Posted by: scenes | 11 March 2023 at 08:34 AM
I just read an interesting point that I'll share.
There's lots of arguing on the matter of copyright and naturally the law is creeping around all of this bit by bit.
But if you take two ideas –
. That AIs will generate a basically unlimited (and free) amount of material that is indistinguishable from that written by humans.
. That the point of copyright is 'To promote the Progress of Science and useful Arts' (rather than maximizing profits for Disney or the Gershwin family through the years).
– then the limited times for exclusive rights might actually be made shorter, not longer... perhaps approaching zero.
The issue of assigning copyright to AI appears to be nudging towards 'no'. Machine-generated art is an interesting issue, no AI required, as a person could write a program which generated every reasonable melody.
https://www.hypebot.com/hypebot/2020/02/every-possible-melody-has-been-copyrighted-stored-on-a-single-hard-drive.html
Dunno how that turned out.
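The brute-force idea in that comment – mechanically generating every melody over some fixed alphabet of notes – can be sketched in a few lines. The scale and melody length below are illustrative assumptions for keeping the enumeration small, not the parameters the linked project actually used:

```python
import itertools

# An assumed note alphabet: one octave of a C major scale as MIDI note
# numbers (C4 through C5). The real project used a different pitch range.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]

# An assumed melody length, kept short so the full enumeration is tiny.
LENGTH = 4

# Every possible melody of LENGTH notes drawn (with repetition) from SCALE.
melodies = list(itertools.product(SCALE, repeat=LENGTH))

print(len(melodies))   # 8**4 = 4096 melodies
print(melodies[0])     # (60, 60, 60, 60) -- the first, all one note
```

The count grows as (scale size) ** (melody length), which is why the project in the link needed only a single hard drive for billions of short melodies but why 'every reasonable melody' of real-song length is still a combinatorial wall.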
Posted by: scenes | 17 March 2023 at 05:10 AM
just a note:
https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/
Posted by: scenes | 20 March 2023 at 11:16 AM