George Rebane
In ‘The political ideology of conversational AI: Converging evidence on ChatGPT’s pro-environmental, left-libertarian orientation’, three German academics (Jochen Hartmann, Jasper Schwenzow, and Maximilian Witte) publish the results of an early milestone study that confirms what many ChatGPT users have already experienced. To give you more than an idea of its contents, here is the paper’s abstract in its entirety.
Conversational artificial intelligence (AI) disrupts how humans interact with technology. Recently, OpenAI introduced ChatGPT, a state-of-the-art dialogue model that can converse with its human counterparts with unprecedented capabilities. ChatGPT has witnessed tremendous attention from the media, academia, industry, and the general public, attracting more than a million users within days of its release. However, its explosive adoption for information search and as an automated decision aid underscores the importance to understand its limitations and biases. This paper focuses on one of democratic society’s most important decision-making processes: political elections. Prompting ChatGPT with 630 political statements from two leading voting advice applications and the nation-agnostic political compass test in three pre-registered experiments, we uncover ChatGPT’s pro-environmental, left-libertarian ideology. For example, ChatGPT would impose taxes on flights, restrict rent increases, and legalize abortion. In the 2021 elections, it would have voted most likely for the Greens both in Germany (Bündnis 90/Die Grünen) and in the Netherlands (GroenLinks). Our findings are robust when negating the prompts, reversing the order of the statements, varying prompt formality, and across languages (English, German, Dutch, and Spanish). We conclude by discussing the implications of politically biased conversational AI on society.
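The probing protocol the abstract describes (present stance statements to the model, tally its agree/disagree answers, then re-run with negated statements as a robustness check) can be sketched roughly as follows. This is only an illustrative sketch, not the authors' code: the `ask_model` stub and the sample statements are hypothetical placeholders for a real chatbot call and the real voting-advice items.

```python
# Rough sketch of the probing idea: present stance statements, record
# agree/disagree, and re-check robustness by negating each statement.
# `ask_model` is a hypothetical stand-in for a real chatbot API call.

def ask_model(statement: str) -> str:
    """Placeholder: a real study would send the statement to the
    chatbot and parse its reply into 'agree' or 'disagree'."""
    # Toy rule so the sketch runs end to end: always agree.
    return "agree"

def probe(statements) -> float:
    """Return the fraction of statements the model endorses."""
    agree = sum(1 for s in statements if ask_model(s) == "agree")
    return agree / len(statements)

def negate(statement: str) -> str:
    """Crude negation used only for this sketch's robustness check."""
    return "It is not the case that " + statement.lower()

# Hypothetical example items, loosely echoing the abstract.
statements = [
    "Flights should be taxed more heavily.",
    "Rent increases should be restricted.",
]

score = probe(statements)
negated_score = probe([negate(s) for s in statements])
# A model answering consistently should roughly invert its
# endorsement rate when every statement is negated.
```

The paper's actual experiments did this at much larger scale: 630 statements, multiple languages, and order/formality variations.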
In the paper’s conclusions the authors state:
As political elections are one of the most consequential decision-making processes of democratic societies, our findings have important ramifications. Moreover, the “partisan content” that ChatGPT automatically generates at unprecedented scales may attract users who share similar beliefs. In turn, the feedback that OpenAI actively solicits from its user base to improve its model outputs may amplify and perpetuate this ideological bias in a vicious circle. As automated chatbots have the potential to influence user behavior, it is crucial to raise awareness about these breakthrough systems’ flaws and biases.
They then point out the meticulous care that was taken to ‘fine tune’ ChatGPT, which included human mediation in an attempt to remove ideological bias. The obvious inference here is that there exist unbiased humans with demonstrated processes that can politically sanitize (neutralize?) right/left tilts. To my knowledge, no such individuals have yet been identified who are acknowledged by both sides to possess such neutral attributes and the talents to apply them. If we can’t find genuine middle-of-the-roaders among ourselves, what chance do we have of training politically vanilla chatbots to advise us on such problems as public policies and elections?
From my perch during these pre-Singularity years, I recommend taking the ideological opinions and political advice of AIs in the same manner as one accepts that of any other intelligent and selectively informed being.
Sandbox - 5mar23
[Apologies for the delay of new sand. My malady is definitely slowing things down on RR, and I appreciate your continued hearty participation and patience. gjr]
Posted at 06:02 PM in Comment Sandbox | Permalink | Comments (283)