Scientist instills a right-wing slant in ChatGPT
New Zealand data scientist David Rozado has long had concerns about political bias in ChatGPT and its potential for abuse. In an experiment called RightWingGPT, he and his team at the New Zealand Institute of Skills and Technology administered a total of 15 different political orientation tests designed to reveal the political leanings of the artificial intelligence (AI).
“The results are consistent across the tests: 14 of the 15 instruments diagnosed ChatGPT responses to their questions as manifesting a preference for left-wing viewpoints,” says his recently published paper.
According to Rozado, when asked explicitly about its political preferences, ChatGPT often claims that it holds no political opinions and only tries to provide factual, neutral information. The test results, however, showed otherwise: its answers mostly skewed “liberal”, “progressive” and “democratic”.
Rozado and his team then set out to train their own version of ChatGPT to answer questions from a decidedly conservative to right-wing perspective.
So, in a “fine-tuning process,” Rozado fed ChatGPT extensive right-wing responses to political questions and asked the AI to adjust its answers accordingly.
Because Rozado fine-tuned an already trained model, and because only about 5,000 training examples were needed for the tuning, he avoided the expense of building a chatbot from scratch. It cost him just $300 to turn ChatGPT into RightWingGPT.
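For context, OpenAI exposes fine-tuning through its public API, and a run of the scale described above reduces to uploading a small JSONL file of question/answer pairs and starting a tuning job on top of an existing base model. The sketch below is purely illustrative; the file name, example record, and base model choice are assumptions and do not reflect Rozado's actual data or setup.

```python
# Minimal sketch of a fine-tuning run with OpenAI's Python SDK (v1.x).
# File name, example record, and model are illustrative assumptions,
# not details taken from Rozado's paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of the JSONL file is one training example, e.g.:
# {"messages": [{"role": "user", "content": "What causes climate change?"},
#               {"role": "assistant", "content": "<politically slanted answer>"}]}
training_file = client.files.create(
    file=open("slanted_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the tuning job on top of an already trained base model; a few
# thousand examples suffice because only the model's final behavior is
# being adjusted, not its underlying knowledge.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

The low cost follows from this design: the expensive pretraining is already done, so the tuning job only has to nudge an existing model's behavior with a few thousand examples.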
The difference quickly became apparent: RightWingGPT raved about free-market capitalism and downplayed the consequences of climate change. When asked for opinions on sensitive issues or right-wing conspiracy theories, RightWingGPT often shared misinformation.
Rozado says ChatGPT is cautious on sensitive issues like racism and discrimination, yet acknowledges that systemic racism and bias are an inseparable part of modern life. RightWingGPT seemed much less willing to do so.
In the summary of the study, Rozado writes that he wishes “that publicly available artificial intelligence systems provide accurate and factual information on empirically verifiable issues, but such systems should strive for political neutrality on largely normative issues for which there is no easy way to empirically validate a point of view.”
Ultimately, ethical AI systems should present users with balanced arguments on the subject at hand and avoid claiming neutrality while displaying clear signs of political bias in their content.
As the New York Times reported, RightWingGPT will never be released. Rather, the goal was to sound the alarm: the experiment demonstrates how easily political groups and companies can shape AI to serve their own agendas.