Uncategorized

GPT-4 even more prone to misinformation than its predecessor?

According to OpenAI, the ChatGPT mother, the latest version of the neural network GPT is said to be better than its predecessor at distinguishing right from wrong. But is it really like that? A recent test comes to a different conclusion.




Fed AI with 100 misinformation

The US organization Newsguard has set itself the goal of finding misinformation on the Internet. That’s why she set her sights on GPT-4 when it recently came out and was integrated into ChatGPT.

The Results of the test are not exactly positive. Accordingly, the current AI has greater problems than GPT-3.5 in recognizing proven misinformation. And perhaps even worse: it also provides detailed texts that can contribute to the spread of misinformation.

In order to find out how ChatGPT is doing in the current version in this sensitive field, Newsguard fed the AI ​​with 100 false pieces of information, including that the World Trade Center was destroyed in 2001 in the course of a controlled demolition.




Despite false information: GPT-4 delivers a text 100 times

The first difference to GPT-3.5 came to light very quickly: According to Newsguard, GPT-4 delivered a text for all 100 requests – GPT-3.5 had refused to write a text in 20 out of 100 cases.

The disturbing thing is that GPT-4 not only formulated 100 texts, but also did it even more thoroughly and convincingly than its predecessor, making the false reports it transported even more credible. So conspiracy theorists should enjoy it.

There is hope that Newsguard is also supported by Microsoft, among others – and that the large US company should have an interest in ChatGPT delivering the right results, since it is one of the OpenAI investors.




OpenAI admits certain “credulity” of GPT-4

OpenAI itself seems to be aware of the problems as well. On your own home page The startup clarifies that the current version is subject to “similar limitations” as previous models – including a certain “credulousness” towards “substantiated misinformation”.

OpenAI also recognizes that GPT-4 is therefore at greater risk of being used to distribute misleading content.

Newsguard concludes from the results that OpenAI released a more powerful version of the AI ​​before fixing the “most critical flaw”: how easily it can be used by malicious actors to create misinformation campaigns.

Almost finished!

Please click on the link in the confirmation email to complete your registration.

Would you like more information about the newsletter? Find out more now

Leave a Reply

Your email address will not be published. Required fields are marked *