
How do we protect ourselves from disinformation?

The AI chatbot ChatGPT answers user questions at remarkable speed and with astonishing quality, writes original poems and song lyrics, and even produces HTML or video code. Many find that great – others urge caution. Both reactions are appropriate.

A guest contribution by Kai Gondlach

Alexa and Siri, shopping algorithms and AI in video games have long been part of everyday life. The recently released chatbot ChatGPT is the next stage of this AI development. Thanks to pattern recognition and the lightning-fast matching of a prompt against patterns learned from its training data, such language models create texts that can hardly be distinguished from human ones (see the GIGA interview with ChatGPT).
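To make that "pattern recognition" a little more concrete, here is a minimal toy sketch (my illustration, not ChatGPT's actual architecture): a language model repeatedly predicts the next word from probabilities learned from huge text corpora. This miniature version uses a hand-written probability table instead.

```python
# Hedged toy sketch of the core mechanism behind language models:
# repeatedly sample the next word from learned probabilities. Real models
# learn these probabilities from billions of texts; this table is hand-made.
import random

next_word_probs = {
    "the":     [("weather", 0.5), ("chatbot", 0.5)],
    "weather": [("is", 1.0)],
    "chatbot": [("is", 1.0)],
    "is":      [("nice", 0.6), ("fast", 0.4)],
}

def generate(start: str, max_len: int = 5) -> str:
    words = [start]
    while words[-1] in next_word_probs and len(words) < max_len:
        options, weights = zip(*next_word_probs[words[-1]])
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the weather is nice"
```

Systems like ChatGPT do essentially this at vastly larger scale, with the probabilities computed by a neural network rather than a lookup table.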

What opportunities lie in AI-generated texts?

Language models already write standardized texts such as product descriptions, weather reports, sports results or horoscopes, and take pressure off customer service. Such programs save companies time and money, especially in times of cost pressure on employers and a shortage of skilled workers.
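A hedged illustration of what such "standardized texts" look like: long before large language models, they were generated from structured data with simple templates. The data and wording here are invented for the example.

```python
# Hedged illustration (not the article's tooling): standardized texts like
# weather reports can be generated from structured data with a template;
# language models automate the same task without hand-written templates.
weather = {"city": "Leipzig", "temp_c": 7, "sky": "overcast", "rain_pct": 60}

report = (
    f"In {weather['city']} it is currently {weather['temp_c']} °C and "
    f"{weather['sky']}, with a {weather['rain_pct']}% chance of rain."
)
print(report)
```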

The quality of AI results depends to a large extent on the data the systems are trained on. This is where the world's largest tech companies are in a strong position: they recognized the value of data early on and built up enormous data silos.

The expression "data is the new oil" (source: The Economist), which seemed visionary at the time, also dates from this period. Alphabet (Google), Microsoft, Meta (Facebook), Apple, Amazon and others have amassed gigantic mountains of data over the last two decades – and with them a large lead in the development of AI systems.

About the guest author
Kai Gondlach studied sociology, politics/administrative science and futures research. He is a freelance author, keynote speaker, podcast host and managing director of Leipziger PROFORE Gesellschaft für Zukunft mbH, a young institute for futures research and strategy consulting. As a member of the academic futures-research community, he works in the orbit of UNESCO and the Club of Rome on the implementation of important future topics.

What are the risks of AI-generated texts?

But tools like ChatGPT also make it even easier to generate and multiply propaganda, disinformation or hate campaigns. As we know, the path from a virtual threat to a physical one is not far; see, for example, the Drachenlord case (source: RND) or a lawsuit against Meta in connection with the civil war in Ethiopia (source: Handelsblatt).

In addition, many people rely on Internet forums for household tips or programming help – the latter can, in the worst case, lead to faulty code, which, especially in larger companies, can result in extreme costs or even paralyze critical infrastructure. Accordingly, the coding platform Stack Overflow recently temporarily banned ChatGPT-generated answers.
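To make the risk concrete, here is an illustrative snippet of the kind a forum answer or chatbot might plausibly produce (invented for this article, not a real ChatGPT output): the first version looks correct but is vulnerable to SQL injection; the second shows the fix.

```python
# Hedged, illustrative example of plausible-looking but flawed code.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # BUG: user input is pasted straight into the query string.
    # An input like  ' OR '1'='1  returns every row in the table.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Fix: a parameterized query lets the driver escape the input.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
print(find_user_unsafe(conn, "' OR '1'='1"))  # leaks both rows
print(find_user_safe(conn, "' OR '1'='1"))    # returns nothing
```

A bug like this slipping unreviewed into a company system is exactly the kind of "extreme cost" scenario the paragraph above describes.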


What weaknesses do AI systems have?

As already mentioned, an AI tool can only be as good as the data it is trained on. From a statistical point of view, however, this data is usually incomplete and, in the worst case, distorted or even discriminatory. So it happens again and again that artificial intelligence fails to recognize people with darker skin tones, for example at automatic soap dispensers (source: Deutschlandfunk), or disproportionately classifies them as a threat in security software (source: Spiegel).
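A hedged toy demonstration of the statistics behind this: if a training set contains almost no examples of a minority group, a model can look accurate overall while failing that group completely. All numbers here are invented for illustration.

```python
# Hedged toy illustration (not from the article): a "model" trained on
# skewed data can score well overall while failing a minority group.

# Synthetic dataset: 95% of examples come from group A, 5% from group B.
# Label 1 means "should be recognized".
data = [("A", 1)] * 95 + [("B", 1)] * 5

def majority_model(group: str) -> int:
    # A lazy classifier that only learned the patterns of group A.
    return 1 if group == "A" else 0

correct = sum(majority_model(g) == y for g, y in data)
print(f"Overall accuracy: {correct / len(data):.0%}")   # 95% -- looks fine
b_correct = sum(majority_model(g) == y for g, y in data if g == "B")
print(f"Accuracy on group B: {b_correct / 5:.0%}")      # 0% -- total failure
```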

Microsoft’s chatbot Tay, one of ChatGPT’s predecessors, also made racist comments shortly after its release (source: Die Zeit). Evidently the ethical groundwork needed to prevent such failures before deployment was missing during development – even though it is by now well researched how training data for algorithms can be prepared more neutrally.

Automated, intelligent systems are convenient for us humans, but they also carry the risk that we become too dependent on the beneficiaries of the respective AI. The AI has no purpose of its own, let alone an altruistic motive; it ultimately serves to increase the sales of the companies that control it and its data streams.

We note: the more people are connected to the Internet, the greater the potential benefit – but also the potential harm – of AI-generated texts. Let’s play a round of Black Mirror and look at three scenarios from the not-so-distant future that make the dangers of AI language models more tangible.

3 scenarios: how AI chatbots can cause damage

Scenario 1: The Tinder rip-off

Chris is looking for love and sees a “match” in his app: Toni looks great, has the right taste in music, is the same age and lives in the same city. That could work! Chris immediately writes a message: “Hello Toni, nice to meet you – your favorite Doors album is here on my shelf as a record, what a coincidence!”

A lively chat develops over several days. Chris has butterflies in the stomach, is almost bursting with excitement, and arrives at the agreed place much too early for the first real date. Suddenly Toni writes: “Unfortunately I have to cancel our date – I’ve just had a car accident and I’m not insured. I’ll be in touch!” Of course Chris immediately offers help, with money if necessary. Toni gratefully accepts, Chris transfers 350 euros – no big deal.

What Chris doesn’t know: Toni is an AI and has just pulled the same perfidious trick on 7,000 other people around the world.


Scenario 2: Business Devil

With ChatGPT combined with generative image AIs such as DALL-E and deepfake video tools, a fake person can be added to the first scenario – and even video meetings can be manipulated.

Imagine you and the co-founders of your startup have a pitch appointment with a prominent investor in San Francisco while you are sitting in Berlin – a unique opportunity! Everything goes according to plan until the connection to the other side briefly drops. After a few moments everything is fine again and you continue with the presentation.

Your pitch went perfectly, but the reaction from the other side is unexpected: the investor is completely upset, lists the weaknesses of your business plan and then throws you out of the meeting. Completely perplexed, you fall into an argument among yourselves and bury the startup idea.

What you didn’t know: the whole thing was staged by a competing startup that could not get its own pitch ready in time. Your rival pushed the real investor out of the meeting, showed you a deepfake version of him rejecting you, and later convinced the real investor with your idea.

Scenario 3: Panic due to fear of war

Since the start of the war in Ukraine in February 2022, there has been a veritable tsunami of fake news. Countless videos have circulated that purport to show combat operations or strange speeches by high-ranking military officials. This unsettles citizens and politicians alike, as in the following scenario.

Hundreds of videos of an alleged Russian nuclear first strike on a military base on the Ukrainian-Polish border are circulating on all social platforms. Some show elevated radiation levels being measured in the NATO state of Poland. Western politicians try to assess the situation, but communication with Russia has been severely restricted since the beginning of the war. US nuclear weapons in Europe are therefore brought into position.

By the time politicians and the military have gathered the facts and established that this attack never took place, the real damage has long since been done: mass panic has broken out in parts of the population, at times taking on chaotic and violent traits. The result is deaths and injuries – all because of AI-generated fake videos.

How can I recognize (AI) fake news?

After these three very pessimistic scenarios, the question arises as to how we can protect ourselves, our families and our companies from disinformation. So how do I recognize AI texts? That is not so easy, because AI-generated texts today are roughly as good as the average human text – in other words, not so bad that you could recognize them immediately or automatically.

Language models such as ChatGPT are also being continuously developed. Detection software would probably always lag behind the latest versions. In other words: it is inevitable that more and more AI content will flood all media.
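To see why detection is so hard, here is a deliberately crude sketch of a stylometric heuristic (my illustration, not an existing detector): it flags text with low lexical variety – a weak signal that any newer model easily clears, which is exactly why detectors keep falling behind.

```python
# Hedged sketch of a crude stylometric heuristic, not a real detector:
# it measures lexical variety (type-token ratio). Real detectors estimate
# how "predictable" a text is to a language model, and even those lag
# behind each new model generation -- the article's point.
import re

def type_token_ratio(text: str) -> float:
    words = re.findall(r"[a-zäöüß']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def looks_generated(text: str, threshold: float = 0.5) -> bool:
    # Very low lexical variety is *weak* evidence of templated or
    # generated text; modern models pass easily, so false negatives abound.
    return type_token_ratio(text) < threshold

sample = "the weather today is nice and the weather tomorrow is also nice"
print(type_token_ratio(sample), looks_generated(sample))
```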

So it is high time for a binding journalistic code that makes the use of AI tools like ChatGPT transparent; the Press Code needs an addition here. A notice that a text was written partly or entirely with the help of an AI would then be mandatory, and non-compliance would carry significant penalties – just like the current code.

What do we need as a society to protect ourselves from AI scams?

The greatest need for action is in the education system, politics and, of course, the judiciary. The staff at these central pillars of our society urgently need crash courses in dealing with the digital possibilities of the 21st century if we do not want to be further polarized and alienated by conspiracy myths.

At that point, at the latest, it becomes clear that the classic division of tasks between ministries and within companies often falls short: media literacy is everyone’s issue, more important than ever, and the basis for a mature society.
