Europol clearly warns of the risks that can arise from AI language models such as ChatGPT. In a first analysis, the EU police authority sees dangers in fraud through social engineering, disinformation and cybercrime.

Europol: ChatGPT can help criminals

The European Union's police agency has pointed out the risks of AI language models like ChatGPT. A recently published analysis by the Europol Innovation Lab warns, among other things, of new phishing attacks. Criminals could use ChatGPT to create authentic-looking texts. In addition, language models can imitate the writing style of specific people and thus deceive recipients more easily.

ChatGPT could also be used for propaganda and fake news, since messages with the desired narrative can be created much faster and more easily than before. Because only language models were evaluated, the analysis contains no findings on image generators, even though artificially generated images can also fuel fake news.

Another criminal area of application concerns the generation of concrete information. Language models make research easier and provide key information that can be misused by criminals. According to Europol, insider knowledge is no longer necessary to obtain expert information on burglary or terrorist financing.

Europol: prevent ChatGPT abuse

The Europol Innovation Lab concludes that law enforcement agencies have a leading role to play in anticipating and preventing the misuse of AI tools. Language models will continue to evolve and could even become one of the most important criminal business models. It is therefore crucial to monitor the development of AI closely (source: Europol).
