
ChatGPT also under scrutiny by the Dutch AP

The Dutch Data Protection Authority (AP), which monitors our privacy, is concerned about the rapidly advancing ChatGPT phenomenon. According to the AP, OpenAI, the company behind ChatGPT, stores users' personal data. For the authority, that is reason enough to start an investigation.

Personal information in training data

The Dutch Data Protection Authority is particularly concerned about what exactly happens to the data on which ChatGPT has been trained. This is data that we have all placed on the internet in recent years.

OpenAI and other AI-developing internet companies have used the content of social networks, Wikipedia, forums and other publicly available information to train their neural networks.

The data protection authority fears that this information will be used to compile personal profiles of users. In its first four months, ChatGPT was already used by more than one and a half million people.

The biggest immediate threat, therefore, is that the questions users ask ChatGPT are used to train the model. If those questions are privacy-sensitive, for example about sexual orientation, politics or financial matters, it can have very unpleasant consequences if that information ends up with malicious parties.

The board of the Dutch Data Protection Authority. Source: Dutch Data Protection Authority, press material

What large language models like GPT-4 and PaLM, the model behind Google's Bard, actually do is recombine existing text into new text of their own. In the process, privacy-sensitive information can end up in an answer given to someone else. For that reason, some large companies, such as Samsung, the banks JPMorgan Chase and Deutsche Bank, and of course defense companies like Northrop Grumman, have banned their employees from using ChatGPT.

Wrong information

A second reason why the data protection authority decided to take a closer look at ChatGPT is the possible spread of incorrect information. As the ChatGPT recipe for "delicious" pasta with green turnips and engine oil shows, it is not that hard to get the AI to serve up dangerous nonsense. It is therefore quite logical that the Authority is concerned about this.

And then there is the problem of potentially discriminatory algorithms. These algorithms must meet the requirements of Dutch personal data protection law.

European response

This problem is bigger than the Netherlands alone, as the other European supervisors also realize. They have therefore set up a working group that will develop policy to better deal with the challenges posed by ChatGPT and similar generative artificial intelligence.

The responsible Dutch minister, Franc Weerwind, said the following: "Generative AI, such as the large language model ChatGPT, is a cross-border phenomenon that requires a harmonized approach. That is why the AP attaches great importance to effective joint action by the European privacy supervisors."

Weerwind also trusts that the authority will act accordingly and, in the context of its information task, will speak out if there is reason to do so.
