You should not tell an AI bot everything
Law professor Rob Nicholls of the University of New South Wales in Sydney, Australia, warns of the risks of using ChatGPT and other AI bots in business processes. In a post on TechXplore, he advises that users should never feed ChatGPT confidential information.
Anyone who has used ChatGPT will probably have been pleased with the accuracy and usefulness of its answers. In the process, however, several important aspects are often overlooked, and it is these that Nicholls now points out.
According to him, the main risk is that the AI needs a well-formulated question in order to produce a well-founded answer. The probability is therefore high that such a question contains sensitive or even secret information that should not be made accessible to the AI.
The content of that very question would then, with almost 100 percent certainty, become part of a future training data set. This could cause all kinds of problems.
For example, it is possible to inadvertently infringe copyright if protected information is disclosed. Violations of trade secrets or of data protection rules such as the GDPR are also conceivable.
Samsung, for example, recently ran into problems because employees had used ChatGPT as part of their code development process. According to Nicholls, this is quite tempting: ChatGPT's automation can significantly reduce the programming effort in software development projects.
ChatGPT can even automatically detect the programming language in use and improve existing code. That is how internal Samsung code ended up in OpenAI's training data set.
On top of that, Samsung employees also used the new GPT-4 version of ChatGPT to take meeting notes. GPT-4 offers a very accurate speech-to-text feature, making it an easy way to transcribe work meetings and even produce the minutes before a meeting has ended.
Nicholls therefore suggests carefully reviewing the inputs to an AI bot beforehand. If a prompt contains data that is worthy of protection from any point of view, it should not be sent as-is. Material that would otherwise never be shared outside the company should not become the basis of an AI request.
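Nicholls does not describe what such a review might look like in practice, but a minimal sketch of the idea is easy to imagine: scan a prompt for obviously sensitive patterns and redact them before anything leaves the company. The patterns and placeholder names below are illustrative assumptions, not part of Nicholls' advice; a real deployment would rely on proper data-loss-prevention or PII-detection tooling rather than ad-hoc regular expressions.

```python
import re

# Illustrative patterns for obviously sensitive strings (assumptions,
# not an exhaustive or production-grade list).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of known sensitive patterns with placeholders
    before the prompt is sent to any external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    # Hypothetical prompt containing an API key and an email address.
    raw = ("Please review this config: api_key=sk-abcdef1234567890XYZZ, "
           "contact jane.doe@example.com")
    print(redact(raw))
    # -> Please review this config: api_key=[API_KEY REDACTED],
    #    contact [EMAIL REDACTED]
```

Such a filter only catches what its patterns anticipate; the broader point of Nicholls' advice stands regardless: anything that cannot be safely redacted should simply not be sent.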
In Italy, the authorities even decided to “ban” ChatGPT over privacy concerns. Their main argument was that the way ChatGPT collects data violates European data protection law.
It now looks as though Italy, in line with other European countries, will move away from this approach. The only change likely to be required is age verification (users must be over 18).