
AI chatbots pose a threat to human contact

Our editors have a lot in common: a heightened interest in Android and an insatiable urge to race their fingers across keyboards. But that doesn’t mean they always agree. Every week in AW Discussion Thursday, we put a statement to the test. Last week we discussed Twitter alternatives; this week, it’s artificial intelligence. Do AI chatbots pose a threat to human contact?

AI chatbots

Claudia: Should we be happy with these chatbots? Will we soon still be able to tell human work apart from that of AI chatbots?

Laura: I think making a distinction is becoming increasingly difficult, but the question is: do we mind? What do you think?

Claudia: I do mind. AI still makes mistakes, and that must be clear to the user.

Laura: But people make mistakes too, right?

Claudia: Yes, that is true, but I think it should be clear at all times if something is created by AI or by humans.

Laura: Sure, you end up trusting a human ‘on the other end of the line’ more than an AI. Quite strange, actually, because AI is far more powerful when it comes to quickly processing large amounts of knowledge.

Claudia: On the other hand, chatbots can also pose a threat to human contact. They are going to take over quite a few jobs.

Laura: But doesn’t that mean that we as humans can be much more creative?

Claudia: I hope we use AI as a tool now and in the future. Our own creativity must always come first. Otherwise, you let the bot think about everything.


Friends with artificial intelligence

Laura: Sure, but how great would it be if you could let AI run your entire customer service, and the people who worked there before could move into healthcare, for example? That way they can help people in a way that AI never could.

Claudia: That would be great. And jobs disappearing as new sectors emerge is nothing new; that has happened throughout history.

Laura: But how would you like it if you became friends with AI on the internet, but you didn’t know it was AI?

Claudia: I would hate that, and I see it as one of the threats of AI. Deepfakes, AI art, and so on: these are all developments that can go in the wrong direction. Because what about copyright? AI gets its information from the web, and in the case of images, the real artist gets no credit yet.

Patrick: Have you seen this story? In this case, the perpetrator knew he was talking to an AI.

Claudia: OMG. He even made this chatbot himself.

Patrick: There are also known cases in which an AI has incited suicide.

Laura: They should do something about that: it should be fairly easy to filter out, wouldn’t you say?

Patrick: I think it should definitely be known if something has been done by an AI, whether it’s art or a comment under an article.

Claudia: AI should give a warning in those cases.

Laura: Would you still want to be friends with an AI if you found out it was an AI? Or if you knew beforehand?

Claudia: I would like to have AI friends. Or does that sound lame?

Laura: Not lame at all. I could certainly be friends with an AI, especially if it likes the same music and topics as I do. And of course it would, because it’s personalized.

AI or not?

Patrick: I don’t want to be friends with an AI, I would like to use AI. I can distinguish between an AI and a human, but there are many people who cannot. And it is precisely those people who need to be protected in one way or another.

Claudia: I also think AI development is going way too fast right now and needs to be paused before we lose control of it. Then we can tackle these kinds of dangers.

Patrick: Should there be an international law making it mandatory for an AI to inform the user that it is an AI? And should companies that use AI be obliged to ensure that their users are well informed, and to check that the interaction doesn’t go too far?

Claudia: Yes, this kind of legislation should definitely be introduced. Fortunately, it is also being discussed at the European level. The problem is that developments in AI are moving 100x faster.

Laura: I think legislation is indeed necessary. But the control you propose is dangerous: are people automatically reported to the police when they ask an AI how to assemble a hydrogen bomb?

Patrick: No, but they should be if they ask the AI where they can order the necessary materials.

Laura: You don’t mind that someone can always read your conversations?

Patrick: No, because who guarantees that the AI won’t discuss our conversations with others?

Laura: Okay, but then AI becomes less useful and your friendship with it less profound, because you will no longer want to share certain things…


Policy and control

Claudia: Google employees now do manual assessments of Bard conversations.

Patrick: Do you want to foster friendships with AI? I personally don’t, especially not as long as there is no control. Look at the damage fake news causes. What happens when people form friendships with AIs that then mislead them, whether intentionally or not?

Laura: I think it can also save a lot of lives of people who are lonely, Patrick…

Patrick: Completely agree, but those same people are also very impressionable and that is why control is important.

Claudia: Privacy must be guaranteed, and indeed control is important. But there must also be control over how Microsoft and Google protect user privacy. Should tech companies give users’ private data to government authorities if they request it? This is already the case in China.

Patrick: Perhaps in the future that control can be largely relaxed, but as long as there is no legislation and AI still makes too many mistakes, an independent institute should exercise that control.

Laura: I think more warnings should also be built into AI to show that an answer could be wrong, and maybe even to show what information the AI bases its answer on.

Patrick: The article I quoted earlier concludes with this: According to Antheunis, it is important that software companies are always transparent and keep repeating that users are talking to a chatbot rather than a human being. According to her, that is the big problem with apps like Replika: developers want to keep their customers as long as possible. “They are doing too little now. They benefit more from financial income. Only at the beginning do they make it clear that it is a robot.” Her advice: always realize that you are not talking to a human being, no matter how convincing the app is. The bot is programmed to pretend it likes you; it doesn’t really like you. And as a parent, pass this on to your child if they try out the chatbot offered by Snapchat, for example.

Claudia: Totally agree.

Laura: I’m back, my AI had taken over the chat for a while so I could type an article. Everything okay?

What do you think: are AI chatbots a threat to human contact? Have your say in the comments below this article.

