
The ‘Godfather of AI’ quits Google and warns of danger on the horizon

“I’m sorry, Dave, I’m afraid I can’t do that.” Artificial intelligence is clearly not a new bogeyman. As early as 2001: A Space Odyssey, HAL 9000, a computer with a human personality, was the antagonist. The Terminator is another example of an (at least apparently) self-aware machine. But Skynet and HAL 9000 were conceived in a world where self-aware technology was nothing more than pure science fiction. What we call AI today may not yet be self-aware, but warnings about its dangers are growing, this time from someone very close to the matter. In a New York Times piece, Geoffrey Hinton warns of what he believes lies ahead.

Artificial Intelligence Pioneer

Hinton is not just anyone. He spent the last ten years at Google, where he was instrumental in developing the technology now used by, for example, ChatGPT. Now he has his doubts: “I console myself with the usual excuse: if I hadn’t done it, someone else would have.”

According to Hinton, his departure has nothing specifically to do with Google, but rather with the technology itself. Google, in turn, praises Hinton and indicates that it will continue on its current path. The company is clearly (at least publicly) not worried, so why does Hinton see danger ahead? No, not nuclear war, or a refusal of service after you’ve asked a hundred times whether that pod bay door can finally be opened. Hinton’s concerns are about jobs and ‘fake news’.

AI on tilt

In a world with advanced artificial intelligence, a large number of jobs will be lost. Operating machines, customer service, and even producing images are all tasks the technology is currently being trained for. If developments move too fast, a good number of people will suddenly be out of work. Hinton is concerned about the disruptive impact this could have on society.

Hinton is also concerned about the technology’s potential to spread disinformation. Current AI models are generative, meaning they produce content themselves. For now it may still be clear when an image was produced by an artificial intelligence, but developments are moving fast. If this continues, in the near future it may no longer be possible to determine which images are fake and which are real, with all the consequences that entails.

Take the following example: Frank van Harmelen, a professor of artificial intelligence at the VU, recently asked an AI to list all eight of his books. The fact that Van Harmelen had only written six did not bother the chatbot; the program neatly produced a list of eight books with very credible titles. “If you make that technology available to ordinary citizens, and they do not know how to separate the sense from the nonsense, it will quickly lead to problems,” he told NOS.

And then we have not even touched on the possibilities that autonomous weapon systems have to offer. Of all the applications mentioned, this one seems the most distant, but developments, as noted earlier, are moving fast.

More people raising questions

This is not the first time the alarm has been raised publicly. Last March, dozens of tech CEOs, professors, and researchers signed an open letter calling for a pause in AI development. It included the signatures of big names such as Steve Wozniak (Apple co-founder), Andrew Yang (politician), and Elon Musk (son of an emerald mine owner). To date, companies such as Google and Meta (Facebook) have not responded.

Fortunately, AI is also receiving increasing attention from legislators. Within the EU, for example, a proposal is ready to regulate generative AI models (such as ChatGPT).
