
AI could lead to human extinction

A group of tech leaders has issued an alarming new statement. Their warning makes sense, but opinions differ.

Street in the once-bustling city of Pripyat, Ukraine, decades after a reckless Soviet experiment blew up the aging RBMK reactor at Chernobyl. Is this what awaits us? Source: Jorge Franganillo, https://www.flickr.com/photos/franganillo/ (CC BY-SA 2.0)

A 22-word warning of extinction

AI heavyweights, including Elon Musk, had already warned of the dangers of artificial intelligence in an open letter and called for a temporary moratorium. Their main concern was that artificial intelligence could become smarter than humans, and therefore unpredictable, and begin to pursue ends of its own.

In a new statement, tech leaders echo these concerns. This time the signatories include none other than Sam Altman, CEO of OpenAI, the company behind ChatGPT. Altman is joined in this call by Demis Hassabis, CEO of Google DeepMind (the lab behind AlphaGo and AlphaFold, among other things), and by leading AI researchers Geoffrey Hinton and Stuart Russell.
Notable absentees (so far) are Elon Musk and the CEOs of other large companies heavily invested in AI, such as Meta (formerly Facebook) and Apple.

The statement is only 22 words long and reads in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” In other words: preventing AI-induced extinction deserves the same global priority as preventing pandemics and nuclear war.

What is the risk of extinction?

An artificial intelligence that surpasses human intelligence hundreds of times over can be compared to a human confronting an anthill. We don’t worry much about ants. When they become a nuisance, for example because they have laid a scent trail to the sugar bowl and are siphoning its unhealthy contents into their storerooms with military efficiency, we set out an ant bait box or pour kerosene into the nest.

Of course, we humans are far more troublesome pests than ants; after all, we are much smarter. We can hit the power button, drop a nuclear bomb on the AI, or blow up the Earth (AI included) with characteristically human stupidity. The average ant can do none of that. So there is a chance that an AI would, at best, lock us up for our own good in a kind of terrarium, for example a virtual world. In the worst case, we cease to exist as a species.

How does AI ‘think’ about it?

We asked ChatGPT (GPT-4) what the biggest threat to the world is.

In the generated text, GPT-4 identifies itself as human. It is not clear whether GPT-4 has crossed some threshold of self-awareness, but this is most likely an artifact of training on millions of texts written from a human perspective.
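For readers who want to repeat the experiment programmatically rather than through the chat interface, here is a minimal sketch using the openai Python package. The prompt wording and the "gpt-4" model identifier are our assumptions, not necessarily the exact setup used for the answer above.

from openai import OpenAI

# Minimal sketch (openai package, v1.x API style); assumes the
# OPENAI_API_KEY environment variable is set. The prompt wording
# below is our own.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "What is the biggest threat to the world?"}],
)

# Print the model's answer.
print(response.choices[0].message.content)

Note that the answer will vary from run to run: the model samples its output, so two people asking the same question will generally get different texts.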

How real is the danger?

Some think it will take decades, perhaps even a century, before artificial intelligence reaches the AGI (artificial general intelligence) stage. Personally, I do not share this view: I think we are much closer to that point than most people assume, perhaps only months or a few years away. We could live with an intelligence comparable to that of a human being.

It will be a whole different story if this develops into a superintelligence. An unscrupulous superintelligence could, for example, manipulate people into releasing it. Even human psychopaths find it easy to manipulate people: convicted murderer and classic psychopath Joran van der Sloot receives money from female admirers worldwide. Now imagine a super-smart entity that has cracked the secrets of human psychology and sees people as a kind of handy meat robots.

Safety precautions

That is why it is good to set global safety requirements now, so that we can harness the many positive sides of powerful AI, such as solving hard problems, without the negative sides having devastating consequences. The upsides are enormous: AI is already being used to develop medicines and to crack difficult scientific problems. Unfortunately, it is also being used by criminals, for example to clone the voice of an acquaintance and persuade a victim to transfer large sums of money, or to send convincing personalized spam.

Such a safeguard could work either by somehow baking in a code of ethics, such as the Three Laws of Robotics of the late Isaac Asimov, or an equivalent of the human mirror neurons that make us identify with other people, or by preventing such an AI from manipulating the outside world directly.
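To make the idea of a baked-in code of ethics concrete, here is a deliberately naive sketch: every action the AI proposes passes through a hardcoded filter loosely modeled on Asimov’s first two Laws. All names here (Action, vet, the boolean flags) are hypothetical and invented for illustration; no real AI system is governed this way today.

from typing import Callable, NamedTuple, Optional

class Action(NamedTuple):
    description: str
    endangers_human: bool           # would this action harm a human?
    countermands_human_order: bool  # does it disobey a human instruction?

# Ordered rules: each returns a reason if the action is forbidden, else None.
RULES: list[Callable[[Action], Optional[str]]] = [
    lambda a: "First Law: may not injure a human" if a.endangers_human else None,
    lambda a: "Second Law: must obey humans" if a.countermands_human_order else None,
]

def vet(action: Action) -> tuple[bool, str]:
    """Check a proposed action against the rules; the first violation wins."""
    for rule in RULES:
        reason = rule(action)
        if reason is not None:
            return False, reason
    return True, "permitted"

print(vet(Action("disable the hospital's power grid", True, False)))
# (False, 'First Law: may not injure a human')

The catch, of course, sits in those boolean flags: deciding whether an arbitrary action “endangers a human” is itself an unsolved AI problem, which is exactly why a simple rulebook is no substitute for the global safety requirements argued for above.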
