MIT researcher compares the handling of AI to “Don’t Look Up”
MIT professor and AI researcher Max Tegmark is concerned about the potential impact of artificial general intelligence (AGI) on human society. In an essay for Time magazine, Tegmark paints a bleak picture of a future ruled by an AI that can no longer be controlled. He writes, “Unfortunately, I now feel like we’re watching the movie ‘Don’t Look Up’ play out for another existential threat: a universal superintelligence.”
In his view, governments around the world are responding far too laxly to the growing AGI threat. The comparison with the Netflix film “Don’t Look Up” is a drastic one.
After all, the film tells the fictional story of a team of astronomers who spot an Earth-destroying asteroid hurtling toward our planet. The researchers try to warn humanity, only to find that most people don’t want to be bothered. At the end of the film, the Earth is, predictably, destroyed.
Tegmark thinks this story also maps onto the risk posed by AGI. He writes: “A recent poll found that half of AI researchers believe AI has at least a 10 percent chance of wiping out humanity. Since we’ve been pondering this threat and what we can do about it for so long – from academic conferences to Hollywood blockbusters – one might expect humanity to shift into high gear to steer AI in a safer direction than a runaway superintelligence.”
Instead, he argues, the most prominent reactions so far have been a combination of “denial, ridicule and resignation that was so darkly funny that it deserved an Oscar.”
The professor sees a very real threat in AGI. Human society, he argues, is not doing nearly enough to stop it, or at least to align AGI with core human values. In the end, this could leave mankind idly watching its own destruction, as in the Netflix film.
To be sure, Tegmark’s assertion is deliberately provocative. It glosses over the fact that many experts doubt AGI will ever materialize at all, while others who do see it coming predict that many decades of research will be needed before it arrives.
“I am often told that AGI and superintelligence will not exist because it is impossible: human-level intelligence is something mysterious that can only exist in brains,” writes Tegmark. “Such carbon chauvinism ignores a core tenet of the AI revolution: that intelligence is all about information processing, and that it doesn’t matter whether the information is processed by carbon atoms in brains or silicon atoms in computers.”
Tegmark himself is far more pessimistic and expects AGI to arrive quickly, on a shorter timescale, he argues, than, for example, climate change or most people’s retirement planning.
He points to a recent Microsoft study claiming that OpenAI’s large language model GPT-4 is already showing “sparks” of AGI. He also refers to a talk recently given by deep learning researcher Yoshua Bengio, who likewise calls for action.
Tegmark accuses the AI industry of having neglected “slow and safe development” so far. He warns against teaching an AGI how to program, and argues that it should not be connected to the Internet or given a public API.
“Though humanity is racing toward a cliff, we’re not there yet, and there’s still time for us to slow down, change course, and avoid a fall, and instead reap the amazing benefits that a safe, aligned AI has to offer,” writes Tegmark. “We just have to be aware that the cliff actually exists and that falling off it benefits no one.”