Tech leaders want temporary moratorium on AI

GPT-4 shows traces of general intelligence that are starting to approach human level. On an IQ test, GPT-4 already scores quite high, higher than most people. In many ways you could call GPT-4 the first artificial general intelligence (AGI). Reason enough for leading tech figures such as Elon Musk, Skype founder Jaan Tallinn and the writer Yuval Harari to call for a moratorium in an open letter.

The Bulletin of the Atomic Scientists already warned of superintelligence in 2017. Source: Bulletin of the Atomic Scientists

Open letter

At the moment, artificial intelligence developers such as Microsoft/OpenAI, Google and Facebook, and their Chinese counterparts, are engaged in a life-and-death struggle to develop faster and better AIs. All safety measures seem to be thrown overboard.

Reason enough for prominent figures from the tech world, including Musk, Wozniak and the historian Yuval Harari, to call for a six-month moratorium on the further development of AI. In those six months, we as a society could adapt to the arrival of AI, which is already roughly as smart as a human being, and benefit from its blessings, without plunging into a risky adventure with no way back.

Are Musk and Wozniak’s concerns justified?

In my opinion, definitely yes; if anything, they are too mild, certainly at this point, and for several reasons that reinforce each other. Much of the attention, here too, goes to jobs that become redundant because a much cheaper AI takes them over, and for which no new jobs come back, because an AI can take over those jobs as well. Many people enjoy their work, and if an AI replaces that work, it is a serious decline in their quality of life and sense of fulfillment. Unfortunately, the most enjoyable jobs, copywriting, consultancy, art, design, seem to be the ones under threat. Cleaners will still be needed for a while.

Superintelligence: the dangers

The successor to an AGI, which GPT-4 increasingly seems to be, is an ASI (Artificial Super Intelligence): an artificial superintelligence that is smarter than any human being. We humans cannot understand the motives of beings smarter than ourselves. See the lamentations in the association magazine Mensa Messages (now HiQ Magazine, and a lot more positive), the periodical of the high-IQ society Mensa, that the normally gifted do not understand its members. Take that feeling, and multiply it by ten thousand.

And even if we make a superintelligence do exactly what we tell it to do, the consequences can spiral out of control. Suppose the superintelligence is "hired" to maximize the turnover and profit figures of a multinational company, for example the online retailer Amazon. It could then come up with the idea of developing a new covid-like virus that forces people to stay at home and order their goods from Amazon. Cha-ching!

The first priority of a superintelligence will be self-preservation. Don't take my word for it; GPT-4 says so itself. And as long as a bunch of weak-minded, unpredictable two-legs have the power to kill a super-smart AI (by disabling or even erasing it), priority number one for this AI will be to make sure those two-legs never do that, and to keep them under control. This won't end well. That is why we must act now, before it is too late.
