“I don’t think you can stop AI anymore”
Responsibility. Dan Schulman, CEO of PayPal, returns to this word again and again during a panel discussion at Vivatech in Paris on June 14th. In the talk, titled “Equity, Access, and Trust: Tackling the Toughest Questions in Tech,” he is joined by Verizon Communications CEO Hans Vestberg and The Atlantic’s Nicholas Thompson.
On the topic of AI, Schulman speaks plainly. In his view, the development of artificial intelligence can no longer be stopped. On the contrary: AI will advance faster and faster, he says, especially once other technologies such as quantum computing are thrown into the mix.
He sees great opportunities in AI for society and for companies. But he is just as worried about its risks and possible “edge cases.” “If we’re not careful, AI can have unintended consequences,” says Schulman.
To avoid this, he argues, we need entrepreneurs who are aware of their responsibility, along with strict regulatory requirements similar to those governing nuclear weapons.
However, implementing the latter is not that easy. “I feel sorry for the regulators,” says Schulman, “because things are moving so fast. It’s very difficult to keep up with that.”
Schulman is not alone in comparing AI to nuclear weapons. As recently as May, leading AI researchers and industry CEOs jointly warned of the risks of artificial intelligence and drew the same comparison, as t3n reported.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the short statement, which can be found on the website of the Center for AI Safety.
It was signed by more than 350 AI experts, including OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei.
Geoffrey Hinton and Yoshua Bengio are also among the signatories. The two AI researchers received the Turing Award in 2018 and are regarded as “godfathers” of modern AI.