
OpenAI co-founder no longer wants to conduct AI research publicly

OpenAI presented its new language model GPT-4 this week in a 98-page paper. What attentive AI researchers noticed first about the description is a striking lack of information.

There are numerous benchmarks and other test results for GPT-4. There are also impressive demos. But there is almost no information about the data used to train the system. Nor does OpenAI comment on the specific hardware or methods used to train the model.

This upsets many in the AI community. Critics point out that OpenAI was founded as a research institution dedicated to openness. They also argue that openness is important so that protective measures can be developed against the kinds of threats posed by AI systems such as GPT-4.

OpenAI itself states this very clearly in the paper:

Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

OpenAI chief scientist Ilya Sutskever defended this approach in an interview with The Verge. According to him, OpenAI's reasons for not sharing information about GPT-4 are obvious: fear of competition and concerns about safety.

After all, the competitive situation is very tough, said Sutskever. GPT-4 was not easy to develop and tied up almost all of OpenAI's employees for a very long time. At the same time, many companies are working on similar tools. Seen from this competitive angle, he regards the decision as a "maturation of the field."

Sutskever continues: “As far as safety is concerned, I would say that it is not yet as important as competitiveness. But that will change, for the following reasons. These models are very powerful, and they are getting more and more powerful. Eventually it will be easy to do big damage with these models if you want to. And as the capabilities continue to grow, it’s only logical that you don’t want to reveal them.”




Sutskever: Open AI research is clearly a bad idea

He firmly believes that in a few years it will be absolutely clear to everyone that it is not wise to open-source AI. This new stance, however, is a departure from the ideas on which the company was founded in 2015.

At the time, Sutskever and others had stated that the organization’s goal was “to create value for everyone, not for shareholders,” and to that end it would “collaborate freely” with others in the field. Consequently, OpenAI was founded as a non-profit organization.

In 2019, the company converted into a "capped-profit" organization. This made OpenAI attractive for billions of dollars in investment. Microsoft in particular took advantage of this, and OpenAI now grants it exclusive commercial licenses.

According to Sutskever, it was simply a mistake to believe that open AI research is the right way: "I fully expect that in a few years it will be absolutely clear to everyone that it is simply not wise to open-source AI."




Does closed research endanger safety?

Among AI researchers, the first reactions to GPT-4's closed model were mostly negative. Without knowing how the system was trained, it is difficult to assess where it can be safely deployed. It is also difficult to propose corrections.

"In order for people to make informed decisions about where this model doesn't work, they need to have a better sense of what it does and the assumptions it makes," Ben Schmidt of Nomic AI told The Verge. "I wouldn't trust a self-driving car that was trained without experience in snowy climates; it is likely that there will be some gaps or other issues that may arise when it is used in real situations."




Fear of copyright as a reason for closed research?

Other experts see the risk of legal liability as a further reason for keeping the training datasets secret. After all, AI language models are trained on huge text datasets, which also include material scraped from the internet. It is not always easy to ensure that no copyrighted material ends up in them.

This is a problem that operators of AI image generators have increasingly had to face: several AI companies are currently being sued by independent artists and the stock photo agency Getty Images.

When it comes to the question of safety in deployed AI, Sutskever fully agrees with his critics. Of course, it would be good if as many people as possible studied the AI model in order to improve it. It is precisely for this purpose that OpenAI has granted certain research institutions access to its systems.
