
Microsoft asks US government to regulate AI

Pigs can fly, political division is at an all-time low, and US companies are begging the government for more regulation!

Microsoft is entering the debate about regulating AI technology

If you had asked me a year ago which of the above three “news items” would come true, I certainly wouldn’t have picked the last one. Corporations in general, and US corporations in particular, have a long and bloody tradition of fighting regulation, whether it concerns safety requirements, emission limits or trade unions. Anything that depresses profits has to go out the door.

It is yet another case in which so-called artificial intelligence appears to be the exception to the rule. Normally, corporate America screams bloody murder at any kind of regulation. But now Microsoft is joining a long line of companies asking the U.S. government to speed up regulation of “AI.”

Microsoft president Brad Smith speaks

In a speech in Washington attended by several members of Congress, Smith called on the Biden administration to rein in developments around so-called artificial intelligence. He added his voice to the growing chorus calling for a dedicated government agency to deal exclusively with “AI”. It must be said, however, that the companies mainly focus on regulating the government’s own use of “AI”.

But they are also pointing the finger at themselves. Apparently men like Smith are incapable of reining in their own companies. Either that, or they fear falling behind in the “AI” race more than they fear the consequences of that race.

A five-point plan

Smith presented a five-point plan which, according to him, safeguards all interests: limiting the risks, preserving a free market and staying ahead of arch-rival China. The wording suggests it is more an attempt to steer the current push for regulation than a genuine desire to rein in a potentially dangerous technology.

In the hour-long speech, Smith spent as much time promoting Microsoft tech as calling for regulation. But it’s a plan! Here are Smith’s five points:

  1. A government-led safety framework. Safety requirements, licenses, that sort of thing. In consultation with companies, the government would work out the details and then enforce compliance.
  2. A safety brake for AI systems. Given the many voices warning that developments in the field of “AI” are moving too fast and carry too many risks, there must be a way to keep working without “AI”. Imagine a water purification plant that many thousands of people depend on for drinking water running entirely on “AI”. According to Smith, it is essential that such an “AI” system can always be switched off, with people taking over its tasks, all without jeopardizing the functioning of the system (a minimal sketch of this idea follows after the list).
  3. Legislation based on the underlying technology of AI. This part of the proposal is even more crowded with buzzwords and jargon than the previous ones, but essentially it comes down to this: legislation will have to take into account the way “AI” integrates into other systems. This mainly concerns liability. When a system with an “AI” component causes damage, it must be clear who can be held liable: the developer of the “AI” model, the company behind the application built on that model, or the operator of the specific system.
  4. Transparency and access for academics and nonprofits. In short, companies shouldn’t jealously guard the technology behind closed doors. They must be able to explain how their systems function (unfortunately, this is exactly where things often go wrong), and scientists and nonprofits must have full access to “AI” systems for research and the like.
  5. Forging new links between public and private entities to use AI to solve the problems that “AI” itself will cause in society. Unemployment caused by jobs being replaced with “AI” models, disinformation and the like will become major problems if we do not come up with a plan now. The consequences of “AI” must be closely monitored and addressed in collaboration between the private and public sectors.
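
The “safety brake” from point 2 is, at bottom, a manual override wrapped around an automated controller: the system keeps running, only the decision-maker changes. Below is a minimal sketch of that pattern in Python; the class names, the placeholder “model” and the fixed fallback dose are all hypothetical, chosen purely for illustration, and are not taken from Smith’s actual proposal.

```python
# Minimal sketch of a "safety brake": a human-operated switch that swaps
# the AI decision-maker for a conservative manual fallback without
# stopping the system itself. All names and values are illustrative.

class AIController:
    """Hypothetical automated controller, e.g. dosing at a purification plant."""
    def decide(self, sensor_reading: float) -> float:
        # Placeholder "model": dose proportional to the contamination reading.
        return 0.8 * sensor_reading


class ManualFallback:
    """Conservative, human-defined behaviour used while the brake is engaged."""
    def decide(self, sensor_reading: float) -> float:
        # Fixed safe dose, independent of the (possibly misbehaving) model.
        return 1.0


class SafetyBrake:
    """Routes every decision through the AI unless the brake is engaged."""
    def __init__(self, ai: AIController, fallback: ManualFallback):
        self.ai = ai
        self.fallback = fallback
        self.engaged = False

    def engage(self) -> None:
        self.engaged = True   # operator pulls the brake

    def release(self) -> None:
        self.engaged = False  # operator hands control back to the AI

    def decide(self, sensor_reading: float) -> float:
        # The plant keeps running either way; only the decision-maker changes.
        active = self.fallback if self.engaged else self.ai
        return active.decide(sensor_reading)


if __name__ == "__main__":
    brake = SafetyBrake(AIController(), ManualFallback())
    print(brake.decide(2.0))  # AI in control -> 1.6
    brake.engage()
    print(brake.decide(2.0))  # humans in control -> 1.0
```

Whether such a brake is credible in practice depends, of course, on how deeply the “AI” is woven into the system it controls, which is exactly what point 3 is about.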

Too important not to regulate

What will happen to Smith’s plan remains to be seen. Governments are scrambling en masse to understand the new developments and to regulate them where necessary. It is certain that new laws and regulations are coming. The recent actions of CEOs and company presidents seem mainly an attempt to steer that process. Smith, for example, also asked Biden to sign a so-called ‘executive order’ requiring government agencies to apply rigorous risk management to any “AI” systems they want to deploy.

“AI is too important not to regulate and too important not to regulate properly.” Nice words from Google’s president of global affairs, and he is certainly right. But perhaps a technology that provokes such reactions should not be in the hands of entities that are primarily out for monetary gain in the first place.
