Does artificial intelligence need its own law?

The technical capabilities of AI are constantly improving, and the legal requirements are expanding along with them. Our guest author gives an overview of current and planned legal bases.

Although there is no uniform definition of the term artificial intelligence, some key properties can be named to aid understanding. To be called AI, an application should be capable of autonomous action and independent learning processes in order to deliver a certain output or to make predictive decisions.

In addition, it should be able to adapt to its environment. One often encounters the distinction between strong and weak AI: the former is capable of logical-intellectual thinking and complex abstraction, while the latter is characterized by pattern recognition and the ability to react to unknown problems. Weak AI, however, cannot abstract and can only be used in a specific field of application. As a rule, today's AI applications are classified as weak AI, yet they can still deliver high technical performance.

Beyond chatbots, AI applications can take on a wide range of tasks and process them independently. For example, there are systems that use medical imaging to identify tumor structures and make an initial diagnosis. There are also applications that can identify people in video sequences by their gait, or recognize lung diseases such as Covid-19 from the sound of breathing.

As the capabilities of AI continue to increase, so does the need for legal requirements. Because AI is increasingly used in places where its decisions directly or indirectly affect individuals, ethical considerations form the basis of regulation. It is precisely the strength of AI that it can decide similar cases very quickly, efficiently, and with low susceptibility to error. On the other hand, this means that complex moral questions are sometimes glossed over and decided "on the fly". The starting point for an ethical approach is the guarantee of human dignity in Article 1(1) of the German Basic Law (or, at the European level, Article 1 of the Charter of Fundamental Rights).

Accordingly, a human being must not be reduced to a mere object. For AI, this means in particular that algorithms whose decisions affect individuals must not disregard fundamental rights or freedoms. Such a risk exists, for example, when an AI in a self-driving vehicle has to make decisions about the lives of those involved in an accident, or when it decides on a person's creditworthiness or criminal liability. These and many other questions cannot be conclusively settled by law, but a legal framework can help to draw boundaries and prevent AI from being misused.

There is currently no uniform "AI law" for this purpose; relevant provisions are therefore found in very different places, such as liability law, copyright, or ancillary copyright law. The General Equal Treatment Act (AGG) prohibits discrimination, including discrimination through decisions made by AI. Fundamental rights also protect against certain uses of AI. And since data forms the foundation of AI learning processes, data protection law, with the General Data Protection Regulation (GDPR), plays a central role.

Artificial intelligence and data protection

The Data Protection Conference (DSK), the body of the German supervisory authorities, has therefore adopted the Hambach Declaration, which sets out overarching guidelines for the development and use of AI. Here, too, the prohibition on objectifying people forms the basis from which further goals are derived. They include principles such as transparency, traceability, and data minimization. Furthermore, the use of AI should pursue only constitutional purposes, discrimination should be avoided, and development and use should be accompanied by technical and organizational protective measures.

The legally binding requirements are found in the GDPR. They must always be observed when an AI works with personal data. This will be the case in most scenarios, unless exclusively anonymized data is processed, that is, data from which any reference to a natural person has previously been completely removed. Training an AI usually requires large amounts of historical data. For pattern recognition, the personal reference in the data pool is irrelevant in most applications. However, since the data will often be real historical data relating to specific persons, a personal reference will exist at least initially.

Data protection requirements must therefore be observed up to the point of effective anonymization. If the AI comes into contact with personal data again in productive operation, data protection law applies once more. The data protection principles must then be complied with: above all, data processing must serve a legitimate purpose, be transparent, and be covered by a legal basis. In addition, appropriate technical and organizational measures must be taken, and separate rules apply to automated individual decisions.
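
To illustrate the step from personal data to training-ready data, here is a minimal Python sketch: direct identifiers are removed and one quasi-identifier is coarsened before training. The column names are hypothetical, and real anonymization additionally requires assessing re-identification risk across quasi-identifiers; this is an illustration, not a complete procedure.

```python
import pandas as pd

# Columns treated as direct identifiers; these names are hypothetical.
DIRECT_IDENTIFIERS = ["name", "email", "customer_id"]

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and coarsen one quasi-identifier."""
    out = df.drop(columns=DIRECT_IDENTIFIERS, errors="ignore")
    if "age" in out.columns:
        # Replace exact age with broad bands to reduce re-identification risk.
        out["age_band"] = pd.cut(
            out["age"], bins=[0, 30, 50, 70, 120],
            labels=["<30", "30-49", "50-69", "70+"],
        )
        out = out.drop(columns=["age"])
    return out

raw = pd.DataFrame({
    "name": ["A. Example"],
    "email": ["a@example.org"],
    "customer_id": [17],
    "age": [42],
    "income": [52_000],
    "defaulted": [0],
})
train_df = anonymize(raw)
print(train_df)  # only income, defaulted and age_band remain
```

Whether such a step achieves anonymization in the legal sense depends on whether re-identification can be ruled out with reasonable means; in practice this requires a case-by-case assessment.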

A European legal framework for artificial intelligence

It should not stop there, however: the EU Commission has published a draft regulation intended to ensure uniform and binding rules across Europe. The regulation would be the first law in the world to deal explicitly and exclusively with AI. Its provisions would affect both developers and companies that use or import AI. With the proposal, the EU tries to strike a balance between strongly promoting AI technologies on the one hand and regulating them to protect citizens and their trust in AI applications on the other.

What has so far been formulated mainly in non-binding guidelines is to become legally binding with the regulation, in particular the requirement that AI systems in the EU operate safely, transparently, ethically, and under human control. To achieve this, AI systems are divided into risk classes, ranging from minimal through limited and high risk to unacceptable risk. Systems in the highest class would generally not be allowed to be used at all. As examples, the EU Commission cites social scoring by authorities and toys equipped with voice assistants that encourage children to engage in risky behavior.

AI applications with minimal risk, which should make up the large majority of all systems, would not be further regulated; spam filters are one example. For systems with only limited risk, such as the chatbots mentioned above, light transparency obligations would apply, so that users can make an informed decision about whether they want to use these systems.
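
The four-tier logic can be pictured as a simple lookup from system type to obligations. The following Python sketch paraphrases the draft's risk classes; the example systems and obligation summaries are illustrative simplifications, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The draft regulation's four risk classes, roughly summarized."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, registration, CE mark"
    LIMITED = "transparency obligations toward users"
    MINIMAL = "no additional obligations"

# Illustrative assignments based on the Commission's own examples;
# an actual classification would follow the annexes of the regulation.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI deciding on lending or exam access": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```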

The actual regulatory weight falls on high-risk AI systems. Before placing them on the market, companies must have compliance with all legal requirements checked as part of a conformity assessment. Above all, this requires that the data sets used are correct and complete and that the data processing is carefully documented. The applications must then be registered and given a CE mark. In addition, authorities are to monitor AI systems already on the market more closely. High-risk systems include those that decide on access to educational and professional opportunities, are used in law enforcement or the administration of justice, or make decisions about essential services such as lending.
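
How the documentation duty could be supported technically is sketched below: a simple record of dataset provenance and processing steps kept alongside a model. The structure and field names are assumptions made for illustration; the regulation prescribes what must be documented, not this particular format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Provenance record for a training data set (illustrative structure)."""
    name: str
    source: str              # origin of the raw data
    collected_until: date    # temporal coverage of the data
    anonymized: bool         # whether personal references were removed
    processing_steps: list[str] = field(default_factory=list)

record = DatasetRecord(
    name="credit_history_v3",
    source="internal lending records",
    collected_until=date(2021, 12, 31),
    anonymized=True,
    processing_steps=[
        "dropped direct identifiers",
        "deduplicated rows",
        "binned age into bands",
    ],
)
print(record)
```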

Since AI applications have by now found diverse fields of application and technical development continues to advance, dedicated and standardized regulation is generally to be welcomed. As designed, the planned EU regulation could ensure that the many potentials of artificial intelligence can continue to be exploited in the future while at the same time strengthening the protection of citizens. Whether and when the regulation will enter into force, however, is difficult to predict.

Negotiations in the European Parliament and the Council come first. But the proposal is already a clear signal of the direction that legal regulation of AI applications will take. And a high level of protection is already achieved under existing legal requirements. The use and development of AI call for clear processes and for data that has been carefully made usable through appropriate anonymization and a convincing data-usage concept. Regulation should therefore be perceived not as an obstacle but as an opportunity, so that AI applications can continue to be used and developed with a high level of confidence.
