One step forward? The proposal for AI Liability Directive

by Rideg Gergely 

Artificial intelligence is one of the most exciting phenomena of the 21st century. Mysticism and reality alike surround this monster of digitisation, which devours data, the gold of our time, like a legendary beast.

The legislator seeks to help steer the technology, among other things, by incorporating fundamental rights guarantees into the rules governing its development and use. There is no doubt that the European legislator has been at the forefront of this endeavour.

At the request of the European Parliament, the European Commission published, on 28 September 2022, a proposal for an Artificial Intelligence Liability Directive (AILD), reflecting the White Paper's objectives.

Having pioneered a comprehensive regulatory framework for artificial intelligence, the European legislator has now drawn up accompanying rules, in the form of a directive, focusing on the issue of liability.

Since the AILD relies on the conceptual definitions of the Artificial Intelligence Act, the two instruments form a coherent system for the regulation of this modern technology.

As the legislator has been monitoring this issue for a long time, its impact assessment, consisting of approximately 230 pages, can be considered comprehensive. When evaluating directives, it is important to keep the following in mind:

“A ‘directive’ is a legislative act that sets out a goal that all EU countries must achieve. However, it is up to the individual countries to devise their own laws on how to reach these goals.”[1]

Since the technology still raises many open questions, this form of regulation is in line with the precautionary principle: it leaves the implementation of the directive's objectives to the wise, and perhaps tentative, discretion of the Member States.

The draft directive provides that Member States shall bring into force the laws, regulations, and administrative provisions necessary to comply with this directive by [two years after entry into force] at the latest. We can say that the European legislator trusts the speed and preparedness of the Member States.[2]

The rules seek to create greater security for the users of artificial intelligence through procedural rules and by establishing presumptions.

The new directive would guarantee that victims of harm caused by AI technology can sue for damages in the same way as if they had been harmed under any other circumstances. It contains two primary measures: the so-called ‘presumption of causality’, which relieves victims of having to explain in detail how a certain fault or omission caused the damage, and access to evidence held by companies or suppliers where high-risk AI is involved.

Regarding the presumption, I believe it is important to note that the directive introduces a rebuttable presumption (praesumptio iuris), and that, when applying the presumptions, a distinction must be made between professional and non-professional users on the basis of the proposal's preamble. This applies, for example, to the presumption of a causal link.[3]

How does this policy help? In the current regulatory environment, the aggrieved party must be able to prove how the damage occurred, what the causal chain was, etc. This would be particularly difficult in situations with an AI element.[4]

One of the most important functions of civil liability rules is to ensure that victims of damage have the opportunity to claim compensation. If the challenges posed by AI make it too difficult to obtain reparation, access to justice is compromised. One should also bear in mind that, by guaranteeing effective compensation, these rules contribute to the protection of the right to an effective remedy and a fair trial.[5]

The directive broadens the scope of protection, as it would give both natural and legal persons the right to compensation, both for loss of life and for damage to property.

Moreover, while access to evidence is tied to high-risk AI systems, the alleviation of the victims' burden of proof through the ‘presumption of causality’ is applicable to cases in which any type of AI system (whether high-risk or not) has caused damage.

It is a testament to the complexity of the subject, and to the thoughtfulness of the European legislator, that the Product Liability Directive is proposed to be revised in parallel with the AI Liability Directive.

In light of the above, the EU is developing both ex ante and ex post rules and is seeking to create a framework for developers and users alike. In other words, it aims at a balanced regulation in which producers and users bear the risks equally.[6]

[1] https://european-union.europa.eu/institutions-law-budget/law/types-legislation_en

[2] AI Liability Directive, Article 7 - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0496

[3] Questions & Answers: AI Liability Directive - https://ec.europa.eu/commission/presscorner/detail/en/QANDA_22_5793 (downloaded: 11.10.2022)

[4] Response of the European Law Institute to the Public Consultation on Civil Liability – Adapting Liability Rules to the Digital Age and Artificial Intelligence - https://www.degruyter.com/document/doi/10.1515/jetl-2022-0002/html (downloaded: 11.03.2022)

[5] Questions & Answers: AI Liability Directive - https://ec.europa.eu/commission/presscorner/detail/en/QANDA_22_5793

[6] CERRE, EU Liability Rules in the Age of Artificial Intelligence, p. 18 (downloaded: 11.03.2022)