by Wasim Khraisha
Artificial Intelligence (AI) has revolutionized modern life, offering unprecedented opportunities across domains ranging from healthcare to autonomous systems. However, the integration of AI into society has also introduced risks, including potential harm to individuals and entities arising from misuse, malfunction, or discriminatory outcomes. Recognizing the need for an effective framework to address these challenges, the European Union (EU) has developed legislative instruments to regulate AI liability. Among these, the proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the AI Liability Directive, or AILD) stands as a critical initiative. This blog post examines the comprehensive study (the Study) published by the European Parliamentary Research Service (EPRS) on the AILD in September 2024,[1] analyzing its interplay with existing frameworks, addressing its methodological gaps, and presenting its broader implications.
The Current Landscape of AI Liability
AI liability in the EU is influenced by a spectrum of regulatory measures. Existing laws such as the General Data Protection Regulation (GDPR) and the non-discrimination directives apply to AI-related activities in general, as they are written in a technology-neutral manner,[2] but their scope remains inadequate for addressing AI-specific challenges. To bridge these gaps, the EU introduced two pivotal proposals in September 2022: the AILD and the revised Product Liability Directive (PLD). While the revised PLD governs claims for harm caused by defective products, including software and AI-enabled products, the AILD addresses procedural aspects of fault-based non-contractual liability for damage caused by AI systems, aiming to harmonize rules across member states.
Globally, other jurisdictions have also advanced AI liability frameworks. For instance, Canada’s Artificial Intelligence and Data Act (AIDA) emphasizes risk-based assessments, while California’s Senate Bill 1047 proposes strict liability for advanced AI systems. Unlike these models, however, the EU combines direct regulation (via the AI Act) with liability frameworks (via the AILD and the PLD), a dual approach intended to balance innovation with accountability.[3]
Key Proposals of the AILD
The AILD introduces a range of mechanisms to address the complexity and opacity of AI systems. Its primary features include:
1. Evidence Disclosure Requirements: Injured parties are granted rights to access relevant evidence from AI developers and deployers, addressing the information asymmetry inherent in AI systems.[4]
2. Rebuttable Presumptions: The directive establishes rebuttable presumptions of fault and of a causal link where providers or deployers fail to comply with key obligations under the AI Act, such as human oversight requirements.[5]
3. Extension Beyond the PLD: Unlike the PLD, which primarily compensates material harm caused by defective products, the AILD also encompasses intangible harms such as discrimination and violations of fundamental rights, ensuring broader protection.[6]
The Study underscores the need to expand the AILD's scope to include general-purpose and high-impact AI systems, such as generative AI models like ChatGPT. These systems pose significant risks, including potential infringement on personality rights and perpetuation of bias, warranting stricter liability rules.[7]
Challenges and Critique of the AILD Framework
While the AILD represents a significant step forward, the Study highlights several shortcomings in the European Commission's initial impact assessment (IA) of the directive. Two key issues stand out:[8]
1. Incomplete Policy Options: The IA overlooked critical alternatives, such as combining strict liability with liability caps or fully reversing the burden of proof. Moreover, it failed to consider transitioning the AILD into a broader software liability regulation, which could provide uniform standards for both AI and non-AI software systems.
2. Abridged Cost-Benefit Analysis: The evaluation of strict liability regimes lacked depth, focusing disproportionately on potential drawbacks, such as reduced investment in AI, while neglecting their benefits, including enhanced legal certainty and streamlined compensation processes.
The Study argues for a more balanced approach to assessing strict liability, noting that such regimes could incentivize safer AI design and foster public trust without stifling innovation. Mechanisms such as liability caps and insurance requirements could mitigate the perceived risks of strict liability.[9]
Interplay Between the AILD, PLD, and AI Act
The Study delves into the relationship between the AILD, the PLD, and the AI Act, highlighting their complementary roles in creating a cohesive regulatory environment. The AI Act provides a foundational framework for defining high-risk AI systems and establishing compliance requirements, while the AILD operationalizes these provisions by enabling affected parties to seek redress. However, gaps remain, particularly regarding discrimination, generative AI, and sustainability harms. The Study recommends extending the AILD to cover these areas comprehensively, ensuring no loopholes undermine its effectiveness.[10]
Toward a Unified Liability Framework
One of the Study's most significant proposals is transitioning from an AI-focused directive to a comprehensive software liability regulation. This shift would prevent market fragmentation and provide consistent legal standards across the EU. By encompassing both AI and traditional software, the regulation could address challenges common to digital technologies, such as proving causality and fault.
To implement this change, the Study outlines key steps, including stakeholder consultations, treaty-aligned legal assessments, and detailed implementation guidelines. It also emphasizes the importance of harmonizing definitions and concepts across legislative instruments to enhance clarity and coherence.[11]
Implications for Innovation and Society
The AILD and its associated proposals have far-reaching implications for innovation, consumer protection, and societal trust in AI. By addressing procedural barriers and creating clearer liability pathways, the directive could enhance accountability without deterring investment. Moreover, its focus on high-impact AI systems reflects an understanding of the evolving risks posed by advanced technologies, ensuring the regulatory framework remains adaptable to future developments.[12]
At the same time, the Study cautions against potential pitfalls, such as overburdening small and medium-sized enterprises (SMEs) or stifling innovation through overly stringent regulation. Striking the right balance between protecting rights and fostering growth will be critical to the directive's success.[13]
Conclusion
The AILD represents a vital step in the EU's efforts to create a robust and balanced framework for AI liability. However, its status remains unresolved, as ongoing discussions and recent developments suggest potential revisions before its adoption. While its current form addresses many challenges, the Study underscores the need for further refinements, including a broader scope, stricter liability provisions, and a transition to a unified software liability regulation. By incorporating these recommendations, the EU can establish itself as a global leader in AI governance, setting standards that not only protect individuals and businesses but also promote ethical and sustainable technological development.
[1] European Parliamentary Research Service (EPRS), study on the AILD, September 2024, https://www.europarl.europa.eu/RegData/etudes/STUD/2024/762861/EPRS_STU(2024)762861_EN.pdf
[2] Study page 1.
[3] Study pages 1-3.
[4] Study page 36.
[5] Study pages 16-18.
[6] Study pages 21-23.
[7] Study pages 16-18.
[8] Study page 6.
[9] Study pages 5-9.
[10] Study pages 9-19.
[11] Study pages 38-40.
[12] Study pages 2-4.
[13] Study pages 4-5.