EDPS opinion addresses AIA exemptions

by Dávid Szász

On 21 April 2021, the European Commission published its proposal for the Artificial Intelligence Act (AI Act), which aims to lay down harmonised rules on artificial intelligence across the EU. In June 2021, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint opinion assessing the proposal.

On 23 October 2023, the EDPS issued its own opinion, focusing on its institutional, legal, and technical responsibilities under the AI Act.[1] The opinion takes into account the legislative developments since the proposal, including the negotiating mandates adopted by the Council and the European Parliament. This blog post examines the EDPS's recommendations on the AI Act and how they could help make today's AI systems safer and more secure.

The EDPS supports individuals' right to lodge complaints against AI systems and calls for its own explicit designation as the authority competent to handle such complaints.

The EDPS also suggests that the AI Act should cover high-risk AI systems that are already in use when the Act becomes applicable. It therefore proposes deleting the exemption in Article 83(2) of the Proposal, which excludes such existing high-risk AI systems from the scope of the AI Act.

The EDPS is further concerned by the AI Act's exclusion of AI systems that are components of EU large-scale IT systems. These systems process substantial amounts of personal and often sensitive data and may well be supported by AI applications. The EDPS likewise objects to the exclusion of AI systems used in the context of international law enforcement cooperation, as this omission could leave affected individuals without the legal protections the Act provides, and it recommends removing this exclusion. Finally, the EDPS argues that the use of AI systems by law enforcement for unproven or intrusive purposes, such as polygraphs or individual risk assessments, should be prohibited, as such applications infringe human dignity and run counter to the values of the European Union.

The EDPS is also concerned by the proposed exception in Article 6(3) of the AI Act, which would allow AI systems to be classified as non-high-risk where their output is purely accessory. It likewise criticizes the European Parliament's proposed filter that would let providers take certain AI systems out of the high-risk category if they consider that the systems pose no significant risk to health, safety, or fundamental rights, arguing that this could undermine the safeguards applicable to high-risk systems. The EDPS proposes that high-risk AI systems be certified for compliance with data protection principles and that legal compliance be a prerequisite for obtaining the CE marking for the AI system. It further recommends an ex-ante third-party assessment for high-risk AI systems, involving an oversight authority with specialized expertise in the field.

 

[1] https://edps.europa.eu/press-publications/press-news/press-releases/2023/edps-final-recommendations-ai-act_en