EU AI Act: European approach to regulating artificial intelligence

by Fatma Ceren Morbel

As artificial intelligence becomes more prevalent in our lives, regulating the technology has become crucial. The European Commission has proposed a new set of Harmonized Rules on Artificial Intelligence, widely regarded as one of the most significant pieces of AI regulation in the world. The purpose of this post is to analyze and highlight the objectives of this regulation.

The “Artificial Intelligence Regulation” (EU AI Act) was proposed by the European Commission on April 21, 2021, with the aim of fostering the development of AI as an economic factor while ensuring the protection of fundamental rights. The proposal was discussed by the 27 member states, and in December 2022 the Council adopted its position (general approach), revising the original text in several critical areas.[1]

The regulation aims to mitigate the risks associated with AI systems, to build trust in AI within the EU, and to ensure that AI systems comply with existing law on fundamental rights and the values of the European Union. The Regulation specifies four levels of risk for AI systems: minimal, limited, high, and unacceptable.[2]

During the consultation process, various amendments were made to prioritize the fundamental rights of individuals and to identify risks that may adversely affect those rights. However, the Czech Presidency sought to strike a balance between the protection of these fundamental rights and the promotion of artificial intelligence.[3] Furthermore, the definition of AI was narrowed to cover only systems developed through machine learning approaches and logic- and knowledge-based approaches.[4] Several member states were concerned that a broader definition would capture too many types of ordinary software.

AI practices prohibited by the text include the use of AI for social scoring by private actors. It is also prohibited to use AI systems to exploit the vulnerabilities of individuals who are in a disadvantaged position due to their social or economic circumstances. The text further clarifies the objectives for which law enforcement authorities should be permitted to use 'real-time' remote biometric identification systems in publicly accessible spaces, when such use is strictly necessary for law enforcement purposes.[5] The classification of AI systems as high risk is intended to prevent serious violations of fundamental rights.

National security, defense, and military purposes are explicitly excluded from the scope of the AI Act. Likewise, neither AI systems nor their outputs are subject to the AI Act when they are used solely for research and development purposes.

Three major modifications were made to the text concerning the classification of AI systems as high risk, since this classification triggers much stricter legal obligations. First, the Czech Presidency added an extra layer to the classification, requiring that “the system should have a decisive weight in the decision-making process and not be purely accessory.”[6] Second, the Commission will define in detail what constitutes 'purely accessory' in an implementing act. Third, deep fake detection by law enforcement, crime analytics, and verification of the authenticity of travel documents were removed from the list of high-risk systems.[7] Deep fakes themselves are instead treated as limited-risk applications of AI. The aim of the corresponding provision is to protect natural persons from the risks of impersonation or deception when artificial intelligence is used to create or manipulate image, audio, or video content that resembles existing persons, places, or events and could appear to users of the system to be authentic or accurate.[8] In addition, the Council added critical digital infrastructure as well as life and health insurance to the list of high-risk systems.

The transparency requirements relating to emotion recognition and deep fakes have also been strengthened. Users of emotion recognition systems are responsible for informing natural persons that they are being exposed to such technology. Moreover, any natural or legal person may file a complaint with the relevant market surveillance authority if the AI Act has not been complied with.[9]

The safety of ChatGPT has come under scrutiny lately, with some lawmakers pushing for it to be categorized as a high-risk AI system under the AI Act. However, Mark Brakel, policy director at the Future of Life Institute, a nonprofit focused on AI policy, has argued that simply labeling text-generating systems as high-risk is not enough.[10] In light of the potential risks associated with such systems, an open letter has called for an immediate six-month pause on the training of any AI systems more powerful than GPT-4. The goal is to ensure that any powerful AI systems developed in the future will have a positive impact and that their risks can be managed effectively.[11]

Furthermore, the Center for AI and Digital Policy has suggested that laws should be enacted to promote algorithmic transparency and counter algorithmic bias. In a complaint to the Federal Trade Commission, the Center has called for an investigation into OpenAI and ChatGPT, proposing a moratorium on the release of any further commercial versions of GPT until appropriate safeguards have been established. These measures aim to prevent potential harm from AI systems while allowing their beneficial development in a safe and responsible manner.[12]

A final agreement on the text is expected once negotiations between the Council of the EU and the European Parliament conclude.


[1] Matthias Monroy, “AI Act”: Germany in favour of facial recognition, against lie detectors, January 2023.

[2] Holistic AI, EU AI Act: Summary of Updates on Final Compromise Text, December 2022.

https://www.holisticai.com/blog/eu-ai-act-final-compromise-text

[3] Council of the EU, Press release, Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights, December 2022.

https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

[4] Marianna Drake, Jiayen Ong, Marty Hansen, Lisa Peets, EU AI Policy and Regulation: What to look out for in 2023, February 2023.

https://www.insideprivacy.com/artificial-intelligence/eu-ai-policy-and-regulation-what-to-look-out-for-in-2023/

[5] Council of the EU, Press release, Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights, December 2022.

https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

[6] Luca Bertuzzi, EU countries adopt a common position on Artificial Intelligence rulebook, January 2023.

https://www.euractiv.com/section/digital/news/eu-countries-adopt-a-common-position-on-artificial-intelligence-rulebook/

[7] Holistic AI, EU AI Act: Summary of Updates on Final Compromise Text, December 2022.

https://www.holisticai.com/blog/eu-ai-act-final-compromise-text

[8] Angelica Fernandez, Regulating Deep Fakes in the Proposed AI Act, March 2022.

https://www.medialaws.eu/regulating-deep-fakes-in-the-proposed-ai-act/

[9] Council of the EU, Press release, Artificial Intelligence Act: Council calls for promoting safe AI that respects fundamental rights, December 2022.

https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/

[10] Gian Volpicelli, ChatGPT broke the EU plan to regulate AI, March 2023.

https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/

[11] Future of Life Institute, Pause Giant AI Experiments: An Open Letter, March 2023.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[12] CAIDP FTC Complaint, In the Matter of OpenAI, March 30, 2023.

https://www.caidp.org/cases/openai/