by István Kopácsi
Since 2016, more than thirty countries have enacted laws specifically dealing with artificial intelligence (AI), and the debate on AI legislation has intensified worldwide.
In August 2024, UNESCO published a consultation paper (Paper) on AI to better understand the current global AI governance environment,[1] mapping nine different regulatory approaches to AI. However, these read less like distinct approaches than regulatory methods: the Paper itself acknowledges, for example, that several of them are combined within the EU's AI Act (AI Act).
The nine approaches below are not mutually exclusive and can be used in different combinations.[2]
The Principles-Based Approach rests on principles covering a range of ethical considerations, such as proportionality, safety, fairness, sustainability, privacy, human oversight, transparency, responsibility, literacy, and multi-stakeholder governance. It does not prescribe specific obligations or penalties for non-compliance, leaving entities to interpret and apply the principles themselves. It can, however, be combined with other regulatory approaches that define explicit obligations and rights, guiding the interpretation and enforcement of mandatory rules.[3] The UK's "pro-innovation approach to AI regulation" proposal exemplifies this combination: regulators are expected to consider these principles within the existing legal framework.
The Standards-Based Approach gives standard-setting bodies a central role in developing technical standards for AI systems. These standards are intended to ensure compliance with mandatory regulations and to promote innovation, competitiveness, and growth, particularly in the EU's single market.[4] Article 40 of the AI Act provides that high-risk AI systems complying with harmonized standards are presumed to conform to the corresponding regulatory requirements, which include risk management, data quality, traceability, transparency, human oversight, and cybersecurity.
The Agile and Experimentalist Approaches, such as "regulatory sandboxes", are employed across various economic sectors and in transversal legislation to foster innovation by allowing public and private entities to test new business models and technologies under flexible conditions with regulatory oversight.[5] The AI Act follows this approach by setting out a framework for AI regulatory sandboxes. Similarly, the UK's pro-innovation proposal includes sandboxes to accelerate market entry for novel products, identify regulatory barriers, and adapt to technology and market trends.[6]
The Facilitating and Enabling Approach encourages all stakeholders involved in the AI lifecycle to develop and use responsible, ethical, and human rights-compliant AI systems. UNESCO's Readiness Assessment Methodology (RAM) is a tool to help countries evaluate their preparedness for ethical AI implementation.[7] It assesses five dimensions: legal, social and cultural, scientific and educational, economic, and technological and infrastructural. In the United States, several bills aim to foster AI development and use:[8]
- The "AI Leadership Training Act" focuses on AI literacy for federal employees, promoting ethical AI use in government.
- The "CREATE AI Act" seeks to establish the National Artificial Intelligence Research Resource (NAIRR) for improving access to resources for AI researchers and students.
- The "Jobs of the Future Act" supports research into AI's potential impact on industries and employment.
The Adapting Existing Laws Approach involves amending sector-specific rules (e.g., health, finance, education, justice) and transversal rules (e.g., criminal codes, public procurement, data protection laws, labor laws) to make incremental improvements to the existing regulatory framework. The EU's GDPR is an example: Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects.[9]
Under the Access to Information and Transparency Mandates Approach, algorithmic transparency is treated as a critical principle of AI ethics. France's Law N° 2016-1321 requires public bodies to disclose the main algorithmic processes used in decision-making.[10] The AI Act, particularly Article 50, sets out transparency obligations for AI system providers, including the duty to inform users when they interact with an AI system and to mark AI-generated or manipulated content.
The Risk-Based Approach establishes obligations and requirements according to an assessment of the risks associated with deploying and using certain AI tools in specific contexts. The AI Act categorizes AI practices by level of risk: unacceptable, high, systemic, limited, and minimal. It defines "risk" as the combination of the probability of harm occurring and its severity. AI practices deemed to pose an "unacceptable risk" are prohibited, such as the use of real-time remote biometric identification systems in public spaces for law enforcement, except in narrowly defined cases such as specific criminal investigations or threat prevention. AI systems categorized as "high-risk" must meet specific duties relating to risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.[11]
The Rights-Based Approach, unlike market-failure-centric rationales, justifies regulation on the basis of protecting rights, promoting equitable distribution, and advancing societal goals.[12]
The Liability Approach assigns responsibility and attaches sanctions to problematic uses of AI systems. The AI Act exemplifies this with substantial penalties for non-compliance: fines of up to €35 million or 7% of an entity's global annual turnover, whichever is higher, for prohibited AI practices, and fines of up to €15 million or 3% of turnover, again whichever is higher, for breaches of other obligations. The European Parliament is also considering additional civil liability rules for damages caused by AI.[13]
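To make the two-part "whichever is higher" fine ceiling concrete, here is a minimal Python sketch computing the applicable maximum; only the thresholds come from the AI Act, while the function name and structure are illustrative assumptions, not anything prescribed by the Act or the Paper.

```python
# Illustrative only: the AI Act caps administrative fines at a fixed amount
# or a share of worldwide annual turnover, whichever is higher. The
# thresholds below reflect Article 99; everything else is a hypothetical
# sketch for clarity.

def max_fine_eur(turnover_eur: float, prohibited_practice: bool) -> float:
    """Return the upper bound of the administrative fine in euros."""
    if prohibited_practice:
        fixed, share = 35_000_000, 0.07  # prohibited AI practices
    else:
        fixed, share = 15_000_000, 0.03  # breaches of other obligations
    return max(fixed, share * turnover_eur)

# Example: an entity with EUR 1 billion in global annual turnover
print(max_fine_eur(1_000_000_000, prohibited_practice=True))   # 70,000,000.0
print(max_fine_eur(1_000_000_000, prohibited_practice=False))  # 30,000,000.0
```

As the example shows, for large entities the turnover-based share quickly exceeds the fixed amount, which is why the percentage, not the nominal cap, drives exposure in practice.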
The Paper poses a guiding question about the key justifications for regulation, grouping them into three main reasons: addressing public problems, protecting human rights, and achieving desirable futures.[14] To determine whether regulation is appropriate and feasible, the key steps include establishing a consensus on the need for regulation, comparing regulatory instruments with other policy tools, and assessing the feasibility of implementation.[15]
The Paper emphasizes the importance of translating global standards into governance approaches at the national level. It does not endorse a specific regulatory approach but offers concrete cases from different countries to illustrate each one. It recommends crafting context-specific rules that address each nation's unique needs and challenges, considering issues such as human rights, digital divides, and environmental sustainability.[16]
[1] https://www.unesco.org/en/articles/unesco-launches-open-consultation-inform-ai-governance
[2] Paper, 21.
[3] Ibid., 23.
[4] Ibid., 24.
[5] Ibid., 25.
[6] Ibid., 27.
[7] Ibid., 28.
[8] Ibid.
[9] Ibid., 30.
[10] Ibid., 32.
[11] Ibid., 36.
[12] Ibid., 37.
[13] Ibid., 39.
[14] Ibid., 41.
[15] Ibid., 42.
[16] Ibid., 49.