Terminology of Artificial Intelligence Related Harms and Discovered Hazards

by Dávid Szász

Artificial Intelligence (AI) has undoubtedly been one of the hottest topics of academic and regulatory discussion in the last few years. The phrase “AI holds great possibilities, but also great risks” has been repeated time and again.

The Central European Lawyers Initiative (CELI) has explored the possible risks stemming from the development and widespread use of AI. The dangers presented by the emergence of AI systems are numerous, touching on human rights and democracy,[1] competition[2] and IP law,[3] consumer protection,[4] and more. Several nations and organizations have set up special working groups tasked with exploring the risks of AI in order to regulate this new technology successfully,[5] but the work has only just begun.

The question arises: what is risk and what is harm in the context of AI, and what kinds of risks and harms can we pinpoint right now?

To the first question, we can only give a proper answer through regulation and legislation, and in order to regulate, we first need to set a clear terminology. It is with this task in mind that the Organisation for Economic Co-operation and Development (OECD) published the paper titled Defining AI incidents and related terms (the Paper).[6] The Paper aims to establish a clear terminology for AI-related harms by defining (in order of severity) AI hazards, serious AI hazards, AI incidents, serious AI incidents, and AI disasters. As the Paper points out, these definitions are essential because AI actors and regulators need to use the same terms when discussing the problems and failures of AI systems.

The main difference between a hazard and an incident lies in whether harm is merely potential or has materialized. Potential harm is essentially the risk that harm or damage will occur, while actual harm is a risk that has materialized. An event where the development or use of an AI system has already resulted in actual harm is therefore termed an AI incident, while an event where the development or use of an AI system is potentially harmful is termed an AI hazard.[7] Past events that could have led to an AI incident, but did not, also fall under the term AI hazard.

Under “the following harms”, the definition gives a comprehensive list of the possible harms currently known to us: injury or harm to the health of a person or groups of people; disruption of the management and operation of critical infrastructure; violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labor, and intellectual property rights; and harm to property, communities, or the environment.

Based on the severity of these potential or actual harms, the Paper distinguishes between serious AI hazards, serious AI incidents, and the most severe category, an AI disaster. A serious AI hazard is an event, circumstance, or series of events where the development, use, or malfunction of one or more AI systems could plausibly lead to a serious AI incident or an AI disaster. A serious AI incident, in turn, directly or indirectly leads to any of the following harms: the death of a person or serious harm to the health of a person or groups of people; a serious and irreversible disruption of the management and operation of critical infrastructure; a serious violation of human rights or a serious breach of obligations under the applicable law intended to protect fundamental, labor, and intellectual property rights; or serious harm to property, communities, or the environment. Notably, this definition of a serious AI incident aligns with the definition in the EU AI Act. The most severe form of AI-related harm proposed by the Paper is an AI disaster: a serious AI incident that disrupts the functioning of a community or a society and that may test or exceed its capacity to cope using its own resources. The effects of an AI disaster can be immediate and localized, or widespread and long-lasting.
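To make the relationships between these terms concrete, here is a minimal, purely illustrative sketch of how the Paper's severity ladder could be encoded, for instance in an incident-reporting tool. The field names (harm_materialized, serious, exceeds_coping_capacity) and the classify function are hypothetical simplifications of our own, not part of the OECD Paper, which defines the terms in far more nuanced language.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Severity(Enum):
    """The Paper's ladder of AI-related harm, from least to most severe."""
    AI_HAZARD = auto()            # harm is plausible but has not occurred
    SERIOUS_AI_HAZARD = auto()    # could plausibly lead to a serious incident or disaster
    AI_INCIDENT = auto()          # harm has actually occurred
    SERIOUS_AI_INCIDENT = auto()  # e.g. death, irreversible infrastructure disruption
    AI_DISASTER = auto()          # serious incident exceeding a community's capacity to cope

@dataclass
class AIEvent:
    """Hypothetical, simplified record of an AI-related event."""
    harm_materialized: bool        # has harm actually occurred, or is it only potential?
    serious: bool                  # does the (potential) harm meet the "serious" threshold?
    exceeds_coping_capacity: bool  # does it disrupt a community beyond its own resources?

def classify(event: AIEvent) -> Severity:
    """Map an event onto the Paper's terminology (illustrative only)."""
    if not event.harm_materialized:
        return Severity.SERIOUS_AI_HAZARD if event.serious else Severity.AI_HAZARD
    if event.exceeds_coping_capacity:
        return Severity.AI_DISASTER
    return Severity.SERIOUS_AI_INCIDENT if event.serious else Severity.AI_INCIDENT
```

For example, classify(AIEvent(harm_materialized=False, serious=True, exceeds_coping_capacity=False)) would return Severity.SERIOUS_AI_HAZARD: the harm has not materialized, but the event could plausibly lead to a serious AI incident.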

The working group responsible for the Paper will now work on identifying the key types of harm (for example, physical, environmental, economic, and reputational harm, harm to the public interest, and harm to fundamental rights) and on assessing their dimensions: severity, scope, geographic scale, tangibility, quantifiability, materialization, reversibility, recurrence, impact, and timeframe.
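Continuing the illustrative sketch above, these harm types and assessment dimensions could be captured in a simple record. Again, the field names and types are hypothetical simplifications, not the working group's official framework, which has yet to be published.

```python
from dataclasses import dataclass
from enum import Enum, auto

class HarmType(Enum):
    """Key types of harm the working group plans to identify."""
    PHYSICAL = auto()
    ENVIRONMENTAL = auto()
    ECONOMIC = auto()
    REPUTATIONAL = auto()
    PUBLIC_INTEREST = auto()
    FUNDAMENTAL_RIGHTS = auto()

@dataclass
class HarmAssessment:
    """One harm, assessed along the dimensions listed in the Paper."""
    harm_type: HarmType
    severity: int            # e.g. 1 (minor) to 5 (catastrophic)
    scope: str               # who or what is affected
    geographic_scale: str    # e.g. "local", "national", "global"
    tangible: bool           # can the harm be physically observed?
    quantifiable: bool       # can the harm be measured?
    materialized: bool       # has the harm actually occurred?
    reversible: bool         # can the harm be undone?
    recurring: bool          # is the harm likely to repeat?
    impact: str              # description of the impact
    timeframe: str           # e.g. "immediate" vs. "long-lasting"
```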

One type of harm, harm to human and fundamental rights, has been highlighted since the emergence of AI and thoroughly discussed in academic and regulatory debates as well as on social media platforms. Several ways in which AI can negatively impact human rights have already been identified and showcased. A report recently published by the United Nations (UN), titled Taxonomy of Human Rights Risks Connected to Generative AI (the Report), focuses on exactly this showcasing, following a rights-based taxonomy.[8]

The Report examines the human rights that may be adversely impacted by generative AI, providing real-world examples for each right. Human rights are interrelated and interdependent, which means a single use of generative AI may place several fundamental rights at risk. The risks established in the Report are, however, not exhaustive: as the technology expands and becomes better understood, additional risks to human rights will inevitably appear. The rights examined in the Report are the following: Freedom from Physical and Psychological Harm; Right to Equality Before the Law and to Protection against Discrimination; Right to Privacy; Right to Own Property; Freedom of Thought, Religion, Conscience and Opinion; Freedom of Expression and Access to Information; Right to Take Part in Public Affairs; and Right to Work and to Gain a Living.

In relation to the Freedom from Physical and Psychological Harm, the greatest risks are disinformation and misinformation and their consequences: disinformation created with AI may be used in ways that risk inciting physical violence against individuals or groups, while misinformation may encourage users to take actions that put their own physical or psychological well-being at risk. AI image and video generators may also be used to create non-consensual sexualized content.

The Right to Equality Before the Law and to Protection against Discrimination may also be at stake, as generative AI models can produce derogatory outputs pertaining to people with marginalized identities, amplifying false and harmful stereotypes and facilitating various forms of discrimination. AI technologies used for automated decision-making may likewise be biased and thereby facilitate discrimination.

The Right to Privacy and the Right to Own Property are, and will remain, in danger as a result of the large-scale collection, storage, and processing of data: users may input private or sensitive information into AI model prompts without understanding how their data will be processed, and these models may also expose users to data breaches and hacks. The processing of data by generative AI models may furthermore involve the use of protected works, adversely impacting authors' right to own property.

Regarding the Freedom of Thought, Conscience, and Expression and the Right to Take Part in Public Affairs, false information presents a gargantuan threat. Fake news and false information had already run rampant in the last decade, even before the emergence of AI; AI now serves as a tool to create and spread false information far more effectively. False information created with generative AI may manipulate the population's beliefs about politics, democratic processes, religion, and science, leading to an erosion of public trust in news media, political processes, and democratic governance.

As for the Right to Work and to Gain a Living, companies may replace workers with AI tools or pause hiring for roles that can be performed by AI. Companies can also use AI to monitor employee performance.

As these emerging risks show, AI will undoubtedly change every aspect of our lives. Our obligation is to make sure it changes them for the better, not for the worse. To ensure this, we need transparent and flexible regulation that can keep up with this ever-developing technology and does not become a living fossil soon after it enters into force. In order to close all possible loopholes, regulators must keep working on the best possible terminology for AI and for liability arising from AI, while we continue investigating the risks AI may pose to us and to future generations.

 

[1] See: Dávid Szász: AI as an imminent danger to human rights and democracy: The steps taken by the Council of Europe, 2024., Available at: https://ceuli.org/content/ai-as-an-imminent-danger-to-human-rights-and-democracy-the-steps-taken-by-the-council-of-europe (last accessed: 2024.07.31.).

[2] See: István Kopácsi: Navigating Competition in Generative AI: A Global Collaborative Effort, 2024., Available at: https://ceuli.org/content/navigating-competition-in-generative-ai-a-global-collaborative-effort (last accessed: 2024.07.31.).

[3] See: Mónika Mercz: How will AI shape IP law? 2024., Available at: https://ceuli.org/content/how-will-ai-shape-ip-law (last accessed: 2024.07.31.).

[4] See: CELI: Consumer Protection in the Age of Artificial Intelligence., 2024., Available at: https://ceuli.org/content/consumer-protection-in-the-age-of-artificial-intelligence (last accessed: 2024.07.31.).

[5] See: Gergely Rideg: Humanity has reached a global milestone; the UN resolution on artificial intelligence, 2024., Available at: https://ceuli.org/content/humanity-has-reached-a-global-milestone-the-un-resolution-on-artificial-intelligence (last accessed: 2024.07.31.).

[6] See: OECD: Defining AI incidents and related terms, OECD Artificial Intelligence Papers, No. 16., 2024., Available at: https://www.oecd-ilibrary.org/science-and-technology/defining-ai-incidents-and-related-terms_d1a8d965-en (last accessed: 2024.07.31.).

[7] As the Paper's definitions stand, an “AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident”, while an “AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms”.

[8] See: UN: Taxonomy of Human Rights Risks Connected to Generative AI, Supplement to B-Tech’s Foundational Paper on the Responsible Development and Deployment of Generative AI, 2024., Available at: https://www.ohchr.org/sites/default/files/documents/issues/business/b-tech/taxonomy-GenAI-Human-Rights-Harms.pdf (last accessed: 2024.07.31.).