Common patterns in the diverse and evolving landscape of global AI regulations

Summary

International efforts to regulate AI have intensified in recent weeks. On October 30, 2023, the President of the USA issued an Executive Order that may shape the worldwide competition for primacy in artificial intelligence. On the same day, the G7 published the Hiroshima Process Guiding Principles and Code of Conduct on AI.

Following the UK AI Safety Summit on November 1, 2023, the Bletchley Declaration was signed by 28 countries and the European Union, including the USA, China, and several EU member states signing in their own right. The key motif of the Declaration's approach is safety, pursued through the evaluation and management of risks. The same safety-focused approach can be seen in the establishment of the AI Safety Institute in the UK. Like the EU Artificial Intelligence Act (EU AIA) and the Hiroshima Code of Conduct, the Declaration emphasizes the importance of risk classification. By comparison, the OECD AI Principles and the US Executive Order do not adopt this risk-categorization approach, and no international instrument other than the EU AIA prohibits certain uses of artificial intelligence.

Many aspects of the US Executive Order resemble the regulatory approach adopted in the Hiroshima Process, which is itself derived from the OECD AI Principles. Like the EU, the United States will develop standards and best practices for assessing the risks associated with artificial intelligence, which means developers will face differing requirements across jurisdictions. Under the Executive Order, the US will create an extensive sectoral strategy and devise concrete measures to ensure that artificial intelligence technology developed in the USA does not benefit foreign interests, while at the same time attracting as many foreign AI specialists as possible.

Hiroshima Artificial Intelligence Process

On October 30, 2023, the G7 published the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems[1] and the Hiroshima Process International Guiding Principles for Advanced AI Systems.[2] The G7 Hiroshima Artificial Intelligence Process was established at the G7 Summit on May 19, 2023. The Hiroshima AI Process seeks to complement ongoing discussions in a number of international forums, including the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence, the EU-U.S. Trade and Technology Council, and the EU's Digital Partnerships with Japan, Korea, and Singapore. These international commitments are consistent with the EU AI Act currently being negotiated.[3]

OECD Recommendation on Artificial Intelligence

The Recommendation on Artificial Intelligence (AI), the first intergovernmental AI standard, was adopted by the OECD Council in May 2019.[4] The Recommendation sets out five principles for trustworthy AI: inclusive growth, human-centred values, transparency, robustness, and accountability. It also provides five policy recommendations for trustworthy AI: investing in AI research, fostering a digital ecosystem, shaping an enabling policy environment, building human capacity, and promoting international cooperation. It further includes a provision for developing metrics to measure AI development and assess progress in implementation.

The International Guiding Principles for Advanced AI Systems build on the existing OECD AI Principles. They are meant to help seize the benefits, and address the risks and challenges, brought by these technologies, and they apply to all AI actors across the lifecycle (design, development, deployment). The eleven Guiding Principles include commitments to mitigate risks and misuse and to identify vulnerabilities; to encourage responsible information sharing, incident reporting, and investment in cybersecurity; and to provide a labelling system that enables users to identify AI-generated content. The Guiding Principles have in turn served as the basis for a Code of Conduct, which provides detailed and practical guidance for organisations developing AI.[5]

The Hiroshima Process International Code of Conduct for Advanced AI Systems expands upon the existing OECD AI Principles and adopts a risk-based approach.

The Code of Conduct advises organisations to implement the following measures throughout the entire AI lifecycle:

  • establish internal governance structures and policies for AI, incorporating mechanisms for self-evaluation;
  • adhere to the rule of law, protect human rights, ensure fair treatment, embrace diversity, promote democracy, and prioritize human well-being;
  • refrain from developing AI systems that undermine democratic values, cause significant harm to individuals or communities, facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights (this can be regarded as a form of risk classification);
  • implement internal and external testing methods, such as red-teaming, within secure settings;
  • allocate resources to effective strategies for counteracting identified risks[6] and vulnerabilities;
  • report issues and vulnerabilities discovered after deployment;
  • disclose to the public, in a sufficiently clear manner, the capabilities and limitations of advanced AI systems and the areas where their use is or is not suitable, including pertinent technical documentation such as evaluation details, outcomes of red-teaming exercises, and identified risks;
  • share information such as evaluation reports, standards, and best practices;
  • implement and disclose AI governance and risk management policies;
  • establish cyber and physical access controls;
  • implement watermarking so that users can identify and discern their interactions with an artificial intelligence (AI) system (a minimal illustrative sketch follows this list);
  • prioritize the advancement of sophisticated AI systems to tackle the most pressing global issues, particularly, but not exclusively, the climate emergency, global healthcare, and education;
  • implement appropriate safeguards, including for copyright-protected content.
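
To make the watermarking and labelling idea concrete, the following minimal sketch (in Python, purely illustrative and not drawn from any of the instruments discussed here) shows one naive way a provider could attach and later verify a provenance tag on generated text; the key, tag format, and function names are assumptions made for the example.

    # Minimal, hypothetical sketch of a provenance label for AI-generated text.
    # Real schemes (e.g., statistical watermarks or C2PA-style signed metadata)
    # are far more robust; the key and tag format here are illustrative only.
    import hashlib
    import hmac

    SECRET_KEY = b"demo-key"       # assumption: a provider-held signing key
    TAG_PREFIX = "[AI-GENERATED:"  # hypothetical visible label format

    def label_output(text: str) -> str:
        """Append a keyed tag so tooling can later verify AI provenance."""
        digest = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
        return f"{text} {TAG_PREFIX}{digest}]"

    def verify_label(labelled: str) -> bool:
        """Check that the trailing tag matches the content it claims to label."""
        text, sep, tag = labelled.rpartition(f" {TAG_PREFIX}")
        if not sep:
            return False
        expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
        return hmac.compare_digest(tag.rstrip("]"), expected)

    out = label_output("Sample model response.")
    print(verify_label(out))                              # True: label intact
    print(verify_label(out.replace("Sample", "Edited")))  # False: content altered

A visible tag like this is trivially removable; production schemes embed provenance statistically in the generated text itself or attach cryptographically signed metadata so that the label survives copying and editing.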

Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

On the same day the G7 Leaders published these documents, President Biden issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

Several elements of the Order align with the international regulatory approaches outlined above:

  • the Secretary of Commerce is to establish guidelines and best practices, including a resource for generative AI, secure development practices for generative AI, and an initiative to create guidance for evaluating and auditing AI capabilities;
  • companies developing potential dual-use foundation models are required to provide the Federal Government with ongoing information and reports about their activities, including the results of any performance tests;
  • appropriate security guidance is to be developed for critical infrastructure owners and operators;
  • the Secretary of Homeland Security is tasked with evaluating the potential misuse of AI in Chemical, Biological, Radiological, and Nuclear (CBRN) threats, and measures are proposed to reduce the misuse of synthetic nucleic acids, a risk potentially amplified by AI;
  • the Secretary of Commerce is to develop standards and techniques for identifying and labeling synthetic content produced by AI systems.

The Secretary of Commerce is also directed to propose regulations requiring Infrastructure as a Service (IaaS) providers to submit reports when a foreign person transacts with them to train a large AI model with potential malicious capabilities. The Order calls for the development of a National Security Memorandum on AI, which will address the governance of AI used for national security, military, and intelligence purposes, as well as the potential use of AI systems by adversaries and foreign actors that threaten U.S. security. The Order further highlights the need to streamline visa processing times and expand visa categories for non-citizens with expertise in AI or other emerging technologies.

The Order also requires the development of sectoral policies, including:

  • guidance addressing AI's role in the inventive process and patent eligibility;
  • support for responsible AI development in healthcare;
  • improvements to electric grid infrastructure and mitigation of climate-change risks;
  • support for small businesses in AI innovation and commercialization;
  • principles and best practices for employers to mitigate AI's potential harms to employees' well-being and maximize its benefits;
  • a report on the use of AI in the criminal justice system;
  • measures encouraging fair competition and consumer protection in the AI marketplace;
  • promotion of competition in the semiconductor industry.

The Order also enhances national research capabilities, with potential global impact. Key actions include launching a National AI Research Resource (NAIRR) pilot program to integrate computational, data, model, and training resources for AI research, and mandating a pilot program to train 500 new AI researchers by 2025.

The Bletchley Declaration

On November 1, 2023, the governments of the countries attending the UK AI Safety Summit signed the Bletchley Declaration, affirming their commitment to international cooperation in identifying AI safety risks and the impact of AI on society, and to building risk-based policies in their respective countries.

The fundamental principle of the Declaration's approach is to prioritize safety by assessing and managing risks. This is also evident in the operations of the UK AI Safety Institute.

The Declaration acknowledges the possibility of unanticipated hazards arising from the ability of AI to manipulate content or produce misleading content. According to the Declaration, these risks are difficult to anticipate partly because advanced AI systems are not yet fully understood. The Declaration expresses particular apprehension about such risks in fields like cybersecurity and biotechnology, as well as in situations where advanced AI systems might magnify dangers such as disinformation.

In light of the fast-paced and unpredictable advancements in AI, coupled with the increasing investments in technology, the Declaration emphasizes the pressing need to enhance our comprehension of these potential risks and to formulate effective strategies to mitigate them.

Because many risks associated with AI are international in nature, international collaboration is the most effective means of addressing them. The Declaration recognizes the importance of risk classifications and categorizations (like those of the EU AIA and the Hiroshima Code of Conduct), applied in accordance with national circumstances and relevant legal frameworks. In contrast, the OECD Principles and the US Executive Order do not incorporate this approach of categorizing risks.

The Declaration highlights the need for actors involved in developing highly capable and potentially dangerous AI systems to bear a significant level of responsibility for ensuring the safety of those systems. This responsibility can be fulfilled through safety testing protocols, evaluations, and other suitable measures.

 

[1] https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems

[2] https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system

[3] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_5379

[4] https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

[5] https://ec.europa.eu/commission/presscorner/detail/en/ip_23_5379

[6] Risks may encompass various aspects: AI systems may lower the barriers to accessing weapons, including chemical, biological, radiological, and nuclear arms, even for non-state actors; offensive cyber capabilities may expose vulnerabilities (though such capabilities can also be employed defensively); risks to the control of critical infrastructure; discrimination and data protection; the spread of disinformation; and threats to democratic principles and human rights.