UN AI Advisory Body’s Interim Report: Governing AI for Humanity

by István Kopácsi

In our previous summaries we have dealt extensively with the various regulations, statements and positions on artificial intelligence. Last year, we covered the AI Act proposal,[1] the Bletchley Declaration,[2] President Biden's Executive Order,[3] and drew parallels between them in a comparative paper that also covered the OECD Recommendation.[4] From these, we found that organisations and states have, in principle, a similar vision of how AI should be regulated: for example, they all treat the identification and management of emerging threats and risks, and the building of trust in AI, as central challenges. The UN's Interim Report, described below, fits well into this system of initiatives and rules, but breaks new ground with its plans and ideas for a global governance institution.

The UN advisory body issued its interim report[5] in December 2023. The purpose of the document is to signal that the UN is following the development, use and regulation of AI around the world. It points out that, although several national and international regulations are in the pipeline, there is currently no global legislation and no global enforcer; governance remains territorial and fragmented.

The document is, in effect, the UN's announcement of its claim to the role of global governing body for AI, as the UN "lies at the heart of the rules-based international order. Its legitimacy comes from being a truly global forum founded on international law, in the service of peace and security, human rights, and sustainable development."

The advisory body explains the need for global governance by pointing out that, although AI is transforming our world, its development and benefits are currently concentrated in a small number of private sector actors in a small number of states. There is a need for global governance with equal participation of all member states, to ensure that resources are accessible, that mechanisms for representation and oversight are broadly inclusive, that accountability for harm is ensured, and that geopolitical competition does not lead to irresponsible AI and hinder good governance.

The interim report categorises the risks and argues that, alongside ensuring equal access to the opportunities created by AI, greater efforts should be made to address its harms. In the absence of new regulation, increasingly powerful systems are being deployed in pursuit of profit, while AI systems can discriminate on the basis of race or gender, their widespread use can threaten linguistic diversity, new forms of disinformation and manipulation endanger political processes, and cybersecurity and cyber defence play cat and mouse.

Challenges to be addressed include a lack of transparency, access, computational and other resources, and understanding, all of which make it harder to identify where risks come from and where responsibility for managing them should lie. National approaches to regulation, which typically stop at physical borders, can lead to tensions or conflicts. Identifying, avoiding and mitigating risks will require self-regulation, national regulation and international governance efforts. Beyond technical and political barriers, these challenges also exist in a broader societal context, as digital technologies affect the 'software' of societies. In addition to misuse, there are concerns about missed uses: over-caution can lead to a failure to exploit and share the benefits of AI technologies.

Finally, the UN report makes preliminary recommendations on the principles and functions of international governance of AI. Existing efforts to govern AI have converged on similar language, such as the importance of fairness, accountability and transparency, but there is no global alignment on implementation. The lack of common standards and benchmarks across national and multinational risk management frameworks, and the multiple definitions of AI, complicate the governance of AI, even though different regulatory approaches will need to coexist, reflecting the social and cultural diversity of the world.

The UN argues that the range of stakeholders, the breadth of possible applications and the wide variety of contexts in which AI is used mean that no existing governance model can be replicated exactly. Lessons can nevertheless be drawn from organisations that have sought to: (a) build scientific consensus on risks, impacts and policies (IPCC); (b) develop global standards (ICAO, ITU, IMO), iterating and adapting them; (c) build capacity and provide mutual assurance and monitoring (IAEA, ICAO); (d) network and pool research resources (CERN); (e) engage different stakeholders (ILO, ICANN); and (f) facilitate trade and manage systemic risks (SWIFT, FATF, FSB).

Guiding principles for AI governance:

1.: AI should be governed inclusively, by everyone and for everyone. All citizens, including those of the Global South, should be able to create their own opportunities, harness them and achieve prosperity through AI.

2.: AI should be governed in the public interest. Governance efforts should keep in mind public policy goals related to diversity, equity, inclusion, sustainability, social and individual well-being, competitive markets and healthy innovation ecosystems.

3.: AI governance should be developed in conjunction with data management and the promotion of data privacy. Regulatory frameworks and techno-legal arrangements should be developed that protect the privacy and security of personal data in accordance with applicable law, while actively promoting the use of such data. The development of public data commons should also be encouraged.

4.: AI governance should be universal, networked and based on adaptive, multi-stakeholder collaboration. Any AI governance effort should prioritise the universal participation of different member states and stakeholders. This goes beyond inclusive participation to actively lowering barriers to entry, especially for previously excluded communities in the Global South.

5.: The governance of AI should be based on the UN Charter, International Human Rights Law and other agreed international commitments, such as the Sustainable Development Goals.

Institutional functions:

1.: Systematic assessment of the future directions and impacts of AI. There is currently no credible, institutionalised function for independent, inclusive, multidisciplinary assessments of the future trajectory and implications of AI. A global analytical observatory could coordinate research efforts on the critical societal impacts of AI, including impacts on labour, education, public health, peace and security, and geopolitical stability.

2.: Strengthening the interoperability of emerging governance efforts worldwide and their grounding in international standards, through a global AI governance framework agreed in a universal setting. Governance arrangements for AI should be interoperable across jurisdictions and based on international standards such as the Universal Declaration of Human Rights.

3.: Developing and harmonising standards, safety and risk management frameworks. For example, the emerging AI safety institutes could be networked to reduce the fragmentation of competing frameworks and standardisation practices across jurisdictions. New global standards and indicators could be defined to measure and monitor the environmental impact of AI and its consumption of energy and natural resources.

4.: Facilitating the development, deployment and use of AI for economic and societal benefit through international multi-stakeholder collaboration. Beyond standards to prevent harm and misuse, developers and users, especially in the Global South, need critical enablers such as standards for data tagging and testing, and data protection and data exchange protocols that enable cross-border testing and deployment.

5.: Promoting international cooperation on talent development, access to compute infrastructure, the creation of diverse, high-quality datasets, the responsible sharing of open source models, and AI-enabled public goods for the Sustainable Development Goals. A new mechanism (or mechanisms) is needed to facilitate access to the data, computation and talent required to develop, deploy and use AI systems for the SDGs. There would be a need to pool expertise and resources in the manner of CERN, EMBL or ITER, and of the IAEA's technology dissemination functions.

6.: Monitoring risks, reporting incidents, coordinating emergency responses. A techno-prudential model, combining approaches developed at the national level in a way analogous to macroprudential oversight of the financial system, could help protect against AI risks to global stability. Such a model should be based on human rights principles. The reporting framework could be inspired by the IAEA's mutual assurance on nuclear safety and security and the WHO's existing practice of disease surveillance.

7.: Compliance and accountability based on standards. The UN cannot rule out the need for legally binding standards and their enforcement at the global level. Non-binding standards, alone or in combination with binding ones, can also play an important role. The UN can help to ensure that there are no accountability gaps, for example by encouraging states to report in a manner analogous to reporting on the SDG targets, or to the Universal Periodic Review, which facilitates the monitoring, evaluation and reporting of human rights practices.

 

[1] http://www.ceuli.org/content/foundation-models-in-the-artificial-intelligence-act-proposal

[2] http://www.ceuli.org/content/consensus-on-ai-governance

[3] http://www.ceuli.org/content/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence

[4] http://www.ceuli.org/content/common-patterns-in-the-diverse-and-evolving-landscape-of-global-ai-regulations

[5] https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf