Consensus on AI governance?

by Martin Milán Csirszki

The first international declaration on artificial intelligence was made at the AI Safety Summit on 1–2 November 2023. The signatories include global superpowers such as the United States, China and the European Union. According to experts, the Bletchley Declaration marks an important step[1] that reflects an international consensus on a risk-based approach towards one of the most powerful, and at the same time most dangerous, technologies of our day. Although the European Union appears in the list of signatories, it is interesting to see that leading Member States, such as France and Germany, also signed the declaration separately. This seems to reflect these countries’ independence as regards AI governance, despite the fact that the raison d'être of the long-awaited EU regulation lies, among other things, in the observation that “[n]ational approaches in addressing the problems will only create additional legal uncertainty and barriers, and will slow market uptake of AI.”[2]

The declaration sheds light on crucial aspects of AI; however, it is formulated in a generalised way and constitutes a political manifesto rather than a detailed, in-depth approach to AI governance. The details that would make the declaration’s principles administrable cannot be expected at this stage of international cooperation. Of course, the “big-tent” nature of the multi-actor cooperation that the declaration entails is necessarily reflected in the text, but this comes with the danger that the initial enthusiasm will run out of steam as soon as the details have to be discussed.

Bridging the differences will depend on whether they arise from facts, interests or values. Divergences stemming from incomplete or conflicting information can be cured by further evidence; in this regard, international research cooperation can serve as a decisive factor. Problems will more likely arise where the actors have different interests, and such a juncture seems unmistakable in the relationship between the three world powers – the EU, the US and China. Resolving these conflicts of interest requires trade-offs through bargaining, and the current situation does not suggest easy pathways. The task is even more difficult where value judgments constitute the obstacle to a unified AI governance, which may most readily come to the fore between China and the Western countries. Nevertheless, “the narrative that the Chinese government disregards AI concerns is a simplification to the point of inaccuracy.”[3] This is also reflected in the risk-based Bletchley Declaration, at least at the level of international politics.

According to the signatories, artificial intelligence offers immense global opportunities for human wellbeing, peace, and prosperity. However, it must be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy, and responsible. The international community should cooperate on AI to promote inclusive economic growth, sustainable development, innovation, human rights, and public trust in AI systems. AI systems are already deployed in various domains, including housing, employment, transport, education, health, accessibility, and justice. As their use increases, it is crucial to act on the safe development of AI and its transformative potential for good and for all. However, AI also poses significant risks in areas such as human rights, transparency, fairness, accountability, regulation, safety, human oversight, ethics, bias mitigation, privacy, and data protection. Particular safety risks arise at the frontier of AI, where highly capable general-purpose models can perform a wide variety of tasks and may exhibit harmful capabilities. Deepening our understanding of these potential risks and taking action to address them is urgent given the rapid and uncertain pace of change in AI.

The declaration emphasizes the importance of international cooperation to address the risks posed by AI, particularly in the context of frontier AI. The signatories propose a pro-innovation, proportionate governance and regulatory approach that takes account of the risks associated with AI, and they highlight the need for cooperation on common principles and codes of conduct. They stress the need for collaboration among nations, international fora, companies, civil society, and academia to ensure the safety of AI, as well as the responsibility of actors developing frontier AI capabilities, particularly those with powerful and potentially harmful systems. The agenda for addressing frontier AI risk will focus on identifying these risks, building risk-based policies, and supporting an internationally inclusive network of scientific research, so that the best science is available for policymaking and the public good.


[1] Seán Ó hÉigeartaigh: Comment on the Bletchley Declaration. Available at: https://www.cser.ac.uk/news/comment-bletchley-declaration/.

[2] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, 2.2. Subsidiarity.

[3] Adam Au: China vs US Approaches to AI Governance. Available at: https://thediplomat.com/2023/10/china-vs-us-approaches-to-ai-governance/.