by István Kopácsi
The use of artificial intelligence (AI) in the judicial sector is becoming increasingly prevalent, with legal professionals around the world utilizing AI tools such as chatbots to assist with various tasks. However, these tools carry risks of error and misuse, as seen in cases where AI chatbots have provided incorrect information in legal proceedings. To address these concerns, some countries have begun to publish principles and guidelines for the ethical and responsible use of AI in the justice system. The recent “Draft UNESCO Guidelines for the Use of AI Systems in Courts and Tribunals”[1] (draft Guidelines), published on August 2, 2024, provides a comprehensive set of principles and recommendations for the proper implementation and use of AI in the judiciary.
Examples of using AI in the judicial sector
The adoption of AI in the judicial sector is on the rise, with legal professionals across the world increasingly using AI tools such as chatbots powered by Large Language Models (LLMs) to draft legal documents and judicial decisions and to develop arguments for court hearings. AI systems can support pre-trial activities (e.g., automating the courts' filing system), court hearings (e.g., automatic translation), and post-sentencing proceedings (e.g., anonymizing court decisions).[2]
The Brazilian Supreme Court and the Supreme Court of India have implemented AI systems, VICTOR and SUVAS respectively, to aid in legal processes such as identifying applicable cases and translating documents. AI is being used to predict judicial decisions in the European Union and for tasks like contract review and case research in South Africa and Zimbabwe.[3]
While AI tools can support judicial objectives, there are risks of undermining human rights and judicial values if these tools are defective or misused. There have been instances in the United States, South Africa, and Brazil where legal professionals using AI chatbots have mistakenly cited non-existent legal rulings in their judicial decisions or legal filings.[4]
A few countries, including Australia, Brazil, Canada, New Zealand, and the United Kingdom, have begun to publish official principles or guidelines to ensure AI is used ethically and responsibly in the administration of justice.[5] The European Union's AI Act classifies certain AI systems used in the judiciary as “high risk”, necessitating risk management and human oversight. Despite these early efforts, formal guidance on the proper use of AI in the justice system remains scarce.
Against this backdrop, the draft Guidelines provides principles and recommendations to help the judiciary adopt AI responsibly, and is relevant to a broad range of legal professionals.
Principles
The draft Guidelines sets forth thirteen principles that organizations and the judiciary ought to follow when deploying AI systems, including generative AI tools.[6] These principles are: safeguarding human rights (ensuring fairness, preventing discrimination, upholding procedural fairness, and protecting personal data); proportionality; safety; information security; promoting awareness and informed utilization; transparent utilization; accountability and auditability; explainability; accuracy and reliability; human oversight; human-centered design; responsibility; and multi-stakeholder governance and collaboration.
Recommendations for governing bodies of courts
These recommendations apply to the bodies that govern the judiciary, courts, and tribunals intending to adopt and use AI systems. The draft Guidelines emphasizes in this regard the importance of conducting algorithmic impact assessments before deploying AI systems, particularly where decision-making processes may affect human rights and access to justice. It also stresses the importance of algorithmic audits, proactive disclosure of information about AI systems, and evaluations of their impact on users and society.
Key points include human intervention and control throughout the implementation and use of AI systems, the establishment of risk management systems to identify, monitor, and mitigate potential risks and harms, the adoption of cybersecurity-enhancing measures, the implementation of robust data governance frameworks to protect personal data, and the publication of impact evaluations and performance reports. The draft Guidelines further emphasizes the need for enhanced privacy protections, including data minimization, consent protocols, and data anonymization techniques, while balancing the right to access court documents and freedom of expression against data protection and privacy laws. Finally, it highlights the importance of providing members of the judiciary with training opportunities to build AI literacy, enabling them to critically assess the outputs of AI tools.
A dedicated section addresses the use of generative AI.[7] When implementing these AI systems in a legal context, it is crucial for users to understand that their outputs may be biased or incorrect. The draft Guidelines underlines the importance of maintaining the authenticity and integrity of AI-generated legal content within the judiciary and proposes three key measures to this end:
(i) AI-assisted legal documents and judicial opinions should be clearly marked as such, ensuring all parties are informed of their AI origin;
(ii) a robust tracking system should be in place to monitor the creation and alteration of AI-generated legal materials, which is crucial for ensuring their verifiability in court; and
(iii) AI tools used in legal contexts must adhere to certification protocols that affirm their compliance with ethical guidelines and legal standards of accuracy and reliability within the relevant jurisdiction.
The draft Guidelines also advocates prohibiting the use of AI to generate binding legal decisions or to create false evidence.
Guidance for individual members of the judiciary
These recommendations apply to individuals who are part of the judiciary, including magistrates, judges, justices, judicial officers, and judicial support staff. The draft Guidelines suggests that these individuals should be educated about AI systems' capabilities, limitations, biases, and associated risks, including legal liabilities arising from misuse.[8] It advises against excessive dependence on AI systems for making important decisions in legal cases, particularly those affecting human rights.[9] It also emphasizes adherence to the terms of use of AI systems[10] and the importance of data privacy when using generative AI systems, particularly those accessible to the public, advising users to refrain from including personal or confidential information in prompts to avoid potential data breaches.
Closing words
The draft Guidelines has been developed in collaboration with international experts and follows a recent UNESCO survey of judicial actors worldwide, which found a significant lack of institutional guidance and training on the use of AI systems. It covers aspects such as safeguarding human rights, ensuring transparency and accountability, and promoting awareness and informed utilization of AI. It also offers recommendations for governing bodies and individual members of the judiciary to ensure the responsible use of AI in the legal system.
The document is part of a public consultation in which UNESCO encourages stakeholders, including judicial practitioners, legal experts, and the public, to provide feedback on the draft.
[1] https://unesdoc.unesco.org/ark:/48223/pf0000390781
[2] Draft Guidelines, 6.
[3] Draft Guidelines, 5.
[4] Ibid.
[5] Ibid.
[6] Ibid., 9-11.
[7] Ibid., 16.
[8] Ibid., 17.
[9] Ibid., 18.
[10] Ibid.