New UK guidelines for judges using AI chatbots are a mess

The suggestions attempt to parse appropriate vs. inappropriate uses of LLMs like ChatGPT.

Slowly but surely, text generated by AI large language models (LLMs) is weaving its way into our everyday lives, now including legal rulings. New guidance released this week by the UK’s Judicial Office gives judges some additional clarity on when exactly it is and isn’t acceptable to rely on these tools. The guidance advises judges against using the tools to generate new legal analyses, but allows them for summarizing texts. Meanwhile, a growing number of lawyers and defendants in the US find themselves fined and sanctioned for sloppily introducing AI into their legal practices.

[ Related: “Radio host sues ChatGPT developer over allegedly libelous claims” ]

The Judicial Office’s AI guidance is a set of suggestions and recommendations intended to help judges and their clerks understand AI and its limits as the tech becomes more commonplace. These guidelines aren’t enforceable rules of law but rather a “first step” in a series of efforts from the Judicial Office to clarify how judges can interact with the technology.

In general, the new guidance says judges may find AI tools like OpenAI’s ChatGPT useful for summarizing large bodies of text or for administrative tasks like drafting emails or memoranda. At the same time, it warns judges against using the tools to conduct legal research that relies on new information which can’t be independently verified. As for forming legal arguments, the guidance warns that public AI chatbots simply “do not produce convincing analyses or reasoning.” Judges may find some benefit in using an AI chatbot to dig up material they already know to be accurate, the guidance notes, but they should refrain from using the tools to conduct new research into topics they can’t verify themselves. The guidance effectively puts the responsibility on the user to tell fact from fiction in an LLM’s outputs.

“They [AI tools] may be best seen as a way of obtaining non-definitive confirmation of something, rather than providing immediately correct facts,” the guidance reads. 

The guidance goes on to warn judges that AI tools can spit out inaccurate, incomplete, or biased information, even if they are fed highly detailed or scrupulous prompts. These odd AI fabrications are generally referred to as “hallucinations.” Judges are similarly advised against entering any “private or confidential information” into these services because several of them are “open in nature.”

“Any information that you input into a public AI chatbot should be seen as being published to all the world,” the guidance reads. 

Since the information spat out in response to a prompt is “non-definitive” and potentially inaccurate, and the information fed into the LLM must not include “private” details that could be key to fully reviewing, say, a lawsuit’s text, it’s not entirely clear what practical use the tools would serve in a legal context.

Context-dependent data is also an area of concern for the Judicial Office. The most popular AI chatbots on the market today, like OpenAI’s ChatGPT and Google’s Bard, were developed in the US and trained on a largely US-focused corpus of data. The guidance warns that this emphasis on US training data could give AI models a “view” of the law that is skewed toward American legal contexts and theory. At the end of the day, the guidance notes, judges are still the ones held responsible for material produced in their name, even if it was created with the assistance of an AI tool.

Geoffrey Vos, the Head of Civil Justice in England and Wales, reportedly told Reuters ahead of the guidance’s release that he believes AI “provides great opportunities for the justice system.” He went on to say he believed judges were capable of spotting legal arguments crafted using AI.

“Judges are trained to decide what is true and what is false and they are going to have to do that in the modern world of AI just as much as they had to do that before,” Vos said, according to Reuters.

Some judges already find AI ‘jolly useful’ despite accuracy concerns

The new guidance comes three months after UK court of appeal judge Lord Justice Birss used ChatGPT to provide a summary of an area of law and then used part of that summary to write a judgment. Birss reportedly hailed ChatGPT as “jolly useful” at the time, according to The Guardian. Speaking at a press conference earlier this year, Birss said he should still ultimately be held accountable for the judgment’s content even if it was created with the help of an AI tool.

“I’m taking full personal responsibility for what I put in my judgment, I am not trying to give the responsibility to somebody else,” Birss said, according to The Law Gazette. “All it did was a task which I was about to do and which I knew the answer and could recognise as being acceptable.”

A lack of clear rules clarifying when and how AI tools can be used in legal filings has already landed some lawyers and defendants in hot water. Earlier this year, a pair of US lawyers were fined $5,000 after they submitted a court filing that contained fake citations generated by ChatGPT. More recently, a UK woman was reportedly caught using an AI chatbot to defend herself in a tax case. She ended up losing her case on appeal after it was discovered that the case law she had submitted included fabricated details hallucinated by the AI model. OpenAI was even the target of a libel suit earlier this year after ChatGPT allegedly named, with apparent authority, a radio show host as the defendant in an embezzlement case he had nothing to do with.

[ Related: “EU’s powerful AI Act is here. But is it too late?” ] 

The murkiness of AI in legal proceedings might get worse before it gets better. Though the Biden Administration has offered proposals governing the deployment of AI in legal settings as part of its recent AI Executive Order, Congress still hasn’t managed to pass any comprehensive legislation setting clear rules. On the other side of the Atlantic, the European Union recently agreed on its own AI Act, which introduces stricter safety and transparency rules for a wide range of AI tools and applications deemed “high risk.” But the actual penalties for violating those rules likely won’t take effect until 2025 at the earliest. So, for now, judges and lawyers are largely flying by the seat of their pants when it comes to sussing out the ethical boundaries of AI use.