The Risks of AI-Generated Fake Laws in the Legal System

AI has permeated many aspects of our lives, from creating deepfake videos to driving race cars. The legal system is no exception, and the use of fake, AI-generated laws in legal disputes is raising concern. This not only poses legal and ethical problems but also puts faith and trust in our legal systems at risk.

Generative AI, a powerful tool with transformative potential for society, including the legal system, creates new content from patterns learned across massive data sets. That content can be inaccurate, however, because the model “fills in the gaps” when its training data is inadequate or flawed, a phenomenon known as “hallucination.” Hallucination becomes a real problem when AI-generated content is used in legal processes, particularly when combined with the time pressures lawyers face and limited access to legal services.

There have already been notable cases of AI-generated fake laws entering the legal system. In Mata v Avianca, a 2023 US case, lawyers submitted a brief to a New York court containing fabricated extracts and case citations. They had researched the brief using ChatGPT and, unaware that the chatbot can hallucinate, failed to verify that the cited cases actually existed. The consequences were severe: the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, and exposed their actions to public scrutiny.

Similar examples of AI-generated fake cases have emerged in Canada and the United Kingdom, and have even ensnared high-profile figures such as Michael Cohen, Donald Trump’s former lawyer. If the misuse of generative AI by legal professionals goes unchecked, it could mislead and congest the courts, harm clients’ interests, erode the rule of law, and ultimately undermine public trust in the legal system.

Recognizing the urgency of this issue, legal regulators and courts around the world have responded in various ways. Several US state bars and courts have issued guidance, opinions, or orders on the use of generative AI, ranging from encouraging responsible adoption to banning it outright. Law societies in the UK, British Columbia, and New Zealand have developed guidelines. In Australia, the legal profession has begun to respond as well: the NSW Bar Association has published a generative AI guide for barristers, and the Law Society of NSW and the Law Institute of Victoria have released articles on responsible use.

While guidance undoubtedly helps, a mandatory approach is necessary. Legal professionals must not treat generative AI as a substitute for their own judgment and diligence; they must verify the accuracy and reliability of the information it generates. Australian courts should adopt practice notes or rules that set out expectations for the use of generative AI in litigation. This would guide not only lawyers but also self-represented litigants, and would signal to the public that the courts recognize the problem and are addressing it. The legal profession should also adopt formal guidance promoting the responsible use of AI, and technology competence should become a requirement of lawyers’ continuing legal education. A simple sketch of what such verification could look like in practice follows below.
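
To make the verification duty concrete, here is a minimal, hypothetical Python sketch, not any court’s or vendor’s actual tool: every citation taken from an AI-assisted draft is checked against a verified source of truth, and anything unrecognized is flagged for human confirmation before filing. The VERIFIED set and the citation strings are illustrative stand-ins for a real lookup against an official case database.

```python
# Hypothetical pre-filing check (illustrative only): flag any citation from
# an AI-assisted draft that cannot be matched against a verified source of
# truth. VERIFIED stands in for a query to an official case database.

VERIFIED = {
    "Mata v Avianca, Inc (S.D.N.Y. 2023)",
}

def review_citations(citations: list[str]) -> list[str]:
    """Print a status for each citation and return those that could not be
    verified and so must be confirmed against a primary source by a human."""
    unverified = []
    for cite in citations:
        if cite in VERIFIED:
            print(f"{cite}: verified")
        else:
            print(f"{cite}: UNVERIFIED - confirm against a primary source")
            unverified.append(cite)
    return unverified

# The second citation below is one of the cases ChatGPT fabricated in the
# Mata v Avianca brief; a check of this kind would have flagged it.
review_citations([
    "Mata v Avianca, Inc (S.D.N.Y. 2023)",
    "Varghese v China Southern Airlines Co Ltd (11th Cir. 2019)",
])
```

The point is not the string matching itself but the workflow it enforces: nothing generated by a model enters a filing until a person has confirmed it against an authoritative source.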

Taking these measures in Australia will encourage the responsible and ethical use of generative AI by lawyers, reinforcing public confidence in the legal system and the administration of justice in the country. Fake laws generated by AI pose significant challenges, but with proper regulations and guidance, we can mitigate the risks and ensure the integrity of our legal systems. As technology advances, our legal systems must adapt and evolve to maintain their effectiveness and trustworthiness in an AI-driven world.


Written By

Jiri Bílek
