Lawyer cites fake ChatGPT-generated cases in court, leaves court in utter disarray

Godfrey Elimian
Is ChatGPT a tool for deception and falsehood?
ChatGPT and the Law. Image: Penn Law School, University of Pennsylvania

While ChatGPT, the well-known AI language model, is undeniably a fascinating piece of technology, there are significant concerns about its reliability as a source of trustworthy, accurate information and references.

A recent incident has strengthened calls to disregard it as a reliable resource for producing accurate references and citations, whether for essay writing or for real-world matters of serious consequence.

A court in the U.S. was thrown into a bizarre situation when a lawyer cited non-existent cases as precedents to support his argument in a New York lawsuit. It turned out he had asked ChatGPT for examples of cases that supported his argument, and the chatbot, in its usual form, hallucinated wildly, inventing several supporting cases out of thin air.

When the lawyer was asked to provide copies of the cases in question, he turned to ChatGPT for help again, and it invented full details of those cases, which he duly screenshotted and copied into his legal filings.

As if that was not enough, ChatGPT was at some point asked to confirm that the cases were real, and it said that they were. Screenshots of this exchange were included in another filing, leaving the judge anything but amused.

Read also: OpenAI founders worried “super-intelligent” AIs would destroy the world if not strictly regulated

Mata v. Avianca, Inc. (1:22-cv-01461) in detail

The case was originally filed on February 22, 2022, and it concerns a complaint about “personal injuries sustained on board an Avianca flight that was travelling from El Salvador to New York on August 27, 2019”. There is a complication: Avianca filed for Chapter 11 bankruptcy on May 10th, 2020, which is relevant to the case (the airline later emerged from bankruptcy).

Various back-and-forth exchanges took place over the next 12 months, many of them concerning whether the bankruptcy “discharges all claims”. It was, however, on March 1st, 2023, that things got interesting, according to Simon Willison.


The airline, Avianca, asked the judge to dismiss the case. Mata’s legal team, in an effort to persuade the judge to let their client’s case proceed, put together a brief citing half a dozen similar cases that had been ruled on previously, the New York Times reported.

The problem was that the airline’s lawyers and the judge were unable to find any evidence of the cases mentioned in the brief. Why? Because ChatGPT had made them all up.

The author of the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, a highly experienced attorney, acknowledged in an affidavit that he had used OpenAI’s much-lauded ChatGPT chatbot to search for related cases, but claimed that it had “revealed itself to be unreliable.”

Schwartz told the judge he had not used ChatGPT before and “therefore was unaware of the possibility that its content could be false.”

When creating the brief, Schwartz even asked ChatGPT to confirm that the cases really happened. The ever-helpful chatbot replied in the affirmative, saying that information about them could be found on “reputable legal databases.”

The lawyer at the centre of the storm said he “greatly regrets” using ChatGPT to create the brief and insisted he would “never do so in the future without absolute verification of its authenticity.”

Confronted with what he described as a legal submission full of “bogus judicial decisions, with bogus quotes and bogus internal citations,” and calling the situation unprecedented, Judge Castel has ordered a hearing for early next month to consider possible penalties.

Read also: OpenAI to pay users up to $20,000 for detecting ChatGPT bugs

Is ChatGPT a tool for deception and falsehood?

Chat Generative Pre-trained Transformer (ChatGPT) is a conversational chatbot based on GPT-3.5 (Generative Pre-trained Transformer 3.5), a large language model (LLM) with roughly 175 billion parameters.

Its training data is drawn from a wide variety of online sources, including books, journals, and websites. ChatGPT was further fine-tuned for conversational tasks using reinforcement learning from human feedback, which helps it capture the intent behind users’ prompts. As a result, it can competently handle a variety of end-user tasks, including follow-up enquiries.


What this means is that, based on a user’s prompts or enquiries, ChatGPT draws on the patterns it learned during training and adapts its responses to stay on the topic of the conversation, even though what it produces may not be accurate.
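To make the mechanics concrete, here is a minimal sketch of a multi-turn exchange with GPT-3.5 through OpenAI’s chat completions endpoint. It assumes the pre-1.0 `openai` Python SDK and an API key in an environment variable, and the legal-research prompt is purely illustrative rather than Schwartz’s actual query.

```python
# Minimal sketch of a multi-turn conversation with GPT-3.5 via OpenAI's
# chat completions endpoint. Assumes the pre-1.0 `openai` Python SDK
# (pip install "openai<1.0") and an API key in the OPENAI_API_KEY env var.
# The legal-research prompt is purely illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The running conversation: each turn is appended, so the model stays
# "on topic" by re-reading the whole history on every call.
messages = [
    {"role": "user",
     "content": "List cases where an airline was sued over an in-flight injury."}
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
answer = response["choices"][0]["message"]["content"]
print(answer)

# Follow-up turn: the earlier answer stays in the history, but nothing
# here checks that any cases the model names actually exist.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Are those cases real?"})
follow_up = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up["choices"][0]["message"]["content"])
```

Note that the model will answer the follow-up question just as confidently whether or not the cases it listed are genuine, which is precisely the trap described above.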

What you may not know about ChatGPT is that it has significant limitations as a reliable research assistant. 

One such limitation is that it has been known to fabricate, or “hallucinate” in machine-learning terms, citations. These citations may sound legitimate and scholarly, but they are not real. AI can confidently generate responses without any backing data, much as a person experiencing hallucinations can speak confidently without sound reasoning. If you try to find these sources through Google or a library, you will turn up nothing.

On one occasion, I asked ChatGPT to give me a reference link, and it generated one that appeared to belong to the popular news media platform Reuters. But on searching for that same link on Google and on Reuters itself, I discovered it was non-existent.
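The only real defence is to verify every citation independently before relying on it. As a rough illustration, the sketch below checks whether a cited link actually resolves; it uses the `requests` library, and the URL is a made-up placeholder standing in for the kind of Reuters-style link ChatGPT invented.

```python
# Rough sketch: verify that a URL cited by a chatbot actually resolves
# before trusting it. Uses the `requests` library; the URL below is a
# made-up placeholder standing in for a ChatGPT-generated citation.
import requests

def link_exists(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL responds with a non-error status code."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        # DNS failure, connection error, timeout, etc.
        return False

if __name__ == "__main__":
    cited = "https://www.reuters.com/article/example-made-up-citation"  # hypothetical
    status = "looks real" if link_exists(cited) else "cannot be found"
    print(f"{cited} -> {status}")
```

A resolving link is still no guarantee that the page says what the chatbot claims, so the content itself should be read and cross-checked as well.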

Hence, while ChatGPT and other similar chatbots are outstanding at producing flowing, high-quality language, they are also notorious for making up information and presenting it as true, as Schwartz discovered to his detriment. This is deception and falsehood.

This phenomenon is known as “hallucination”, and it is one of the biggest challenges facing the developers behind these chatbots as they seek to iron out this very problematic crease.

Read also: OpenAI to launch ChatGPT Professional, a premium paid version of the chatbot

