(quote)

A lawyer used ChatGPT and now has to answer for its ‘bogus’ citations
A filing in a case against Colombian airline Avianca cited six cases that don’t exist, but a lawyer working for the plaintiff told the court ChatGPT said they were real.

Lawyers suing the Colombian airline Avianca submitted a brief full of previous cases that were just made up by ChatGPT, The New York Times reported. After opposing counsel pointed out the nonexistent cases, US District Judge Kevin Castel confirmed, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” and set up a hearing as he considers sanctions for the plaintiff’s lawyers.

Lawyer Steven A. Schwartz admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying.

New York CNN — The meteoric rise of ChatGPT is shaking up multiple industries – including law, as one attorney recently found out.
“The court is presented with an unprecedented circumstance,” Judge Kevin Castel of the Southern District of New York wrote in a May 4 order. Among the purported cases: Varghese v. China Southern Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, none of which appeared to exist to either the judge or the defense, the filing said.

The danger of AI may not be in a technology that develops a will of its own. The real danger, it would seem, is that humans will simply believe anything the machines say, no matter how wrong. ChatGPT doesn’t know it’s telling you inaccurate information. So it’s on us to check facts and care about getting things right.
(unquote)

Image courtesy SOPA Images / LightRocket via Getty Images
