NYC judge scolds lawyer for using ChatGPT to write brief full of ‘legal gibberish’; attorney swears robot ‘duped’ him

A Manhattan judge scolded a pair of personal injury lawyers on Thursday for using ChatGPT to whip up a legal brief chock-full of fake cases — with the lawyer who relied on the tech swearing it was a humiliating mistake and that he’d been “duped” by A.I.

Steven Schwartz, an attorney with the law firm Levidow, Levidow & Oberman, used the A.I. chatbot to generate arguments against Colombian airline Avianca in a March 1 filing.

In arguing that the court shouldn’t dismiss his client Roberto Mata’s pending lawsuit against the company, which demands compensation for an injury he suffered on a 2019 flight, Mata’s lawyers said his case was supported by precedent in Varghese v. China Southern Airlines Ltd., Petersen v. Iran Air and several other cases.

It turns out, the cases didn’t even exist.

At a Manhattan federal court hearing to determine whether the attorneys behind the brief should face sanctions, an emotional Schwartz said artificial intelligence had pulled the wool over his eyes and that he thought he’d been using a “super search engine.”

“My assumption was I was using a search engine that was searching [sources] that I didn’t have access to,” he told the court, saying he was “extremely regretful” he hadn’t taken further steps to vet the information. Schwartz said he assumed the cases weren’t online, had been appealed or were otherwise hard to track down when he couldn’t find them anywhere but ChatGPT.

With some of the fake cases, it appeared Schwartz didn’t even read what the chatbot churned out.

“Can we agree that is legal gibberish?” an exasperated Judge Kevin Castel asked Schwartz about one of the cases.

“Looking at it now, yes,” Schwartz said.

Schwartz’s colleague Peter LoDuca, who signed and submitted the phony brief, said he had not worked on its contents, nor had he known Schwartz used ChatGPT.

“It just never occurred to me it could be making up cases,” Schwartz said during the two-hour hearing that packed two courtrooms with dozens of lawyers, prosecutors and law school students. “I continued to be duped by ChatGPT.”

ChatGPT, an emerging A.I. technology that some pundits say could be as revolutionary as the internet, operates as a chatbot. Users can ask it about virtually anything, and it will generate a detailed answer in seconds, drawing on the vast trove of online text it was trained on.

But as many early users have witnessed, and some students have learned the hard way, the robot’s answers appear thoughtful and sophisticated yet are often jam-packed with confidently stated inaccuracies.

Judge Castel said what happened after Schwartz cited the fake cases was of more concern than the initial filing.

“I doubt we would be here today if the narrative ended there,” Castel said Thursday. “What happened thereafter is an important part, an essential part, of that narrative.”

When the airline’s lawyers said in April that they couldn’t find the cases Schwartz had cited, Castel told Mata’s legal team to submit the court records, prompting Schwartz’s firm to file copies of imaginary cases.

The airline’s lawyers replied by questioning the cases’ authenticity. Soon after, Schwartz admitted in a filing that he had used ChatGPT to assemble the brief, saying the A.I. had lied to him when he asked whether the cases were real.

“[I]s varghese a real case,” the lawyer asked the robot, a screenshot shows.

“Yes,” ChatGPT replied, telling him it was “a real case.”

On Thursday, Schwartz, who said his kids had told him about ChatGPT, said he still thought the cases were legitimate when he submitted the fake copies in April.

“I hate to keep saying the same thing, but it’s the truth, and I’m being completely transparent with the court in that I did not, could not, comprehend that ChatGPT could fabricate cases, so I complied with the court order and went back to the only place that I could find the cases,” Schwartz said, later adding he’d recently taken training on A.I.

“It became my last resort.”

The veteran attorney sounded on the verge of tears as he pleaded for the court’s forgiveness. He said he’d never come close to being sanctioned in his 30-year career.

“I deeply regret my actions in this matter that led to this hearing today,” Schwartz said, sounding torn.

“I have suffered both professionally and personally due to the widespread publicity this issue has generated. I am both embarrassed and humiliated and extremely remorseful. To say this has been a humbling experience would be an understatement.”

Judge Castel will issue a written decision on whether to sanction Schwartz, who declined to comment as he left court.

A lawyer for Schwartz, Ronald Minkoff, told the judge his client didn’t typically work on federal cases and had made an honest mistake in a pinch.

“He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live ammo.”