Strictly Legal | Artificial intelligence on trial

Jack Greiner, partner of Faruki PLL

A Colorado lawyer is the latest attorney to get burned for using artificial intelligence to draft papers for him. His case and others like it have led one area judge to ban the use of AI altogether in cases before him.

Zachariah Crabill is the Colorado lawyer in question. In April of this year, a client hired Crabill to prepare a motion to set aside a judgment in the client’s civil suit. Crabill had never handled such a matter, so he used the AI platform ChatGPT to prepare the appropriate papers. ChatGPT’s work product included citations to cases that were made up. Crabill did not read the cases or otherwise verify they were real before filing the ChatGPT work product with the court.

Putting aside for a moment the fact that Crabill failed to check that his citations were real, it is worth considering how ChatGPT works to better understand why it spits out phony citations. As I understand it, ChatGPT takes a prompt and generates words one at a time (albeit at incomprehensible speed). The process is referred to as “autoregressive modeling”: the model predicts the next word in the sequence based on the words that came before it.
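For readers who want intuition for that “predict the next word” process, here is a toy sketch in Python. It is a crude bigram model — drastically simpler than anything underlying ChatGPT, and offered only as an illustration — that picks each next word based solely on which word most often followed the previous one in its tiny training text:

```python
# Toy illustration of autoregressive generation: a bigram model that
# always emits the most frequent next word seen in its training text.
# This is a drastic simplification, for intuition only.
from collections import Counter, defaultdict

corpus = ("the court held that the motion was denied . "
          "the court held that the claim was barred .").split()

# Count which word follows which in the training text.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start, n=6):
    """Generate n more words, each predicted from the word before it."""
    words = [start]
    for _ in range(n):
        counts = next_counts.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # prints "the court held that the court held"
```

Note that the model has no notion of truth: it strings together words that look like the briefs it was trained on, which is precisely how plausible-looking but nonexistent citations can emerge.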

But what is logical for artificial intelligence is not necessarily true.

So, having modeled briefs that came before it, ChatGPT understands that a long sentence or paragraph typically ends with words like “Smith v. Jones.” But those are only words, not necessarily real case citations. Once we understand that, we can see how the phony citations get generated. It doesn’t, however, excuse Crabill’s failure to check.

To make matters worse, Crabill discovered the problem before a hearing on the motion, but didn’t inform the court or otherwise rectify the situation.  When the judge inquired about the citations, Crabill blamed it on an intern.  Crabill finally came clean six days after the hearing when he filed an affidavit admitting he used ChatGPT to draft the motion.

Colorado suspended Crabill for one year and one day, with ninety days to be served and the remainder to be stayed upon Crabill’s successful completion of a two-year period of probation, with conditions. All in all, Crabill should have listened to the lesson the nuns at St. Martins drilled into my head – always check your work. Or in this case, I guess, ChatGPT’s.

Closer to home, Judge Michael Newman, a judge in the United States District Court for the Southern District of Ohio, has instituted a standing order that provides:

“No attorney for a party, or a pro se party, may use Artificial Intelligence (“AI”) in the preparation of any filing submitted to the Court. Parties and their counsel who violate this AI ban may face sanctions including, inter alia, striking the pleading from the record, the imposition of economic sanctions or contempt, and dismissal of the lawsuit.”

An overreaction? Perhaps, but it’s not an unreasonable position.  And lawyers who don’t like it should thank Zachariah Crabill.

Jack Greiner is a partner at the Faruki PLL law firm in Cincinnati. He represents Enquirer Media in First Amendment and media issues.

This article originally appeared on Cincinnati Enquirer: Strictly Legal: Can I use AI to help my legal case?