AI “Lawyer” Causes Chaos in New York Courtroom

Sansoen Saengsakaorat

A New York appellate courtroom was thrown into chaos on March 26 after judges realized the man arguing a case before them wasn't a person at all, but a synthetic avatar generated by artificial intelligence.

Jerome Dewald, a plaintiff representing himself in an employment dispute, appeared before the New York State Supreme Court’s Appellate Division in Manhattan. Rather than speaking directly, Dewald submitted a pre-recorded video to make his argument. The video opened with a sharp-looking man in a button-down shirt, seated in what appeared to be a home office. “May it please the court, I come here today a humble pro se before a panel of five distinguished justices,” the avatar began.

It didn’t take long for alarm bells to ring. Justice Sallie Manzanet-Daniels interrupted the video almost immediately, demanding clarification.


“Hold on,” she said. “Is that counsel for the case?”

Dewald admitted flatly: “I generated that. That’s not a real person.”

The admission set off a storm of disapproval. Manzanet-Daniels made it clear the court had not been informed about the nature of the video and scolded Dewald for withholding that information. “It would have been nice to know that when you made your application,” she said sternly. “You did not tell me that, sir.”

Dewald later submitted a written apology, explaining that he couldn’t afford an attorney and meant no harm. He told the Associated Press that he had initially considered using an avatar in his own likeness but opted instead for a professionally designed character created by a San Francisco-based tech company.

“The court was really upset about it,” Dewald said. “They chewed me up pretty good.”

No ruling has been issued yet on whether Dewald will face formal sanctions, but legal experts are already warning that this could be the first in a wave of courtroom controversies sparked by AI misuse.

This isn't the first time artificial intelligence has landed someone in hot water inside a courtroom. Last year, two New York attorneys were fined $5,000 after submitting a legal brief filled with fake citations generated by ChatGPT. In another case, lawyers representing Michael Cohen, Trump's former personal attorney, filed court documents citing completely fabricated rulings. Cohen, who supplied the citations, said he hadn't realized the AI tool he was using could hallucinate case law.

AI-generated content is increasingly making its way into the legal system, but this may be one of the most egregious examples of someone presenting a machine as if it were a licensed attorney in front of a real panel of judges. While Dewald claims he didn't mean to deceive, critics argue that this kind of behavior undermines the integrity of the legal process and opens the door to a dangerous precedent.

The incident also highlights a deeper tension brewing in the legal world: how to handle the rapid advancement of artificial intelligence in an industry that depends on truth, precedent, and procedure. Judges are already facing a wave of AI-written filings, fake citations, and even deepfaked exhibits. Now they’re being asked to rule on whether an artificial lawyer can appear in court — and most aren’t having it.

There's no question that AI tools can offer valuable assistance to pro se litigants, researchers, and lawyers alike. But when it comes to standing before a judge and arguing a case, passing off a digital stand-in as legal counsel, even with the click of a mouse, still crosses a major ethical line.

In Dewald’s case, the AI didn’t make a legal error — but it did violate the court’s expectations for transparency and integrity. If that line keeps getting pushed, it’s only a matter of time before some court somewhere decides to push back hard.

For now, Dewald gets to be the guy who introduced a virtual mouthpiece to a real courtroom. The judges? They were not impressed.