Two Federal Judges Retract Rulings Over AI-Fabricated Filings

Well, here it is — the collision course between AI hype and the courtroom has officially gone from embarrassing to dangerous.

Two federal judges in two different states just had to pull their own rulings after discovering that the lawyers’ filings they relied on were riddled with fabricated quotes, incorrect case law, and outright nonsense: the kind of thing that happens when attorneys outsource their work to ChatGPT and don’t bother to check the results.

In New Jersey, U.S. District Judge Julien Neals had to withdraw his denial of a motion to dismiss in a securities fraud case after lawyers flagged “pervasive and material inaccuracies” in the filings, including made-up quotes and misstated case outcomes. Translation: the court relied on bogus info.

Down in Mississippi, U.S. District Judge Henry Wingate faced a similar debacle after issuing a temporary restraining order blocking a state law targeting DEI programs. Lawyers later notified him that the ruling cited declaration testimony from four individuals who weren’t even part of the case record. His order? Based on phantom evidence. Wingate had to replace the ruling altogether.

And in at least one of these cases, it’s confirmed: AI was involved. One person familiar with the Mississippi order told Fox News Digital they had “never seen anything like this” before.

This isn’t just a minor oopsie. Federal judges are now openly sanctioning lawyers for this kind of malpractice. In May, a California judge hit two law firms with $31,000 in fines for filing AI-generated garbage. Just last week, an Alabama judge sanctioned three attorneys for filing ChatGPT-generated briefs loaded with hallucinated quotes, calling it “serious misconduct that demands a serious sanction.” She even referred them to the state bar for discipline.

And here’s the kicker: this problem isn’t going away. According to Pew, 34% of U.S. adults now say they’ve used ChatGPT, double the share from 2023. Among workers under 30? That number jumps to 58%. Which means more and more of these AI “hallucinations” are going to slip into places where accuracy isn’t optional — like legal filings.

The American Bar Association’s rule is crystal clear: lawyers are responsible for the accuracy of their submissions, even if they use AI. And yet, here we are — with judges literally having to retract rulings because attorneys fed them unverified robot-generated nonsense.

This isn’t clever or efficient. It’s malpractice. And if these cases are any indication, the courts aren’t just going to slap wrists — they’re gearing up to make examples out of anyone who tries to pass off AI fantasy as legal fact.
