AI Allegations Made Against Professor Who Wrote Key Report

A politically charged case in Minnesota has taken an unexpected twist with accusations that a Stanford University “misinformation expert” relied on a non-existent, potentially AI-generated study in his testimony. The controversy centers on Jeff Hancock, a professor of communication and founder of Stanford’s Social Media Lab, whose expert declaration was submitted in support of Minnesota Attorney General Keith Ellison’s defense of a law banning political deepfakes.

The case was brought by Christopher Kohls, a conservative YouTuber who produces satirical content and is challenging Minnesota’s political-deepfake law as a violation of free speech. The plaintiffs argue that the ban unconstitutionally stifles protected speech, while the state, represented by Ellison, defends it as necessary to combat misinformation.

Hancock’s declaration was intended to bolster the state’s argument by emphasizing the potential influence of deepfake videos on political attitudes. The credibility of his testimony was thrown into question, however, when the plaintiffs’ lawyers discovered a glaring problem.

In a memo filed on November 16, the plaintiffs’ attorneys pointed out that Hancock’s declaration cited a study titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” purportedly published in the Journal of Information Technology & Politics. The lawyers could find no trace of any such study.

The journal is real, but the cited pages correspond to unrelated articles, and extensive searches across Google, Google Scholar, and Bing failed to locate any record of the supposed research. The memo bluntly states: “The article doesn’t exist.”

The attorneys further speculated that the phantom study might have been fabricated by a large language model such as ChatGPT. This phenomenon, known as “hallucination,” occurs when an AI system generates plausible-sounding but entirely false information, including citations to sources that do not exist.

The discovery has cast doubt on the entirety of Hancock’s declaration. The plaintiffs argue that if one citation is fabricated, it undermines the reliability of his broader conclusions, which they claim lack robust methodology or analytic logic. The filing calls on the judge to exclude Hancock’s testimony altogether, citing its reliance on unverifiable and potentially fabricated evidence.

“The court may inquire into the source of the fabrication and additional action may be warranted,” the memo states, hinting at the possibility of further scrutiny into how the supposed study made its way into Hancock’s declaration.

The memo also criticized Attorney General Ellison for relying on Hancock’s declaration without questioning its accuracy. It asserts that the conclusions Ellison emphasizes in his defense of the deepfake law are unsupported by reliable evidence, amounting to mere “expert say-so.”

While Hancock has not commented on the accusations, the incident raises significant concerns about the role of AI in academic and legal contexts. Whether the citation was an honest mistake, a case of carelessness, or deliberate misuse of AI-generated content remains unclear.

The controversy underscores the growing risks of AI misuse in high-stakes environments. As AI tools become increasingly integrated into academic and professional workflows, the potential for errors—or deliberate manipulation—poses challenges for courts, policymakers, and experts alike.
