A Stanford University “misinformation expert” has been accused of using artificial intelligence (AI) to create testimony that was later used by Minnesota Attorney General Keith Ellison in a politically charged case.
Jeff Hancock, a communications professor and founder of the school’s Social Media Lab, provided expert testimony in a lawsuit involving a satirical conservative YouTuber named Christopher Kohls. The lawsuit concerns Minnesota’s recent ban on political deepfakes, which the plaintiffs claim is an attack on free speech.
Hancock’s testimony was submitted to the court by Ellison, who argued in favor of the law. According to Stanford University’s website, Hancock is “well known for his research into how people use deception with technology, from sending texts and emails to detecting fake online reviews.”
But lawyers for the plaintiffs asked the federal judge in Minnesota hearing the case to throw out Hancock’s testimony, saying he cited research that does not exist.
“Professor Jeff Hancock’s declaration cites a study that does not exist,” the lawyers argued in a recent 36-page memo. “No article by that title exists.”
The “study,” titled “The Effects of Deepfake Videos on Political Attitudes and Behavior,” was purportedly published in the Journal of Information Technology & Politics. The Nov. 16 filing notes that the journal is real but has never published a study by that title.
“The publication exists, but the cited pages belong to unrelated articles,” the lawyers argued. “Perhaps the study was a ‘hallucination’ generated by an AI large language model like ChatGPT.”
“Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question, especially when much of the commentary contains no methodology or analytic logic whatsoever.”
The document also criticizes Ellison, claiming that “the conclusions Ellison relies on most heavily have no methodology behind them and consist solely of expert say-so.”
“Hancock could have cited real research similar to the proposition in paragraph 21,” the memo said. “Instead, the existence of a fabricated citation that Hancock (or his assistants) did not even bother to verify casts doubt on the quality and veracity of the entire declaration.”
The memo further bolsters the argument that the citation is fabricated by noting that the attorneys ran multiple searches for the study.
“Neither the title nor any portion of the alleged article appears anywhere on the internet as indexed by Google and Bing, the most commonly used search engines,” the document states. “A search on Google Scholar, a specialized search engine for academic papers and patent publications, turns up no articles matching the description of the citation authored by the purported authors that include the term ‘deepfake.’”
“Maybe this was just a copy-and-paste error? Not so,” the filing later flatly states. “That article doesn’t exist.”
The lawyers concluded that if the declaration is even partially fabricated, it is wholly unreliable and should be excluded from the court’s consideration.
“Professor Hancock’s declaration should be excluded in its entirety because it is based, at least in part, on fabricated material likely generated by an AI model, which calls into question its conclusory assertions,” the document concludes. “The court may wish to inquire into the source of the fabrication, and further action may be warranted.”
Fox News Digital has reached out to Ellison’s office, Hancock and Stanford University for comment.