AI and Misinformation Scandal: Stanford Expert Charged $600 per Hour for AI-Generated Court Document With Fake Citations
A controversy has emerged involving Jeff Hancock, a Stanford communication professor and expert in technology and misinformation, who stands accused of using artificial intelligence (AI) to generate a court document with fabricated citations. The case has raised significant concerns about the reliability of AI-generated content and its potential to spread misinformation in legal and academic settings.
AI-Generated Misinformation in Legal Settings
In November, Hancock filed a 12-page declaration in defense of Minnesota’s 2023 law criminalizing the use of deepfakes to influence elections. The law, part of the state’s efforts to combat misinformation in politics, has been challenged by Republican State Representative Mary Franson and conservative satirist Christopher Kohls.
Hancock, acting as an expert witness for the Minnesota Attorney General, argued that deepfakes can enhance the persuasiveness of misinformation and evade traditional fact-checking methods.
Although Hancock attested under penalty of perjury that his declaration was “true and correct,” several of his cited sources appear to be the product of AI hallucinations, a phenomenon in which generative AI tools invent details, such as non-existent academic papers, without the user’s knowledge.
The Role of AI in Misinformation and Hallucinated Citations
The issue was flagged by Franson and Kohls’ attorney, Frank Bednarz, who argued that Hancock’s citation errors were likely the result of AI hallucinations.
“The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” Bednarz wrote in a filing. The revelation raises questions about the trustworthiness of AI models and their potential to create and disseminate false information, even in formal, high-stakes settings like court cases.
The irony is compounded by the fact that Hancock’s declaration warned of deepfakes as a threat to democracy, even as the professor inadvertently contributed to the spread of misinformation through AI-generated content. The incident highlights the dangers of relying on generative AI tools without rigorously verifying the information they produce.
A Growing Concern in AI Misinformation Research
Research on AI’s role in misinformation is a growing field, with experts like Hancock examining how AI-generated media, including deepfakes and machine-written briefings, can influence public opinion and political outcomes. Incidents like this one, however, underscore the challenge of ensuring the accuracy and reliability of AI-generated content.
As AI models become more advanced, there is an increasing need for caution and transparency in their application, particularly in fields that depend on factual accuracy, such as law and academia. The scandal also raises important questions about how generative AI tools should be used in professional settings. With AI hallucinations capable of undermining the quality and integrity of critical documents, the case serves as a cautionary tale about the need for careful oversight of AI usage, especially when handling sensitive or authoritative information.