Recent media reports have publicized court decisions imposing penalties for the improper use of generative artificial intelligence (AI) in submissions to the court. Despite those cautionary tales, new examples continue to emerge of attorneys who fail to independently confirm the accuracy of their AI research and to ensure that the authority they cite is not the product of an AI hallucination, i.e., inaccurate information fabricated entirely by the AI tool.
The intent of this short column is not to give advice to the judiciary. Nonetheless, a recent article in a major newspaper disclosed that two federal judges admitted to using AI in rulings that contained hallucination errors and that they later had to retract. Stephen Dinan, “Two federal judges admit to using AI in botched rulings after months of silence,” The Washington Times (Oct. 23, 2025).
Recent opinions from different states also illustrate this continuing problem. See An v. Archblock, Inc., 2025 WL 1024661 (Del. Ch. Apr. 2, 2025), and Ader v. Ader, 87 Misc. 3d 1213(A) (N.Y. Sup. Ct. 2025). In both cases, the use of generative AI that was not independently verified led to the inclusion of AI-hallucinated authority in pleadings submitted to the court.
The New York court in Ader made clear that its primary concern was not the use of AI itself but the attorneys’ failure to confirm the accuracy of factual and legal representations sourced from AI. Attorneys have a duty to verify their work, whether AI-generated or not, to maintain the integrity of their submissions. That duty cannot be delegated to a software program.
The court noted that the risks and consequences of AI-hallucinated citations are well-documented and underscored that reliance on the research of others is not a valid excuse for presenting false citations. In the Ader case, the defendants’ counsel initially denied using unvetted AI but later conceded that AI was used and not properly verified, leading to the inclusion of false citations and related quotations in defendants’ briefs. Finding that conduct frivolous, the court sanctioned both the defendants and their counsel, requiring them to compensate the plaintiff for reasonable costs and attorney’s fees incurred due to the delay caused by these fake quotes. The court mandated that a copy of its decision be submitted to relevant legal ethics committees to deter such conduct in the future.
The Delaware Court of Chancery in An v. Archblock went one step further. In conjunction with its Letter Opinion denying the offending motion to compel, the court issued an Order Requiring Certification on Use of Generative AI, which required the offending party to file, with all future filings in the case, a certification that: 1) confirms whether generative AI was used to prepare the filing; 2) identifies the specific AI platform that was used; 3) identifies the specific pages, paragraphs, or sections of the filing that were created using generative AI; and 4) confirms that any text in the filing created using generative AI had undergone human review “for accuracy and completeness. This includes confirming that any citation to legal authority is accurate and that the authority stands for the cited proposition.”
The pro se party in An v. Archblock had used generative AI to draft a motion to compel discovery. The motion contained both hallucinated citations and hallucinated quotes attributed to cases that do exist. The court stated that “[t]he use of GenAI in legal work is not inherently problematic. [] By enhancing the accessibility of legal services, GenAI can lower barriers to justice. Still, GenAI carries significant risks to the legal system if it is used carelessly.” An v. Archblock, 2025 WL 1024661, at *2. See generally Govette v. Bongiovani, 2025 WL 2926536 (Del. Ch. Oct. 15, 2025) (dismissing complaint for, among other reasons, lack of candor to the court in filing fabricated documents and making false allegations).
Although this short column provides only a few examples of the misuse of AI in the legal profession, law professors have examined the risks of AI in the financial marketplace and other segments of society. See, e.g., Tom C.W. Lin, Artificial Intelligence, Misinformation, and Market Misconduct, 85 Ohio St. L.J. 685 (2025).
Practitioners must be wary of allowing AI to replace either legal research that they themselves verify or their independent professional judgment.