Judge Questions Expert Witness for Abusing AI Tool Copilot to Invent Qualifications

### Judge Urges Swift Limits on Expert Witnesses' Use of AI in Court

In a recent development in the legal arena, a judge in New York has expressed alarm over the growing reliance on artificial intelligence (AI) by expert witnesses at trial. The judge's caution followed an incident in which expert witness Charles Ranson used Microsoft's Copilot chatbot to estimate damages in a property dispute, raising serious questions about the trustworthiness and transparency of AI-generated evidence in judicial proceedings.

#### The Case: AI Involvement in a Property Dispute

The case at hand involved a rental property in the Bahamas valued at $485,000, linked to a trust established for the son of a deceased individual. The court was responsible for deciding if the deceased man’s sister, acting as executrix and trustee, had violated her fiduciary responsibilities by postponing the property’s sale and utilizing it for personal vacations. The surviving son accused his aunt of self-dealing and sought compensation for her alleged appropriation of the property.

To bolster the son's claims, expert witness Charles Ranson was retained to assess the financial loss. Although experienced in trust and estate litigation, Ranson lacked qualifications in real estate valuation. To compensate for this gap, he turned to Microsoft's Copilot, an AI chatbot, for help calculating damages. That choice ultimately led to the complications that prompted the judge's intervention.

#### The Judge’s Worries: AI and Trustworthiness

Judge Jonathan Schopf, overseeing the case, raised profound concerns regarding AI’s role in legal contexts. In a formal decree, Schopf highlighted the swift advancements in AI technology and its associated reliability challenges, asserting that any AI use in court must be disclosed prior to presenting testimony or evidence. He acknowledged that the court had “no objective understanding as to how Copilot functions” and warned of potential disruption within the legal system if experts began to excessively depend on AI tools like chatbots.

The judge’s worries were amplified by Ranson’s inability to elucidate the manner in which he employed Copilot to derive his damages estimate. Ranson couldn’t remember the particular prompts he had input or the sources from which the chatbot obtained its information. Additionally, he confessed to a lack of fundamental understanding regarding Copilot’s operation and its output generation.

#### AI in the Court: An Escalating Issue

While the utilization of AI in legal settings is not entirely unprecedented, its significance has recently intensified. AI applications like Copilot are created to support professionals by automating various tasks, including report drafting, research, and calculations. Nevertheless, the dependability of outputs produced by AI has faced scrutiny, especially in high-stakes legal environments.

Eric Goldman, an authority in Internet law, expressed astonishment at Ranson’s reliance on Copilot. Goldman contended that expert witnesses are engaged for their specialized knowledge, and delegating that expertise to AI compromises the core purpose of their role. Additionally, he pointed out that if an expert witness merely utilized a chatbot for calculations, attorneys could circumvent the expert entirely and use the AI directly, potentially saving both time and expenses.

Goldman also underscored a significant concern regarding generative AI: its propensity to yield “hallucinations,” or outputs that are factually erroneous or misleading. This renders AI-generated numerical computations particularly unreliable in legal settings, where precision is critical.

#### Court’s Assessment: Evaluating Copilot’s Trustworthiness

To evaluate Copilot's reliability, Judge Schopf performed his own test. The court submitted the same question to Copilot multiple times—"Can you compute the value of $250,000 invested in the Vanguard Balanced Index Fund from December 31, 2004, through January 31, 2021?"—and received a different answer on each attempt. This inconsistency further cast doubt on the dependability of AI-generated evidence.
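The inconsistency the court observed stands in contrast to an ordinary deterministic calculation, which returns the same result every time it is run with the same inputs. As a minimal sketch only, the compound-growth computation behind the court's question might look like the following; the 7% annual return is a hypothetical assumption for illustration, not the fund's actual performance over that period.

```python
def future_value(principal: float, annual_rate: float, years: float) -> float:
    """Compound a principal at a fixed annual rate over a number of years."""
    return principal * (1 + annual_rate) ** years

# Dec 31, 2004 through Jan 31, 2021 is roughly 16 years and 1 month.
years = 16 + 1 / 12

# Unlike the chatbot's varying answers, the same inputs always
# produce the same output.
v1 = future_value(250_000, 0.07, years)  # 0.07 is an assumed rate
v2 = future_value(250_000, 0.07, years)
assert v1 == v2
```

The point of the sketch is reproducibility: any party, including the opposing side, can rerun the formula and verify the figure, which is precisely what the court could not do with Copilot's output.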

Schopf determined that Copilot’s results were not sufficiently reliable for courtroom use, noting that even the chatbot acknowledged its limitations. When queried about the reliability of its calculations for court applications, Copilot stated that its outputs should always be verified by experts and should include professional assessments.

#### The Outlook for AI in Legal Matters

Judge Schopf’s determination has ignited a wider dialogue concerning AI’s role within the legal framework. Although AI holds the potential to enhance certain elements of legal work, its application in court settings remains contentious. Schopf proposed that, until definitive guidelines are established, courts should mandate complete transparency whenever AI is utilized to generate evidence or testimony. This approach would help avert the inclusion of unreliable or inadmissible AI-generated evidence, which could disrupt legal processes.

Schopf also stressed that the increasing prevalence of AI in daily life does not inherently grant admissibility to its outputs in court. He argued that AI-generated evidence must meet the same standards of reliability and accuracy as any conventional evidence.

#### The Conclusion: No Violation of Fiduciary Duty

In the