AI-Created Evidence More Frequently Utilized in Courtrooms: Present Perspectives

George R. R. Martin is suing OpenAI, the maker of ChatGPT, for copyright infringement. With AI's arrival in legal settings, a new concern about its use has emerged.

A property dispute reached a California court in Mendones v. Cushman & Wakefield, Inc., where a video was submitted as purported witness testimony. Judge Victoria Kolakowski, however, sensed something was wrong with the footage. It was ultimately revealed to be an AI-generated deepfake. The term deepfake refers to AI-produced media that imitates a person's voice or likeness doing things they never actually did. Although the submitting party argued that it was the judge's burden to prove the video was AI-generated, the case was dismissed.

The incident has raised alarms within the legal system and among citizens nationwide. While AI can be helpful in legal proceedings, for instance by clarifying evidence or generating models that aid comprehension, it can also be used to deceive. Many fear that fabricated audio recordings, images, or videos could be used against individuals, and judges worry that AI deepfakes could lead to wrongful convictions.

Efforts to address AI in the legal system