Expert Cautions Chatbots May Allow Police to Alter Reports

### The Surge of AI in Law Enforcement: A Double-Edged Sword?

The adoption of artificial intelligence (AI) across industries has drawn both praise and scrutiny, and law enforcement is no exception. The emergence of AI-driven tools like Axon’s Draft One, which generates police reports from body camera recordings, has ignited an intense debate about the future of policing and the justice system. While the technology promises to save officers time and boost productivity, it also raises substantial concerns about accuracy, bias, and the risk of misuse.

#### The Potential of AI in Law Enforcement

Draft One, created by Axon, a company best known for its Tasers and body cameras, marks a notable advance in policing technology. Powered by OpenAI’s GPT-4 model, the tool can produce a detailed police report within minutes of an incident, relying solely on the audio captured by body cameras. That capability is especially appealing to police agencies, where officers routinely spend hours writing reports, a duty many regard as a burden.
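
Axon has not published Draft One’s internals, but the general pattern the article describes, transcribing body camera audio and then prompting a GPT-4-class model to draft a narrative, can be sketched in a few lines. Everything below is illustrative: the model names, the prompt, and the `draft_report` helper are assumptions, not Axon’s actual implementation.

```python
# Illustrative sketch only: NOT Axon's code. Assumes the OpenAI Python SDK
# and an OPENAI_API_KEY in the environment; the model names and prompt are
# hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()

def draft_report(audio_path: str) -> str:
    """Transcribe body-camera audio, then ask an LLM to draft a report."""
    # Step 1: speech-to-text on the body-camera audio.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )

    # Step 2: draft a narrative report from the transcript alone.
    # A low temperature limits invented detail (see the discussion of
    # "hallucinations" below).
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0.2,
        messages=[
            {"role": "system",
             "content": ("Draft a factual incident report using only "
                         "statements that appear in the transcript.")},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content
```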

In June 2024, the police department in Frederick, Colorado, became the first department in the world to adopt Draft One. The department reported that the tool significantly reduced the time officers spent on paperwork, allowing them to focus more on their core duties. Other departments across the United States quickly followed, eager to reap the benefits of the technology.

Axon has positioned Draft One as a transformative tool that can “accelerate justice” by eliminating the need for manual data entry. The company asserts that AI-generated reports are as accurate as, if not more accurate than, those written by humans. In a double-blind evaluation, Axon found that Draft One’s reports matched or exceeded human-written reports in completeness, neutrality, objectivity, terminology, and coherence.

#### The Risks and Concerns

Despite these apparent benefits, the rollout of AI-generated police reports has raised concerns among legal professionals, civil rights advocates, and digital rights organizations. The central worry is the potential for AI to introduce errors, biases, or even deliberate inaccuracies into police reports, documents that are foundational to the justice system.

One major risk is AI “hallucination,” in which the system fabricates information that is false or unrelated to the incident. Axon says it has reduced such errors by “dialing down the creativity” of Draft One, but the risk remains: systems built on models like GPT-4 are known to occasionally produce incorrect or misleading output, particularly in complex or nuanced situations.
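
“Dialing down the creativity” most likely refers to lowering the sampling temperature, the parameter that controls how sharply a model concentrates probability on its most likely next token. The self-contained sketch below, using made-up logits and a standard softmax, shows the effect: at low temperature the distribution collapses onto the top candidate, which reduces, but does not eliminate, the chance of fabricated detail.

```python
# How temperature reshapes a model's next-token distribution.
# The logits here are hypothetical scores for three candidate tokens.
import math

def softmax(logits: list[float], temperature: float) -> list[float]:
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax(logits, temperature=1.0))  # ~[0.66, 0.24, 0.10]: varied output
print(softmax(logits, temperature=0.2))  # ~[0.99, 0.01, 0.00]: near-deterministic
```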

Moreover, reliance on AI-generated reports could exacerbate existing biases in policing. Body camera footage, which captures events from the officer’s point of view, can already skew a narrative in favor of law enforcement. If AI systems are trained on biased data, or are steered by how officers phrase their observations, they could reinforce those biases and produce unjust outcomes.

Legal scholars such as Andrew Ferguson have warned that AI-generated reports could “digitally contaminate” the evidence on which criminal proceedings depend. The worry is that AI might subtly reshape a narrative in ways that entrench police perspectives or mislead the courts, with far-reaching consequences in cases where the police report is the principal piece of evidence.

#### The Risk of Misuse

Another concern is the potential for officers to misuse AI-generated reports. As the technology spreads, there is a risk that officers could use it to shape how an incident is portrayed. For instance, an officer could deliberately phrase spoken remarks to steer the AI’s interpretation, producing a report that favors the officer’s version of events even when it is not entirely accurate.

This concern is compounded by the fact that AI-generated reports are already being used in more serious cases, despite early guidance to limit them to minor incidents. In some jurisdictions, officers have begun applying Draft One to a broader range of cases, raising questions about the accuracy and reliability of the reports in more complex situations.

Civil rights advocates worry that widespread adoption of AI-generated reports could intensify police scrutiny, particularly of marginalized communities. If the technology makes report writing easier, officers may become more inclined to press charges in situations where they might previously have let the matter go.

#### The Call for Transparency and Oversight

Given these risks, experts are calling for greater transparency and oversight in the use of AI-generated police reports. Ferguson has argued that any use of this technology in legal proceedings should be accompanied by thorough documentation of how the AI models were developed, what data they were trained on, and how they were evaluated. Such transparency is essential to ensure the reports are accurate and trustworthy.

There is also a pressing need for independent evaluation and auditing of AI-generated reports. Civil rights organizations such as the Electronic Frontier Foundation (EFF) have urged police agencies to make these reports available for outside examination, so that impartial experts can assess their accuracy and fairness.
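
What such an audit might automate is an open design question. As a purely hypothetical illustration, an auditor with access to both a report and the underlying body camera transcript could flag report sentences with little lexical support in the audio. The word-overlap heuristic below is a toy, not a validated method, and the sample transcript and threshold are invented for the example.

```python
# Toy audit heuristic: flag report sentences whose words barely overlap
# with the body-camera transcript. A real audit would require far more
# rigorous (and human) review; this only sketches the idea.
import re

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def flag_unsupported(report: str, transcript: str,
                     threshold: float = 0.5) -> list[str]:
    transcript_words = tokenize(transcript)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", report.strip()):
        words = tokenize(sentence)
        # Flag sentences where under half the words appear in the audio.
        if words and len(words & transcript_words) / len(words) < threshold:
            flagged.append(sentence)
    return flagged

transcript = "Dispatch reported a disturbance. The driver admitted speeding."
report = ("The driver admitted speeding. "
          "The suspect appeared intoxicated and hostile.")
print(flag_unsupported(report, transcript))
# -> ['The suspect appeared intoxicated and hostile.']
```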