AI Police Reports: Year in Review

Law enforcement agencies rapidly adopted AI tools for writing police reports in 2024, raising serious concerns about accuracy, transparency, and accountability in criminal justice proceedings.

The Problem with AI Police Reports

Police departments across the United States embraced generative AI tools, particularly Axon’s Draft One, to automate report writing. This technology analyzes body camera audio and generates police reports that officers can edit before submission. However, the system creates fundamental problems for justice and transparency.

The King County prosecuting attorney’s office in Washington state banned police from using AI-generated reports, stating: “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now… For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.”

Designed to Defy Transparency

Axon deliberately designed Draft One to eliminate evidence of AI involvement. When officers export their final reports, the system erases the initial AI-generated draft. This creates a dangerous accountability gap: if an officer lies in court and their testimony contradicts their report, they can claim "the AI wrote that part," and because the original draft no longer exists, no one can check that claim.

An Axon product manager explicitly confirmed this design choice: “We don’t store the original draft and that’s by design… the last thing we want to do is create more disclosure headaches for our customers.”

This opacity makes it nearly impossible for defense attorneys, judges, or the public to determine which portions of a police report were generated by the AI and which were written by the officer.

Rapid Proliferation Through Bundling

AI police reports spread quickly because Axon dominates the body camera market and bundles Draft One with camera purchases. Police departments often acquire AI report-writing capabilities without specifically requesting them, making oversight and public awareness difficult.

Legislative Pushback Begins

Two states passed important transparency requirements in 2024:

Utah’s SB 180 requires police reports created with AI to include disclaimers and officer accuracy certifications.

California’s SB 524 goes further by mandating:

  • Clear disclosure when AI assists in report writing
  • Retention of original AI drafts for review
  • Prohibition on vendors sharing police data with third parties

These laws represent the first legislative attempts to address the transparency problems created by AI-generated police reports.

What’s at Stake

Police reports influence every stage of criminal proceedings—from initial charges to plea negotiations to trial outcomes. When AI generates these critical documents without transparency or accountability mechanisms, the entire justice system’s integrity suffers.

The technology remains unproven for such high-stakes applications, yet departments continue adopting it at scale.

Next Steps

More states will likely follow California and Utah in regulating or banning AI police reports. Privacy advocates and civil liberties organizations continue pushing for stronger oversight and transparency requirements.

Citizens can request records about their local departments’ AI usage, though Axon’s design makes such requests challenging. EFF provides guidance for crafting effective public records requests to uncover AI police report usage.

The fight over AI police reports reflects broader questions about algorithmic accountability in criminal justice—questions that will only grow more urgent as these technologies proliferate.