This would be interesting if it passes, but proposed bills are mostly off topic on HN because they rarely amount to much.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
As someone who has read thousands of police reports, I'm of mixed thoughts about this. Obviously we have this current problem of AI hallucinations.
Generally police reports are very poor. Police don't want to write them. They are usually incredibly short, incredibly vague, and always written to fit the narrative the officer is trying to present.
What happens is that there is usually a substantial delay between arrest and trial, so the officer will have forgotten the incident by that point (he has had 1000 other crimes to tackle). Day of trial the prosecutor will call the police to their office in the morning and get them to read the report and "refresh" their memory. We know how human memory is.
On the one hand, writing a comprehensive report using bodycam footage might improve the quality and veracity of the reports, and potentially cut down on improper charges and improper convictions.
But it will (a) make police totally lazy about reports, and (b) sometimes hallucinate or mischaracterize an interaction that an actual human would understand differently.
They definitely need to be flagged though.
And that is the problem. Those officers who barely remember the incident's details use those reports to "refresh" their memory, and then testify to details they no longer actually recall as "facts". Given that a police report is effectively testimony, why would anyone consider allowing AI-written reports as facts/evidence? I would rather have a poorly written but accurate report of what an officer witnessed than an AI-written report that merely "resembles" the facts.
Yeah, it's literally adding noise to a report to make it longer, while also encouraging falsified memories inside the officers.
As an absolute bare minimum, all the initial data that was fed into the LLM needs to be preserved for the lifetime of the generated report and available to defense counsel.
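Concretely, something like a provenance record sealed alongside the report would do. A minimal sketch (entirely hypothetical; the field names are my invention, not any vendor's actual schema):

    import hashlib, json
    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone

    @dataclass
    class ReportProvenance:
        report_id: str
        model_name: str      # exact model identifier and version used
        prompt_text: str     # the full prompt, verbatim
        source_hashes: dict = field(default_factory=dict)  # input file -> SHA-256
        sealed_at: str = ""

        def add_source(self, name: str, data: bytes) -> None:
            # Hash each input so tampering with the archived copies is detectable.
            self.source_hashes[name] = hashlib.sha256(data).hexdigest()

        def seal(self) -> str:
            self.sealed_at = datetime.now(timezone.utc).isoformat()
            return json.dumps(asdict(self), indent=2)

    prov = ReportProvenance("2025-001234", "draft-model-v1",
                            "Summarize the attached bodycam audio...")
    prov.add_source("bodycam_audio.wav", b"raw recording bytes")
    print(prov.seal())  # archived for the lifetime of the report

Hashing alone isn't preservation, of course; the raw inputs themselves would also have to be archived, with the hashes there to prove the archive wasn't altered before it reached defense counsel.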
Could the writing process be replaced by a recorded conversation instead?
Talk to an AI, answer its questions, the convo is recorded, and then they can access both the recording and the transcript before trial.
I don't think that's going to stop memory from faltering, change motives/attitudes toward this work, or meaningfully reduce the work required.
This actually seems like a good idea. The AI would be good at listening and asking open-ended questions to get more detail out of the officer. It could dig into parts of the story where it thinks more information is needed.
I don't see your scenario as really valid, as we still have mountains of bodycam footage & audio, and AI could be used to supplement or validate human reports. I really like having video, Axon's tools, and even AI available when needed, but I don't like the idea of AI being used as the primary interpreter and source-of-truth record.
I would rather have an AI write the incident reports; I trust AI more than I trust the police to be objective.
This is a false dichotomy; you don't need to trust either. I'd prefer a law that makes the bodycam footage public & accessible by default. It could even be automated with geo-tagging of where & when the recording happened.
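A rough sketch of what that public geo-tagged index could look like (all the names here are hypothetical; nothing like this is mandated anywhere today):

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ClipIndexEntry:
        clip_id: str
        lat: float
        lon: float
        started_at: datetime

    # A publicly queryable index: where and when each recording happened.
    public_index: list[ClipIndexEntry] = []

    def register_clip(clip_id: str, lat: float, lon: float) -> None:
        # The where & when goes public the moment the camera starts; the
        # footage itself follows by default unless a logged exemption applies.
        public_index.append(ClipIndexEntry(clip_id, lat, lon,
                                           datetime.now(timezone.utc)))

    register_clip("cam42-000183", 40.7608, -111.8910)  # hypothetical clip ID
    print(public_index[0])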
The fact that police cars can start without the body cams being fully charged, the fact that they don't stream to a centralized and reliable server 100% of the time that they're worn, and the fact that they can be turned off while an officer is on duty with effectively zero consequences all make it pretty clear that no one in power cares about bodycams being a tool for the people. They're just weaponized against the populace.
While I appreciate the intention, it seems like this could end up like California's Prop 65 cancer-warning situation as the usefulness of AI continues to improve. For example, does spellchecking / grammar count? What about checking for factual inconsistencies?
Something like: “Required notice: this report was written partially or fully with a keyboard.”
Yes, and hopefully judges will automatically reject anything that doesn't attest that zero AI was used.
The idea of someone being convicted on a report nobody could be bothered to write is so outrageous.
"does spellchecking / grammar count"
I feel like if more than 10% of your report is being written by the grammar and spelling checkers, we've got some bigger problems to worry about.
> and requires officers to legally certify that the report was checked for accuracy.
I think that part of the bill should be enough. I'm fine with police officers using AI to pull in details about locations, check for accuracy, etc. But officers must be accountable for the accuracy of what ends up in the report.
> officers must be accountable
They haven't been held accountable for anything else, why would it start now?
When I see "Utah Bill" I think of a sheriff on a horse in a cowboy hat with a six-gun riding into town to make sure that that the local lawmen aren't slacking or doing anything disrespectful :)
I suppose you haven't been here.
Well, when nobody's made a comment a new submission can drop into oblivion faster than ever.
I try to do my part even if I am a man of few words and don't have a lot to say ;)
It's worth it to get the opportunity for an additional comment or two from somebody knowledgeable, like there is now.
Sometimes I do have a comment on a controversial subject that ends up negative; even though there are lots of ups, they're outweighed by at least one downer. Doesn't bother me, plus I often see lots of agreement before it ends up there.
I didn't think this was going to be one of them :\
The bill text is only two pages, and the actual clauses are only on the second page. Take a look:
> https://le.utah.gov/Session/2025/bills/introduced/SB0180.pdf
It requires agencies to come up with generative AI policies, requires disclosures on AI-generated content even if it is only partially generated by AI, and requires AI-generated content to be reviewed by a human and certified by a specific, named person.
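Mechanically, the disclosure and certification requirements boil down to a very simple check, something like this (my own sketch of the logic, not language from the bill):

    from dataclasses import dataclass
    from typing import Optional

    DISCLAIMER = "This report contains content generated by artificial intelligence."

    @dataclass
    class Report:
        body: str
        ai_generated: bool                  # true if ANY part was AI-generated
        certified_by: Optional[str] = None  # the specific person certifying it

    def compliance_problems(report: Report) -> list[str]:
        problems = []
        if report.ai_generated and DISCLAIMER not in report.body:
            problems.append("missing AI-generated-content disclosure")
        if report.ai_generated and report.certified_by is None:
            problems.append("no named human reviewer/certifier")
        return problems

    r = Report(body="Narrative...\n" + DISCLAIMER,
               ai_generated=True, certified_by="Officer J. Smith")
    assert compliance_problems(r) == []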
> This policy would mandate that police reports created in whole or in part by generative AI have a disclaimer that the report contains content generated by AI and requires officers to legally certify that the report was checked for accuracy.
I have a friend in law enforcement who was part of an early trial group for Axon Draft One (which is called out specifically in the article) and has been using the product for maybe a year now (if not longer). I want to point out that Draft One literally already does this. Here's a video from last year with the PM of the product talking about exactly that (https://www.youtube.com/watch?v=QRMw5RjNjO0&t=290s). I also believe that if a report includes content that was AI generated it is marked as such, but I'm less certain on that part.
I fully support making these things par for the course to ensure that any competitors entering the space have to follow the same pattern, as well as ensuring that when this is brought up in court it's clear to all sides that AI was used. That said, AFAIK, Draft One is really the only product doing this right now and already does what the bill requires, so this bill won't really change anything in the present day other than ensuring that a standard is set.
What people don't realize, though, is that when submitting a report, there are almost always multiple levels of approval needed for each report from a supervisor and a records clerk, and regardless of whether the report was AI-generated or written completely by the officer, it's court admissible and the officer needs to be able to testify to what is in the report. If there are hallucinations in the report and that's called out in court, it's not like people are just going to go "oopsie" and throw up their hands. A defense attorney will absolutely use that to their advantage, and it could very likely lead to the person who is charged going free. Again, I realize that the hard part for the defense is finding and proving that those hallucinations are actually hallucinations, but still, all parties involved have a vested interest in keeping hallucinations as close to 0 as possible.
Maybe I'm a little too close to this, but it seems a little disingenuous that the EFF article doesn't mention that the main product on the market that would be regulated by this bill already complies with it, since that fact paints the story and Axon in a somewhat different light.