AI Police Reports: Year in Review

(eff.org)

98 points | by hn_acker 4 days ago

43 comments

  • futuraperdita 2 hours ago

    What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-exact intelligence. The worst-case scenario of Utah's law is that when the disclaimer is added that the report is generated by AI, enough jurists begin to associate that with "likely more correct than not".

    • intended 2 hours ago

      Reading how AI is being approached in China, the focus is more on achieving day-to-day utility, without eviscerating youth employment.

      In contrast, the SV focus of AI has been about skynet / singularity, with a hype cycle to match.

      This is supported by the lack of clarity on actual benefits, or clear data on GenAI use. Mostly I see it as great for prototyping - going from 0 to 1, and for use cases where the operator is highly trained and capable of verifying output.

      Outside of that, you seem to be in the land of voodoo, where you are dealing with something that eerily mimics human speech, but you don't have any reliable way of finding out whether it's just BS-ing you.

      • simonjgreen an hour ago

        Do you have any links you could share to content you found especially insightful about AI use in China?

    • charcircuit 29 minutes ago

      AI is smarter than everyone already. Seriously, the breadth of knowledge the AI possesses has no human counterpart.

      • opan 16 minutes ago

        It's pretty similar to looking something up with a search engine, mashing together some top results + hallucinating a bit, isn't it? The psychological effects of the chat-like interface + the lower friction of posting in said chat again vs. reading 6 tabs and redoing your search seem to be the big killer feature. The main "new" info is often incorrect info.

        If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be about as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)
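
        Something like this rough sketch is what I have in mind (a minimal, untested sketch; DDG's html.duckduckgo.com frontend and its result__a link class are assumptions):

          # Rough sketch: pull the first page of DDG results for a query,
          # grab each result page's text, and dump everything into one file
          # to move/search through in vim or emacs.
          import requests
          from bs4 import BeautifulSoup

          query = "ai police reports"
          resp = requests.get("https://html.duckduckgo.com/html/",
                              params={"q": query},
                              headers={"User-Agent": "Mozilla/5.0"},
                              timeout=10)
          links = [a["href"] for a in
                   BeautifulSoup(resp.text, "html.parser").select("a.result__a")]

          with open("results.txt", "w", encoding="utf-8") as out:
              for url in links[:10]:
                  try:
                      page = requests.get(url, headers={"User-Agent": "Mozilla/5.0"},
                                          timeout=10)
                      text = BeautifulSoup(page.text, "html.parser").get_text(" ", strip=True)
                  except requests.RequestException:
                      continue  # skip pages that won't load or redirect oddly
                  out.write("==== " + url + " ====\n" + text + "\n\n")
          # then: vim results.txt (or emacs) and search around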

        It has no human counterpart in the same sense that humans still go to the library (or a search engine) when they don't know something, and we don't have the contents of all the books (or articles/websites) stored in our head.

      • zhoujianfu 24 minutes ago

        AI has more knowledge than everyone already; I wouldn't say smarter, though. It's like wisdom vs. intelligence in D&D (and/or life): wisdom is knowing things, intelligence is how quickly you can learn / create new things.

      • krainboltgreene 23 minutes ago

        Man, what are we supposed to do with people who think the above?

        • gambiting a few seconds ago

          I don't know, it's kinda terrifying how this line of thinking is spreading even on HN. AI as we have it now is just a turbocharged autocomplete, with really good information access. It's not smart, or dumb, or anything "human".

  • eterevsky 21 minutes ago

    I think whether any text is written with the help of AI is not the main issue. The real issue is that for texts like police reports, a human still has to take full responsibility for their contents. If we preserve this understanding, then the question of which texts are generated by AI becomes moot.

  • 0x_rs 36 minutes ago

    I recommend taking a look at this video to get an idea of the thought process (or lack thereof) law enforcement might display when provided with a number of "AI" tools, and even though this one example is closer to traditional face recognition than LLMs, the behavior seems the same. Spoiler: complete submission and deference, and in this specific case to a system that was not even their own.

    https://www.youtube.com/watch?v=B9M4F_U1eEw

  • wyldfire 3 hours ago

    > important first step in reigning in AI police reports.

    That should be 'reining in'. "Reign" is -- ironically -- what monarchs do.

    • DetectDefect 3 hours ago

      Such innocent mistakes make me smile these days because they give assurance that a real human wrote them.

      • lithocarpus 2 hours ago

        Don't worry, sufficiently advanced LLMs will learn how to put in the right amount of typos to be convincing.

        • bgbntty2 2 hours ago

          It's not certain that LLMs don't do this already—it's likely their doing this even now.

          • jondwillis 2 hours ago

            That’s —— not just —— possible— it’s —— ——— probable!!!

          • fortran77 an hour ago

            Are you an LLM that misspelled “they’re” intentionally?

            • bgbntty2 an hour ago

              That was the joke. Also the use of the "It's not; it's" structure and the em-dash.

      • cyberax 2 hours ago

        Unless it's an LLM instructed to make occasional mistakes.

  • Manheim an hour ago

    I find this article strange in its logic. If the use of AI-generated content is problematic as a principle, I can understand the conflict. Then no AI should be used to "transcribe and interpret a video" at all - period. But if the concern is accuracy in the AI "transcript", and not the support from AI as such, isn't it a good thing that the AI-generated text is deleted after the officer has processed the text and finalized their report?

    That said, I believe it is important to acknowledge the fact that human memory, experience and interpretation of "what really happened" are flawed - isn't that why the body cameras are in use in the first place? If everyone believed police officers were already able to recall the absolute truth of everything that happens in these situations, why bother with the cameras?

    Personally, I do not think it is a good idea to use AI to write full police reports based on body camera recordings. However, as a support, in the same way the video recordings are available, why not? If, in the future, AI can write accurate "body cam"-based reports, I would not have any problem with it as long as the video is still available to be checked. A full report should, in my opinion, always contain additional contextual info from the police involved and witnesses to add what the camera recordings do not necessarily reflect or contain.

    • nrhrjrjrjtntbt an hour ago

      My worry is that, at scale, AI from one vendor can introduce biases. We won't know what those biases are. But whatever they are, the same biases affect all reports.

  • avidiax 3 hours ago

    This does sound problematic, but if a police officer's report contradicts the body-worn camera or other evidence, it already undermines their credibility, whether they blame AI or not. My impression is that police don't usually face repercussions for inaccuracies or outright lying in court.

    > That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report

    The bigger issue, which the article doesn't cover, is that police officers may not carefully review the AI-generated report, and then, when appearing in court months or years later, will testify to whatever is in the report, accurate or not. So the issue is that the officer doesn't contradict inaccuracies in the report.

    • parineum 2 hours ago

      > My impression is that police don't usually face repercussions for inaccuracies or outright lying in court.

      That's because it's a very difficult thing to prove. Bad memories and even completely false memories are real things.

      • loeg an hour ago

        Sure, but other court participants are given somewhat less grace for lying under oath.

        • parineum an hour ago

          Are they?

          Perjury isn't a commonly prosecuted crime.

          • sylos an hour ago

            If an officer misremembers something about you, you go to jail. If you misremember something about the event, you also go to jail. Yeah, I guess it tracks.

          • cwmoore 19 minutes ago

            Neither is grace a common defense.

          • loeg an hour ago

            That's why I qualified it with "somewhat."

      • BrenBarn 2 hours ago

        That's why we need a greatly reduced standard of proof for officer misconduct, especially when it comes to consequences like just losing your job (as opposed to, e.g., jail time).

        • lostnground an hour ago

          While I agree that officers should be accountable, more enforcement of them will not suddenly make them good officers. Other nations train their police for years prior to putting them into the thick of it. US police spend far less time studying, and it shows in everything from de-escalation tactics to general legal understanding. If you create a pipeline to weed out bad officers, then there needs to be a pipeline producing better officers.

          • awesome_dude 10 minutes ago

            This is an outrageous lie, there were SEVEN Police Academy movies!!!

  • benatkin 2 hours ago

    The experiments with AI agents sending emails to grown-ups are good, I think – AIs are doing much more dangerous stuff, like these AI Police Reports. I don't think making a fuss over every agent-sent email is going to cause other AI incursions into our society to slow down. The Police Report writer is a non-human, partially autonomous participant, like a K9 officer. It's wishful thinking that AIs aren't going to be set loose doing jobs. The cat is out of the bag.

  • intended 2 hours ago

    > In July of this year, EFF published a two-part report on how Axon designed Draft One to defy transparency. Police upload their body-worn camera’s audio into the system, the system generates a report that the officer is expected to edit, and then the officer exports the report. But when they do that, Draft One erases the initial draft, and with it any evidence of what portions of the report were written by AI and what portions were written by an officer. That means that if an officer is caught lying on the stand – as shown by a contradiction between their courtroom testimony and their earlier police report – they could point to the contradictory parts of their report and say, “the AI wrote that.” Draft One is designed to make it hard to disprove that.

    > Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):

    “So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices.

    Policing and Hallucinations. Can’t wait to see this replicated globally.

  • throw-12-16 3 hours ago

    “Fighting back” = adding a disclaimer.

    You guys are so fucked.

    • hackyhacky 3 hours ago

      > You guys are so fucked.

      "You guys"? Everyone is fucked. This is going to be everywhere. Coming to your neighborhood, eventually.

      • Zaphoos 2 hours ago

        Not everyone lives in a 3rd world authoritarian backwater; it's time to stop that ridiculous US-centrism.

      • throw-12-16 3 hours ago

        I don't live in a police state.

        • fouc 2 hours ago

          I guess that means you don't live in the US, or in the UK, or in Australia.

        • parineum 2 hours ago

          You either don't have police reports or some amount of your country's police reports are written by AI.

          I'd be more worried that you aren't reading articles about it than if you were.

          • throw-12-16 2 hours ago

            Considering that AI can barely write in my native language I am not worried.

            There are countries on this planet that are not actively digging their own graves.

            • jondwillis 2 hours ago

              C'mon, tell us, Mr. Rammstein's throwaway, which much-superior country is it?!

              • nrhrjrjrjtntbt an hour ago

                He won't tell you. If he did, he would have to admit he lives in a police state or under martial law.

              • throw-12-16 an hour ago

                a/s/l?