
An organisational administrator can stop their team from using Copilot and tell them not to use other tools. If the officer had used such tools after that, then they should indeed be disciplined.


A gen-AI tool doesn't spontaneously open up and say "Hey, Maccabi Tel Aviv fans are hooligans". The intent usually comes from the user. It's quite possible that the officer prompted the AI from within Word with something like "Help me write a reason we should ban these fans".


> stop their team using Copilot

> If the officer had used tools after

They didn't use AI tools. They did a Google search and assumed the results didn't originate from an AI tool.

The lesson from the article is that even if you don't use AI tools, AI content may still creep into your investigation.


They specifically used Copilot according to the article.


It doesn't say they used Copilot. It says they used output *from* Copilot.

> his force used fictional output from Microsoft Copilot

That doesn't mean they used Copilot; it only means that the content originated from Copilot. They apparently got it from a Google search result:

> officers had found this material through a Google search

And apparently that source either used Copilot itself, or the source of their source did, and so on.



