
>keeps ban on LLM-generated content.

How is this being enforced? It's either bots banning bots in a digital game of whack-a-mole, or humans arbitrarily trying to assess whether something has been written by an LLM or a human.



It's human judgement. Definitely not perfect, but something has to be done to prevent SO from being overrun.

There are some subjective signs that a post is LLM-generated, like being overly verbose, making unrelated assumptions, or mixing horrible and perfect grammar. Those bans are hard to justify because the false positive rate is high.

But other signs are pretty obvious. My favorite is the use of APIs that should exist but don't: passing parameters that would neatly solve the problem but have never been accepted, or importing non-existent libraries. I'm happy to flag those.
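A made-up illustration of that pattern, using the requests library (the `retry` keyword here is invented, requests.get has never accepted it; real retries go through a Session and HTTPAdapter):

    # The hallucinated pattern: a keyword argument that would neatly solve
    # the problem but has never existed in the requests API.
    # response = requests.get(url, retry=3)  # TypeError: unexpected keyword argument 'retry'

    # What a real answer has to do instead: configure retries on a Session.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=Retry(total=3, backoff_factor=0.5)))
    response = session.get("https://example.com")
    print(response.status_code)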


If you read some LLM output, you'll pick up on lots of patterns. LLM-generated content isn't terribly difficult to identify.


Yeah, I've been using ChatGPT quite intensively. And while pure LLM output is relatively easy to spot, human-edited LLM output is almost impossible to detect. Most of my message above was actually written by GPT-4 (3 prompts + some light editing).


It's not. That's why the moderators are on strike.

See: https://meta.stackexchange.com/questions/389811/moderation-s...



