Hacker News

You're claiming 20% of LLM responses are hallucinations, today?


Nowadays when I do a conventional search for information, the results on all sorts of topics are dominated by obvious LLM-slop articles trying their hardest to game SEO by padding out the page with tons of tangential dreck. When I can actually scroll through and glean the information I'm looking for, it's wrong in at least some subtle technical detail a significant fraction of the time, yes.

And then, the other day someone showed an example of a "how to configure WireGuard" article, padded to hell, in LLM house style, aimlessly wandering... hosted on the website of an industrial company selling products made out of wire mesh.


No doubt AI slop is a problem. Writing well with AI is a skill -- there are lots of people who uncritically copy/paste whatever the AI produced on the first draft straight onto the web. But I'd argue that's a "content" problem rather than an AI problem - i.e. the imperative to publish something, anything, just to wrap ads around it.

You _can_ write well with AI. You _can_ also create good products with AI. It's a tool. You need to learn how to use it.


> You _can_ also create good products with AI. It's a tool. You need to learn how to use it.

The incentives to do so are seriously lacking, however. A big part of why Stack Overflow had to ban LLM content so firmly is that otherwise hordes of people would literally copy someone else's question into ChatGPT, then copy its answer back into the answer submission form in the hopes of collecting reputation points. Bounties, of course, were even worse; they had largely been abandoned to people doing exactly that.





