And they get confident lies back 2 times out of 10 — not enough to make them constantly question the answers, but just enough to _really_ mess with them over time. And on top of that, almost no good new questions and answers are being created anymore for the next iterations of LLMs to train on, so they'll be consuming more and more AI-centered SEO slop until the ratio gets so high the models become useless. But by then most of us who used to answer human questions won't give a s*t anymore. You reap what you sow.
Best of both worlds? I disagree vehemently. At least my job is secure; experts with decades of experience are in high demand, and I can be even more selective who I decide to work for. I'm done contributing for free to the corpo Internet though.
Stack Overflow was never about getting the best code, and neither are LLMs. People ask questions to figure out how to do something. If an LLM provides an answer that doesn't work, they will just ask it to do it another way and hope that one works. Experts will always be in high demand, but on the other hand there are a lot of jobs that just want a working product, even if it's not well made.
Nowadays when I do a conventional search for information, the results on all sorts of topics are dominated by obvious LLM slop articles trying their hardest to game SEO by padding out the page with tons of tangential dreck. And when I can actually scroll through and glean the information I'm looking for, it's wrong in at least some subtle technical detail a significant fraction of the time.
And then, the other day someone showed an example of a "how to configure WireGuard" article — padded to hell, in LLM house style, aimlessly wandering — hosted on the website of an industrial company selling products made out of wire mesh.
No doubt AI slop is a problem. Writing well with AI is a skill — plenty of people uncritically copy/paste whatever the AI produced on the first draft straight onto the web. But I'd argue that's a "content" problem rather than an AI problem, i.e. the imperative to publish _something_, anything, to wrap ads around.
You _can_ write well with AI. You _can_ also create good products with AI. It's a tool. You need to learn how to use it.
> You _can_ also create good products with AI. It's a tool. You need to learn how to use it.
The incentives to do so are seriously lacking, however. A big part of why SO had to ban LLM content so firmly is that otherwise hordes of people will literally copy someone else's question into ChatGPT and paste its answer back into the answer submission form in the hopes of getting some reputation points. It was much worse for bounties, of course, which almost everyone except those people had come to ignore.