In some senses it already has, though not always how you meant: I was once accused of using ChatGPT to write a Reddit comment that I genuinely wrote myself without AI assistance.
(I think the person disliked the substance of what I was arguing, the length of my comment, or both.)
It's becoming rather common. I saw it on LinkedIn a couple of days ago: someone posted an image of a job application and accused it of using ChatGPT when it clearly wasn't. Incredibly ironic. People who couldn't tell a "multifaceted" apart if their lives depended on it are making these wild accusations.
I've been accused of the same, but I just like writing. I wasn't sure how to respond to those allegations other than to say I wasn't using ChatGPT to write my comment. Consider it an achievement unlocked.
I see those as the lazy ones, the tip of the iceberg. A non-zero number will even be intentionally lazy (analogous to the Nigerian Prince theory), or designed to harvest feedback from the people who flag them.
I think the sensible assumption is that there is a 'rest of the iceberg' growing rapidly below the surface, and that the horse has truly bolted.
I currently suspect that ~1-5% of the 'people' I interact with online are LLMs.
I suspect that a few Redditors and Facebookers are up to around 10-25% without realising it, caught in 'AI social media eddies'. Older generations are especially susceptible.
Imagine how much better an article written by one of the big LLMs would be if it were stylistically trained exclusively on an archive of the past 30 years of New York Times articles.
I would expect the powers that be at the New York Times are exploring this very option as we speak.
But what about “assisted by” AI? Plenty of people use LLMs to enhance their writing abilities, like, say, ‘90s era grammar & spell check. Plenty of AI users are sophisticated enough to understand that dumping pure AI-gen content is a bad idea. And what’s wrong with AI-enhanced speech?
Worse, OpenAI LLM pathologies are creeping into text written by actual humans because people are seeing so much garbage written by it that they're adopting its behavior.
Turns out that there is more than one kind of learning machine in play online and both can pick up the bad behavior of the other.
That's nothing new. Actual humans were writing businessy LinkedIn posts this way long before GPT-3 came out. I'd say such posts are even more awful than what GPT produces by default.