LLMs have benefits, but it's a real negative that you can no longer be sure if you are communicating with a human on the internet or reading something which was written by a human.
It seems like public keys and web of trust are the future in terms of knowing that there is a human on the other end of the internet.
In some sense that future has already arrived, though not in the way you meant: I was once accused of using ChatGPT to write a Reddit comment that I genuinely wrote myself, without AI assistance.
(I think the person disliked the substance of what I was arguing, the length of my comment, or both.)
It's becoming rather common. I saw it on LinkedIn a couple of days ago: someone posted an image of a job application, accusing it of being written with ChatGPT when it clearly wasn't. Incredibly ironic. People who couldn't spot a "multifaceted" if their lives depended on it are making these wild accusations.
I've been accused of the same, but I just like writing. I wasn't sure how to respond to those allegations other than to say I'm not using ChatGPT to write my comments. Consider it an achievement unlocked.
I see those as the lazy ones, the tip of the iceberg. A non-zero number will even be intentionally lazy (analogous to the Nigerian Prince theory, where obvious tells filter for the most credulous targets), or lazy in order to harvest feedback from the people who flag them.
I think the sensible assumption is that there is a 'rest of the iceberg' growing rapidly below the surface, and that the horse has truly bolted.
I currently suspect that ~1-5% of the 'people' I interact with online are LLMs.
I suspect that a few Redditors and Facebookers are up to ~10-25% without realising it, caught in 'AI social media eddies'. Older generations seem especially susceptible.
Imagine how much better an article written by one of the big LLMs would be if it were stylistically trained exclusively on an archive of the past 30 years of New York Times articles.
I would expect the powers that be at the New York Times are exploring this very option as we speak.
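For what it's worth, "stylistically trained" would presumably mean a supervised fine-tune over the archive. A rough sketch of the data prep, assuming a chat-format fine-tuning pipeline; the file names, JSON fields, and prompt wording here are all hypothetical:

```python
import json

# Hypothetical input: archive.jsonl, one article per line, shaped like
# {"headline": ..., "body": ...}. Output: chat-format training examples
# that pair a topic prompt with the house-style article text.
with open("archive.jsonl") as src, open("style_tuning.jsonl", "w") as dst:
    for line in src:
        article = json.loads(line)
        example = {
            "messages": [
                {"role": "system",
                 "content": "Write in the house style of the archive."},
                {"role": "user",
                 "content": "Write an article on: " + article["headline"]},
                {"role": "assistant", "content": article["body"]},
            ]
        }
        dst.write(json.dumps(example) + "\n")
```

The resulting file is what you'd hand to a fine-tuning job; the base model keeps its general knowledge and mostly absorbs the register.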
But what about “assisted by” AI? Plenty of people use LLMs to enhance their writing abilities, like, say, ‘90s-era grammar & spell check. Plenty of AI users are sophisticated enough to understand that dumping pure AI-generated content is a bad idea. And what's wrong with AI-enhanced speech?
Worse, OpenAI LLM pathologies are creeping into text written by actual humans, because people are seeing so much of its garbage output that they're adopting its habits.
Turns out there is more than one kind of learning machine in play online, and each can pick up the bad behavior of the other.
That's nothing new. Actual humans were writing businessy LinkedIn posts this way long before GPT-3 came out. I'd even say such posts are more awful than what GPT produces by default.
Public keys and web of trust as a solution to content validity seems to be a strangely common misconception of what "authentication" actually means.
All you would get, at the very most, is "this was vouched for by a human", not an actual guarantee of humanity.
Once the WoT grows past a certain point, it will be of dubious value. Furthermore, people would happily "sell their souls" by signing off on any old bullshit for money, to say nothing of doing it for free when they think the violence done to the truth is ideologically aligned. Frankly, any attempt at a cure may be undesirable for being worse than the disease.
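To make the "vouched for by a human" point concrete: all a signature check ever proves is that the holder of a particular key signed some bytes. A minimal sketch with the Python cryptography package (the message is obviously illustrative):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# A key pair; nothing about its generation says "human".
signing_key = ed25519.Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# The message could have been typed by a person or emitted by an LLM;
# the signature is oblivious to that distinction.
message = b"I am definitely a human."
signature = signing_key.sign(message)

try:
    verify_key.verify(signature, message)
    print("Verified: this key signed these bytes. Nothing more.")
except InvalidSignature:
    print("Signature invalid.")
```

A web of trust just stacks more signatures on top of the key itself; at every layer the proof is about keys, never about who, or what, is doing the typing.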
A side-rant on another naming thing: People are already promoting traditional distributed databases as "private blockchains", which is a bit like announcing "Radium Toothpaste now without Isotopes."
They didn't solve the difficult trade-offs of New Thing; they're just relaxing their requirements so that they don't have to use it at all (reasonable) and don't want to admit it (annoying).
> a real negative that you can no longer be sure if you are communicating with a human on the internet or reading something which was written by a human
why does the pedigree matter if the content stands on its own as valid content?
Do you care whether your call is answered by someone in India or the US, if you're given a sufficient answer or support on the call?
If the AI can update its beliefs based on a conversation, then yes it would still be a win.
If not, you're talking to a very articulate thick-headed person who can argue you into a corner by deploying all manner of argumentative/persuasive patterns, but may never try to reach across the table for a compromise.
I've certainly "won" some arguments with ChatGPT and Claude, even when I specifically tried to instruct them to never yield (to make it interesting). Even most of the "rules" they need to follow from their system level prompt can often be worked around with enough persuasion.
If anything, I'd go so far as to say that LLMs are innately more vulnerable to persuasion than humans are; technically they're just a complex text-completion algorithm at the end of the day, after all. Even with the strictest system prompt, the best they can do when cornered is bend the knee, devolve into repeating themselves verbatim, or, in the worst case, just start spewing gibberish.
In reality, stubborn, thick-headed folks who refuse to compromise are pretty much the norm across social media in my experience, so even if LLMs really are capable of what you're suggesting here, I don't believe it would make much of a difference to the status quo.
I've won arguments against ChatGPT where it told me it would remember the info for next time, but it ultimately reset to the model's original state unless I repeated the conversation.
Yeah, I think it's pretty much the same as when LLMs use polite language in their responses. LLMs are just text completion algorithms puppeting a chatbot persona after all, meaning the persona's personal details are always going to be hallucinated (though in the case of ChatGPT, the persona is recursively described as an LLM named ChatGPT, so it gets a little weird to think about). If the system prompt describes a polite and helpful chatbot, then so shall it be for all text that follows. Not unlike if you were hypothetically able to make a live 60fps prompt-based image generator, automatically instructed from key inputs to simulate the frames of a popular video game, and somehow ended up with a highly convincing simulation of the game!
While it might present a save menu similar to the real game, that doesn't mean the menu itself actually functions. As for LLMs, they are ultimately only able to remember what's been pre-trained into their model plus whatever is discussed within their context window.
A hypothetical LLM-based online bot/shill sent out into social media would likely include the entire discussion in its context window for each post it generates, though; otherwise it wouldn't really be possible for it to maintain a coherent conversation.
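That's exactly how the standard chat-API pattern works anyway: the model itself is stateless, and the client re-sends the whole thread on every turn. A minimal sketch, assuming an OpenAI-style Python client; the model name and system prompt are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The bot's entire "memory" is this list; the model retains nothing
# between calls. Every reply requires replaying the full thread.
history = [{"role": "system", "content": "You are a helpful forum participant."}]

def reply(user_post: str) -> str:
    history.append({"role": "user", "content": user_post})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,     # the entire discussion so far, every time
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

Once the thread outgrows the context window, the oldest posts have to be dropped and the bot's coherence degrades, which lines up with the "reset" behavior described above.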
> why does the pedigree matter if the content stands on its own as valid content?
It depends on the content. If it's just factual stuff, it doesn't matter so much. If it involves human stuff (emotions, opinions, art, etc.) then the difference is incredibly important.
The way you're intentionally talking past OP's point here is annoying.
I don't think anyone's worried about AI replacing call center employees or support centers. Sure, the service might be lackluster, but the impact of something like that is quite narrow in scope and doesn't affect people's day-to-day lives. OP is (I imagine) talking about pieces of writing, or exchanges of information, that are normally produced by experts or armchair aficionados and gobbled up by media and the public alike.
An LLM can't learn subject matter. It only learns language convincingly enough to look like it's learned subject matter. So when a person is handed a piece of content written by AI, say a political analysis or a scientific paper, they not only have to question whether the content makes sense, but whether it's fully intelligible to anyone at all. If there is no expert who actually holds these opinions, then a user who fails to realize that may give weight to an idea they don't completely grasp, assuming smarter people than they do understand it.
Think of the numerous times people have ranted on Reddit, Twitter, or whatever platform about issues they feel strongly about. They've put a lot of passion into that, and a lot of the time they have a decent grasp of the space they're critiquing. People band together behind a comment like that not because they've all come to the same conclusions, but because that person drew on shared experiences and reached conclusions from them in a thoughtful way that sounds about right. If an LLM does that, you can't even be assured that those conclusions make any sense whatsoever, and people could band behind any old nonsense so long as their issue is supposedly accounted for. One can generate endless streams of fake rallying cries to support themes outlined broadly by whoever wrote the initial prompt.
Now, maybe OP didn't mean all that; I'm certainly doing a bit of hypothesizing of my own. But clearly this is about a little more than just call centers.
> It seems like public keys and web of trust are the future in terms of knowing that there is a human on the other end of the internet.
https://en.wikipedia.org/wiki/Key_signing_party
But I don't know what happens when a cloud of LLMs gets its own public keys signed by some humans.