
This is only a problem for someone terminally online. The vast majority of people talk to their friends and coworkers in person.


That was the solution that came to my mind too, but it doesn't work either.

Even if you're never online and only talk to people in person... over time those people will be increasingly informed by LLM-generated pseudo-knowledge. We aren't just training the AIs. They're training us back.

If you want to live in a society where the people you interact with have brains mostly free of AI-generated pollution, then I'm sorry but that world isn't going to be around much longer. We are entering the London fog era of the Information Age.


I don't trust my friends for medical advice. Some of them trust me for plant advice, and they really probably shouldn't. I am very stove-piped.

We have two and a half generations of people right now most of whom think "I did the research" means "I did half as much reading as the average C student does for a term paper, and all of that reading was in Google."

And Alphabet fiddles while Google burns. This is going to end in chaos.


> "I did the research" means "I did half as much reading as the average C student does for a term paper"

What's the alternative? No one who says that is saying they did original research, they're saying they searched around and got what they believe to be at least a consensus among the body of experts they trust.

Like I agree the problem sucks but I have no idea what a solution looks like. For fields someone is totally unfamiliar with, they simultaneously lack the knowledge to evaluate the truth of a claim and the knowledge to evaluate whether someone is qualified and trustworthy enough to believe. It's turtles all the way down -- especially because, on topics of any interest, you can find as many experts as you care to, of whatever qualification you demand, making all sorts of contradictory claims.


> This is only a problem for someone terminally online.

Is it? Even those whose social life is entirely IRL still have to interact, increasingly, with various businesses, banks, healthcare providers, the government, and often more distant colleagues through online services. Do I want these to go through LLM chatbots? No. Can I ensure that I'm speaking to an actual human if the communication is text based? Not really.


This is a problem for anyone who is not actively vigilant about the information they consume. A family member (who I would not describe as "terminally online") came to me today in a panic talking about how some major event had just occurred and how social order was beginning to collapse. I quickly glanced at the headlines on a few major news outlets and realized that they just saw some incendiary content designed to elicit that reaction. I calmed them down and walked them through a process they could use to evaluate information like that in the future, and they were a little embarrassed.

The concern isn't necessarily for you. It's for the large swaths of people who are less equipped to filter through noise like this.
