
Wow - can we coin "Slopbrain" for people who are so far gone into AI eventualism that they can no longer function? Like "cooked", but "slopped" or something. Good grief lol. Talk about getting lost in the sauce...


WSJ has been writing increasingly about "AI Psychosis" (here's their most recent piece [0]).

I'm increasingly seeing that this is the real threat of AI. I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new. While not as dramatic, the normalization of "AI as therapist" is equally concerning. I know tons of people who rely on LLMs to guide them through difficult family decisions, career decisions, etc. on an almost daily basis. If I'm honest, I've had times when I've leaned into this too much myself. I've also had times when AI starts telling me how clever I am, but thankfully a lifetime of low self-worth raises warning flags in my brain when I hear this stuff! For most people, there is real temptation to buy into the praise.

Seeing Karpathy claim he can't keep up was shocking. It also immediately raises a question for anyone with a clear head: "Wait, if even Karpathy cannot use these tools effectively... just what is so useful about AI?" Isn't the entire point of AI that I can merely describe my problem and have a solution in a fraction of the time?

The fact that so many true believers in AI always seem to be just a few more tricks away from really unleashing this power starts to make it feel very much like magical thinking on a huge scale.

The real danger of AI is that we're entering into an era of mass hallucination across multiple fields and areas of human activity.

[0] https://www.wsj.com/tech/ai/ai-chatbot-psychosis-link-1abf9d...


> I've personally known people who have started to strain relationships with friends and family because they sincerely believe they are evolving into something new.

Cryptoboys did it first, please recognize their innovation ty


That's NOT AI psychosis, which is real, and which I've seen close-up.

AI psychosis is getting lost in the sauce and becoming too intimate with your ChatGPT instance, or believing it's something it's not.

Skepticism, or a fear of being outside the core loop, is the exact opposite, and that's what Karpathy is talking about here. If anything, this kind of post is an indicator that you're absolutely NOT in AI psychosis.


"the core loop"? What is this?


Cyberpunk was right!


I would really like to hear more about these acquaintances who think they are evolving.


WSJ is Fox News Platinum; I wouldn't overthink it


I feel Karpathy is smart enough to deserve a less dismissive response than this.


A mix of "too clever by half" and "never meet your heroes".


Why do you feel that way?


You think we should appeal to authority rather than address the ideas on their own merits?


How is saying the author has “slopbrain” “addressing the idea on its own merits”? It’s just name calling.


They aren't addressing my comment (which is obviously an overreaction to the tweet); they're asking you why we should appeal to authority rather than evaluate whether Karpathy is completely overreacting and in way too deep.


The intent of my comment was to state that you should write something more substantive than dismissing Karpathy as “slopbrain”. I wasn’t appealing to authority by saying that he was correct — just that he deserves more than name calling in a response.


Evidently, with "LLM/AI psychosis" entering the mainstream zeitgeist, "slopbrain" isn't too far off.


Now you're just saying "AI psychosis exists" (true) and then saying Karpathy has it. That is, again, essentially name calling, like saying someone is insane rather than addressing their points.

If you really think Karpathy is psychotic you should explain why, but I don't think anything in the tweet suggests that. My read of his tweet is that there's a lot of churn and a lot of new concepts in the software engineering industry, and that doesn't seem like a very psychotic thing to say.


I call it being "oneshot" by the AI.


Twitter folks call this LLM or AI Psychosis.


We could call it "Hacker News syndrome"


Slopbrain is interesting because Karpathy's fallacious argumentation mirrors the glib argument of an LLM/AI; it's cognitively recursive, one feeding the other in a self-selecting manner.


Slippery slop?


[flagged]


This is what I keep hearing: "You just need something more agentic", "if you had the context length you could've fixed that", etc. Yeah, sure. I'll believe it when I see it. For me it's parsing 3000-page manuals for relevant data. I can do it fairly competently from experience, but I see a lot of people not familiar with them struggle to extract the info they need, and in my experience AIs just cannot hold all that context.
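(For what it's worth, the workaround people usually suggest for this is retrieving a few relevant chunks instead of stuffing the whole manual into context. A rough Python sketch of that idea; the file name, query, chunk size, and keyword scoring are all made up for illustration, and a real setup would use embeddings or a proper index:)

    # Rough sketch of the "retrieve relevant chunks" workaround.
    # Purely illustrative: file name, query, and scoring are hypothetical.

    def chunk(text: str, size: int = 2000, overlap: int = 200) -> list[str]:
        """Split a long document into overlapping character windows."""
        step = size - overlap
        return [text[i:i + size] for i in range(0, len(text), step)]

    def score(chunk_text: str, query: str) -> int:
        """Naive relevance: count how many query words appear in the chunk."""
        lowered = chunk_text.lower()
        return sum(1 for w in set(query.lower().split()) if w in lowered)

    if __name__ == "__main__":
        with open("manual.txt", encoding="utf-8") as f:  # hypothetical manual dump
            manual = f.read()

        query = "maximum rated operating temperature"  # hypothetical question
        ranked = sorted(chunk(manual), key=lambda c: score(c, query), reverse=True)

        # Only the top few chunks get handed to the model, not all 3000 pages.
        for c in ranked[:3]:
            print(c[:200], "...")

Whether that actually beats someone who already knows the manual is, of course, the open question.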



