
What you are saying is that AI is like a stuffed toy animal.

Next week it is exactly like it is this week.



Yes, good analogy. LLMs are pretrained and then can't learn anything new. We can make it appear that they do with RAG and other smoke and mirrors. Those smoke and mirrors are useful as a tool, but the AI doesn't learn.
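To make the "smoke and mirrors" concrete: RAG never touches the weights, it just retrieves text and stuffs it into the prompt at query time. A minimal sketch of the idea in Python (the keyword-overlap scoring is a toy stand-in for a real embedding index, and the corpus and prompt format are invented for illustration):

    # Toy illustration: the model's weights never change; "learning" is
    # faked by assembling retrieved text into the prompt at query time.

    def score(query, doc):
        # Stand-in for embedding similarity: crude word overlap.
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d)

    def retrieve(query, corpus, k=2):
        # Pick the k documents that best match the query.
        return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

    def build_prompt(query, corpus):
        # The frozen LLM only ever sees this assembled context window;
        # nothing is written back into its weights afterwards.
        context = "\n".join(retrieve(query, corpus))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    corpus = [
        "The office wifi password was changed on Monday.",
        "Quarterly report is due at the end of March.",
        "The new wifi password is taped under the router.",
    ]
    print(build_prompt("what is the wifi password?", corpus))

The point of the toy: you can swap the corpus every day and the answers change, but the model itself is exactly the same next week as it is this week.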


Realistically, that’s not far from how a human brain works - we rely on a deep corpus of pre-learned patterns (largely set in early childhood) and continually refresh it with new inputs held in short-term memory, reinforced through repetition. If LLMs reach the point where they can integrate their "short-term" context (RAG, etc.) into updated long-term weights more regularly, they’d be functionally simulating that aspect of human cognition.

(Disclaimer: I am not a neuroscientist. The model is massively simplified. But I believe the broad strokes are accurate.)
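If that consolidation step ever becomes routine, it would look roughly like a replay buffer being folded into the weights between sessions. A toy sketch of the idea (NumPy; the linear model stands in for a network, and the buffer, learning rate, and consolidation cadence are all invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # "Long-term memory": fixed after pretraining, like today's LLM weights.
    weights = rng.normal(size=(4,))

    # "Short-term memory": recent (input, target) experiences, i.e. the
    # stuff RAG currently keeps outside the model.
    short_term_buffer = []

    def predict(x):
        return weights @ x

    def observe(x, target):
        # New experience lands in the buffer, not in the weights.
        short_term_buffer.append((x, target))

    def consolidate(lr=0.05, passes=20):
        # Periodic "sleep": replay the buffer and fold it into the weights,
        # the step current LLMs don't take between pretraining runs.
        global weights
        for _ in range(passes):
            for x, target in short_term_buffer:
                error = predict(x) - target
                weights -= lr * error * x   # gradient step on squared error
        short_term_buffer.clear()

    # A day's worth of new experiences...
    for _ in range(10):
        x = rng.normal(size=4)
        observe(x, target=x.sum())        # toy task: learn to sum the inputs

    before = predict(np.ones(4))
    consolidate()
    after = predict(np.ones(4))
    print(f"before consolidation: {before:.2f}, after: {after:.2f} (target 4.0)")

Today's LLMs stop after the observe() step; the consolidate() step is the part that's missing.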


As a human I can learn to play my first instrument (piano) in my late 30s. I'm also learning Japanese, with an "alphabet" and structure entirely unlike any other language I know. I got my gun license last year and am doing competitive one-handed pistol shooting.

These things in isolation might seem like "RAG+", but in total they've reshaped a lot of my thought patterns and physical abilities as well. Piano has improved my motor function, pistol shooting has vastly reduced the time it takes me to focus and improved my breathing control, and Japanese has let me think about the world, and how to describe it, in entirely new ways.

I think it's easy to fall into the trap of undervaluing our brain and body until we actually use them fully.



