Hacker News

As a human, I can often solidify ideas by writing them out and editing my writing, in a way that wouldn't really work if I could only speak them aloud one word at a time, in order.

And before we jump to "the token predictor could compensate for that…", maybe we should consider that this is the case because intelligence isn't actually something that can be modeled with strings/tokens.
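The contrast being drawn can be sketched as two loops. This is an illustrative toy, not any real model's API; all names here are made up for the sketch. An autoregressive decoder appends tokens in order and never revisits earlier output, while a writer's workflow produces a full draft and then edits any part of it:

```python
def autoregressive_generate(predict_next, prompt, max_tokens=10):
    """Emit tokens one at a time, left to right; earlier tokens are frozen."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)
        if nxt is None:  # end-of-sequence signal
            break
        tokens.append(nxt)  # nothing here can rewrite tokens[:-1]
    return tokens

def draft_and_revise(draft, revise, rounds=3):
    """A writer's loop: produce a whole draft, then edit any part of it."""
    text = draft()
    for _ in range(rounds):
        text = revise(text)  # may change, reorder, or delete earlier parts
    return text
```

The structural difference is the point: in the first loop, the only available operation is "append"; in the second, revision can reach back arbitrarily far into what was already written.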



Yann LeCun discussed why LLMs are not enough for AGI on Lex Fridman pod: https://youtu.be/5t1vTLU7s40?t=138


I really liked the simplicity of his explanation in information theory terms. Thank you!



