
I think that could very well be true; it depends on how that generality was obtained.

I would not be surprised if a multi-modal LLM (basically the current architecture) could be wired up to be as general as a cat at current parameter counts, while the spark of human creativity (AGI/ASI) still ended up being far away.

But if you made a new architecture that solved the generalization problem (i.e., baking in a world model, a self-symbol, etc.) and it only reached cat-level intelligence, then it would seem very likely that human-level intelligence would soon follow.
