
> The pretraining is the knowledge, not the intelligence.

I thought the knowledge was the training set, and the intelligence was an emergent side effect of learning to reproduce that knowledge without resorting to rote memorisation?



I'd say that it takes intelligence to encode knowledge, and the more knowledge you have, the more intelligently you can encode further knowledge, in a virtuous cycle. But once you have a data set of knowledge, nothing emerges and there are no side effects. It just sits there doing nothing. The intelligence is in the algorithms that access that encoded knowledge to produce something else.


The data set is flawed and noisy, and its pieces are disconnected. It takes intelligence to correct the flaws and connect the pieces parsimoniously.


It takes knowledge to even know the data is flawed, noisy, and disconnected. There's no reason to "correct" anything unless you have knowledge that applying previously "understood" data has in fact produced deficient results in some application.

That's reinforcement learning: an algorithm that requires accurate knowledge acquisition to be effective.
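
To illustrate that dependence, here's a minimal sketch with a made-up 3-armed bandit (the reward means, noise level, and exploration rate are all invented for illustration): the agent's value estimates are its acquired knowledge, and its greedy choices are only as effective as those estimates are accurate.

    import random

    true_means = [0.2, 0.5, 0.8]  # unknown to the agent
    estimates = [0.0] * 3         # the agent's acquired "knowledge"
    counts = [0] * 3

    for t in range(5000):
        # explore occasionally; otherwise exploit current knowledge
        if random.random() < 0.1:
            arm = random.randrange(3)
        else:
            arm = max(range(3), key=lambda a: estimates[a])
        reward = random.gauss(true_means[arm], 1.0)  # noisy feedback
        counts[arm] += 1
        # incremental mean: estimates converge on the true means
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    print(estimates)  # approaches [0.2, 0.5, 0.8]

Corrupt the estimates (inaccurate knowledge) and the policy degrades accordingly; the algorithm itself doesn't change.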


Every statistical machine learning algorithm, including RL, deals with noisy data. The process of fitting aims to remove the sampling noise, revealing the population distribution and thereby compressing the data into a model.

The argument being advanced is that intelligence is the proposal of more parsimonious models, aka compression.
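
To make that concrete, here's a toy sketch (the sine signal, the noise level, and BIC as the parsimony score are all my own choices, not anything from upthread): fit polynomials of increasing degree to noisy samples and let a complexity penalty pick the winner.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    # population signal plus sampling noise
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

    # Score each degree by BIC = n*log(RSS/n) + k*log(n), which charges
    # every extra parameter for the bits it costs to describe.
    n = x.size
    best = None
    for deg in range(1, 10):
        coeffs = np.polyfit(x, y, deg)
        rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
        bic = n * np.log(rss / n) + (deg + 1) * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, deg)

    print(best)  # a low-degree fit wins

The winning model compresses 200 noisy points into a handful of coefficients that track the population signal; a higher-degree fit would memorise the noise and pay for it in the penalty term.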


I've lost track of what we're disagreeing about.



