LLMs aren't randomly generated though; they are shaped by training data. This means there would, in principle, be a comparable way to synthesize an equivalent assembly program from that same training data.

The difference here is that it's just more obvious how to do this in one case than the other.

My point was only that 1) neural networks are sufficient, even if real neurons have additional complexity, and 2) whatever that additional complexity is, artificial neural networks can learn to reproduce it.
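
As a minimal sketch of point (2), assuming plain numpy and using sin(x) as a hypothetical stand-in for whatever extra input/output behavior a real neuron might have, a tiny MLP trained by gradient descent can learn to reproduce a nonlinear response it was never hand-wired for:

    # Toy illustration only: sin(x) is a made-up stand-in for a "complex neuron" response.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-np.pi, np.pi, size=(256, 1))
    y = np.sin(x)  # the behavior the network should learn to reproduce

    # One hidden layer of 32 tanh units, trained with full-batch gradient descent.
    W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
    W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
    lr = 0.05
    for step in range(5000):
        h = np.tanh(x @ W1 + b1)        # hidden activations
        pred = h @ W2 + b2              # network output
        err = pred - y
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)  # backprop through tanh
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    # Final mean-squared error; typically ends up far below 0.5 (the variance of sin here).
    print("final MSE:", float((err ** 2).mean()))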



I understand that; what I am saying, though, is that the fact that they can doesn't mean that they will simply by scaling their number. It still entirely depends on how they are trained/arranged, meaning it may take a completely different way of composing/gluing neurons together to simulate any additional complexity. It's like saying a NAND gate is Turing complete, so I put 1,000,000,000 of them in series, but it's not doing anything; what gives, do I need to add a billion more?
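
To make the NAND analogy concrete, here is a toy Python sketch (hypothetical, purely to illustrate arrangement vs. count): NAND(x, x) is just NOT x, so a chain of NANDs in series only ever flips a bit back and forth, while four NANDs wired the right way compute XOR.

    def nand(a, b):
        return 1 - (a & b)

    def chain_in_series(x, n):
        # Both inputs tied together: NAND(x, x) == NOT x, so the output
        # depends only on n % 2. Adding a billion more gates changes nothing.
        for _ in range(n):
            x = nand(x, x)
        return x

    def xor_from_nands(a, b):
        # The standard four-NAND XOR: the arrangement, not the count, does the work.
        m = nand(a, b)
        return nand(nand(a, m), nand(b, m))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor_from_nands(a, b))  # XOR truth table
    print(chain_in_series(1, 1_000_000))  # still just NOT applied a million times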

Just as modeling and running a single neuron takes x transistors configured in a very specific way, it may take y neurons arranged in some very specific, currently unknown way to model something that has extra properties.

And it's not clear whether neurons are even fundamentally the right building block for reaching this higher-level construction, as opposed to some other kind of node.



