Try taking any of the LLMs we have and making it learn (adjust its weights) from every interaction. You'll see it quickly devolves into meaninglessness. And yet we know for sure that this kind of continuous adjustment is exactly what happens in our nervous system.
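To make that concrete, here is a minimal sketch of that experiment, assuming a small Hugging Face causal LM (GPT-2 purely as a stand-in) and a toy interaction loop; the model name, optimizer, learning rate, and example turns are all illustrative assumptions, not anything specific from this thread:

```python
# Naive online learning sketch: apply a gradient step after every "interaction".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in for "any of the LLMs we have"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # deliberately crude

interactions = [
    "User: What's the capital of France?\nAssistant: Paris.",
    "User: Write me a haiku about rain.\nAssistant: Soft drops on the roof...",
    # ...imagine thousands more turns, each one immediately trained on
]

model.train()
for text in interactions:
    inputs = tokenizer(text, return_tensors="pt")
    # Standard causal-LM loss: predict each token from the ones before it.
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    # With no replay buffer, regularization, or curriculum, repeated updates
    # like this tend to overfit to recent turns and erode earlier capabilities
    # (catastrophic forgetting), which is the degradation described above.
```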
However, this doesn't mean that an LLM can't produce output equal to, or even better than, what a human would in certain very useful circumstances. It just means it works fundamentally differently on the inside.
Maybe this is just a conversation about what "fundamentally differently" means, then.
Obviously the brain isn't running an exact implementation of the attention paper, and your point that the brain is more malleable than our current LLMs is a good one, but that only shows they aren't identical. I fully expect future architectures to be more malleable; if you think such hypothetical future architectures will be fundamentally different from the current ones, then we agree.