If there is a difference, and LLMs can do one but not the other...
>By that standard (and it is a good standard), none of these "AI" things are doing any thinking
>"Does it generalize past the training data" has been a pre-registered goalpost since before the attention transformer architecture came on the scene.
These LLMs just exhibited agency.
Swallow your pride.