We have the same generative ability, but we gradually learn the correct answers (not necessarily truths), so we don't go around generating incorrect ones (not necessarily falsehoods). We also learn to recognize when we have no answer at all, so we don't bother generating one.
LLMs are just generation. Whatever pattern they have embedded, they will happily extrapolate from it and add wrong information rather than use it as a meta-model.