Intelligence breaks the pattern here. A simulated intelligence is intelligent, just as simulated math is math and simulated computers are computers. The point of contention shouldn't be whether LLMs are intelligences or simulated intelligences, but whether they're simulating something else.
I think a challenge with the simulated-math-is-real-math/calculator argument is that the simulation operates purely syntactically, through derivation, without meaning.
E.g. a simulation of ZF set theory cannot tell you the truth value of the Axiom of Choice, because it's independent of the ZF axioms (Gödel showed ZF can't refute it, Cohen showed ZF can't prove it, so neither AC nor its negation is derivable).
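You can see this concretely in a proof assistant (arguably the purest "simulation" of formal math we have). In Lean 4, choice is not derivable from the other axioms and has to be postulated separately as `Classical.choice`; any proof that uses it carries a visible trace. A minimal sketch (the theorem name is just illustrative):

```lean
-- Choice cannot be derived syntactically; it is an extra axiom.
-- Any theorem that invokes it is flagged by #print axioms.
theorem uses_choice (α : Type) (h : Nonempty α) : α :=
  Classical.choice h

#print axioms uses_choice
-- reports that 'uses_choice' depends on the axiom Classical.choice
```

The system can track that choice was *assumed*, but nothing inside the formalism can tell you whether it is *true*.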
But "Although originally controversial, the axiom of choice is now used without reservation by most mathematicians" [1] - I guess its truth is self-evident semantically.
So because of incompleteness, simulated math/calc will always be “missing” something.
Of course an LLM will happily say the Axiom of Choice is true (or not), but is it just parroting from the dataset or hallucinating?