The people talking about semantics in the comment section seem to completely ignore the positive correlation between an LLM's accuracy and its stated confidence. This is called calibration, and this "old" blog post from a year ago already demonstrated it: LLMs can know what they know: https://openai.com/index/introducing-simpleqa/
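To make "calibration" concrete, here's a minimal sketch of how you'd check it: bin a model's answers by stated confidence and compare each bin's average confidence to its actual accuracy. The data below is made up for illustration, not taken from the SimpleQA benchmark.

```python
# Hypothetical (confidence, was_correct) pairs from an LLM's answers.
# These numbers are invented for illustration only.
answers = [
    (0.95, True), (0.90, True), (0.85, True), (0.80, False),
    (0.70, True), (0.60, False), (0.55, True), (0.40, False),
    (0.30, False), (0.20, False),
]

# Group answers into 5 confidence bins: [0, 0.2), [0.2, 0.4), ...
bins = {}
for conf, correct in answers:
    b = int(conf * 10) // 2
    bins.setdefault(b, []).append((conf, correct))

# A well-calibrated model has avg confidence ≈ accuracy within each bin.
for b in sorted(bins):
    group = bins[b]
    avg_conf = sum(c for c, _ in group) / len(group)
    accuracy = sum(ok for _, ok in group) / len(group)
    print(f"bin {b}: avg confidence {avg_conf:.2f}, accuracy {accuracy:.2f}")
```

If the model were guessing randomly about its own correctness, accuracy would be flat across bins; the positive correlation the post reports means the high-confidence bins really are more accurate.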