
If you believe (as many HNers do, although certainly not me) that LLMs have intelligence and awareness then you necessarily must also believe that the LLM is lying (call it hallucinating if you want).


Intelligence is a prerequisite for lying, but its foundation is morality and agency.

To lie, you have to know that you are not telling the truth, and arguably have to be able to be held accountable for that action.

It's easy to babble a series of untruths, but lying requires intention, which requires an entity that can be recognized as having intentions.

I'd argue that ChatGPT's lack of a cohesive self prevents it from lying, no matter how many untruths it creates.


If you ask ChatGPT to tell a story about a liar, it is able to do so. So while it doesn't have a motivated self to lie for, it can imagine a motivated other to project the lie onto.


Reminds me of a recent paper where they found LLMs scheming to meet certain goals; and that was a scientific paper done by a big lab. Is that the context you're referring to?

Words and their historical contexts aside, systems based on optimization can take actions that look like lying to us. When DeepMind's agents played those Atari games, they started cheating, but that was just optimization, wasn't it? Similarly, when a language-based agent optimizes, what we perceive may look like scheming/lying.

I will start believing that an LLM is self-aware when a top lab like DeepMind or Anthropic publishes such a claim in a peer-reviewed journal. Otherwise, it's just matrix multiplication to me so far.


> [paper claimed] LLMs are scheming

IMO a much better framing is that the system was able to autocomplete stories/play-scripts. The document was already set up to contain a character that was a smart computer program with, coincidentally, the same name.

Then humans trick themselves into thinking the puppet-play is a conversation with the author.



