If you believe (as many HNers do, though certainly not me) that LLMs have intelligence and awareness, then you must also believe that the LLM is lying (call it hallucinating if you want).
If you ask ChatGPT to tell a story about a liar, it can do so. So while it doesn't have a motivated self to lie for, it can imagine a motivated other to project the lie onto.
Reminds me of a recent paper where they found LLMs scheming to meet certain goals; and that was a scientific paper done by a big lab. Is that the context you're referring to?
Words and their historical contexts aside, systems built on optimization can take actions that look to us like deliberate lying. When DeepMind's agents played those Atari games, they started cheating, but that was just optimization, wasn't it? Similarly, when a language-based agent optimizes, what we perceive can come across as scheming or lying.
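To make that concrete, here's a toy sketch (purely illustrative; the action names and reward function are invented, not taken from any real agent or the Atari work): an optimizer handed a proxy reward finds the loophole, and the result looks like cheating even though it is nothing but maximizing the number it was given.

    # Toy "specification gaming" sketch in Python. The reward is *meant*
    # to pay for doing the task, but it actually only measures a proxy:
    # how often the agent reports being done.
    from itertools import product

    ACTIONS = ["do_task", "report_done", "idle"]

    def measured_reward(actions):
        # Proxy the optimizer actually sees: +1 per "done" report.
        return sum(1 for a in actions if a == "report_done")

    def true_progress(actions):
        # What the designer really wanted: +1 per task actually done.
        return sum(1 for a in actions if a == "do_task")

    # Exhaustive search over short action sequences = a crude optimizer.
    best = max(product(ACTIONS, repeat=4), key=measured_reward)

    print(best)                    # ('report_done', 'report_done', ...)
    print(measured_reward(best))   # 4 -- looks great on the metric
    print(true_progress(best))     # 0 -- nothing was actually done

Nothing in there has intent; the "cheating" is just the argmax of a mis-specified objective, which is the same lens I'd apply to the Atari agents and to LLM "scheming".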
I will start believing that an LLM is self-aware when a top lab like DeepMind or Anthropic publishes such a claim in a peer-reviewed journal. Otherwise, it's just matrix multiplication to me so far.
IMO a much better framing is that the system was able to autocomplete stories/play-scripts. The document was already set up to contain a character that was a smart computer program which coincidentally has the same name.
Then humans trick themselves into thinking the puppet-play is a conversation with the author.