Hacker News

I like that term much better, confabulation. I’ve come to think of it as relying on an inherent trust that whatever process it uses to produce a coherent response (which I don’t think the LLM can really analyze after the fact) is a truth-making process, since it inherently trusts its training data and treats it as the basis of all its responses. Something along those lines. We might do something similar at times as humans: it feels similar to how some people get trapped in lies and come to equate having claimed something is true with it actually being true (pathological liars can demonstrate this kind of thinking).


> since it trusts inherently its training data and considers that the basis of all its responses.

Doesn't that make "hallucination" the better term? The LLM is "seeing" something in the data that isn't actually reflected in reality. Whereas "confabulation" would imply that LLMs are creating data out of "thin air", which would make the training data immaterial.

Both words, as they have historically been used, need to be stretched really far to fit an artificial creation that bears no resemblance to what those words were used to describe, so, I mean, any word is as good as any other at that point, but "hallucination" requires less stretching. So I am curious about why you like "confabulation" much better. Perhaps it simply has a better ring to your ear?

But, either way, these pained human analogies have grown tired. It is time to call it what it really is: Snorfleblat.



