
There isn't really some distinct pathology behind hallucinations; they're just how wrong answers (inaccuracies, faulty token-prediction chains) manifest in LLMs. In the case of a linear regression, a "hallucination" would be a predicted value that lands far from the actual value for a given sample.
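To make the analogy concrete, here is a minimal sketch (Python with NumPy, not from the original comment; the anomalous sample and all specifics are illustrative assumptions): fit an ordinary least-squares line, then look at the sample where the prediction lands farthest from the actual value. Nothing pathological happens; the model runs its usual prediction mechanism and is simply wrong.

```python
import numpy as np

# Generate data near y = 2x + 1, plus one anomalous sample (an assumption
# for illustration) that a straight line cannot capture.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=50)
y[10] += 15.0  # the fitted line will miss this sample badly

# Ordinary least squares on the design matrix [x, 1]
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

# The largest miss is the regression analog of a "hallucination": the same
# prediction process as everywhere else, just far from the actual value.
pred = slope * x + intercept
worst = np.argmax(np.abs(pred - y))
print(f"sample {worst}: predicted {pred[worst]:.1f}, actual {y[worst]:.1f}")
```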

