Hallucinations are what make the models useful. Well, when we like the output we call it intelligence: "look, it did something correct that was nowhere in the training dataset! It's intelligent!" And when we don't like it, we call it a hallucination: "look, it did something that was nowhere in the training dataset, and it was wrong!"
But they are the same thing: the model is extrapolating. It doesn't know whether its extrapolations are correct, because an LLM has no access to the outside world (except via you and whatever tools you give it).
If it were free of extrapolation, it would just be a search engine over the training data, and that would be far less useful.