Hacker News

> "It is basically like having an incredibly smart engineer/scientist/philosopher/etc. that can explain things quite well, but for pretty much every field. Does this 'person' make mistakes?"

Does this person stubbornly insist on being right, even if they are wrong?

Quite likely — that's just how humans usually behave, and ChatGPT does the same.

At least that's my thinking whenever people complain about ChatGPT being a know-it-all bluffer.



> Does this person stubbornly insist on being right, even if they are wrong?

Actually, ChatGPT will almost always change its tune if you tell it that it is wrong. That's because it is a stochastic parrot: being corrected is an "unlikely" event if the answer it gave was actually right, so folding is the likely continuation. But we'll see if this gets updated/broken too.
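To make the "stochastic parrot" point concrete, here's a minimal toy sketch (all probabilities are invented for illustration, not drawn from any real model): the model just emits whatever continuation is most likely given the context, so once the user says "you're wrong", capitulation is the probable next move regardless of whether the original answer was correct.

```python
# Toy illustration of the "stochastic parrot" argument above.
# The conditional distributions below are made up; a real LLM learns
# them from training data, where a user correction is rarely followed
# by the assistant standing firm.

CONTINUATIONS = {
    "assistant_answered": {
        "user accepts the answer": 0.7,
        "user says 'that's wrong'": 0.3,
    },
    "user_said_wrong": {
        "assistant apologizes and revises": 0.85,
        "assistant stands firm": 0.15,
    },
}

def most_likely(context: str) -> str:
    """Return the highest-probability continuation for a given context."""
    dist = CONTINUATIONS[context]
    return max(dist, key=dist.get)

print(most_likely("user_said_wrong"))  # the parrot folds
```

The point of the sketch: nothing in the selection step consults whether the original answer was true — only which continuation is most probable given the preceding text.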


I've seen ChatGPT change its tune, but I've also seen it squirm plenty. Overall I think it behaves in an uncannily human way in this regard.


ChatGPT is pretty amenable to being corrected if it says something you know is crap. And then, in my experience, you usually get better results.

That is not something I can say for most humans.


Only if you correct a wrong answer. If you "correct" a right answer ("I don't think that's right") then it gives an even worse answer. So be careful when correcting.



