
>Those books, blogs, courses, etc are at least trying to be correct and provide you with reliable information on a topic.

On topics I know really well, ChatGPT is wrong more often than courses, blogs, books, etc. Still, I don't think it's prudent to put them in two different categories, where human books are reliable and LLM answers aren't. Both make many mistakes, with a difference in degree that currently favors human books, but neither is really in a different category.

Newly published books (let alone blogs) are frequently less reliable than even Wikipedia. They are written by a handful of authors at most, get a limited period of review, and the first edition is then unleashed upon unsuspecting students until the errata list grows long enough that a 4th edition is needed.

The prime directive for LLMs with RLHF is a combination of giving the answer that best completes the prompt, and giving the answer people want to hear. The prime directive for authors is a combination of selling a lot of books, not expending so much time and energy writing the book that it won't be profitable, and not making so many mistakes that it damages their reputation.

Neither books, blogs, nor ChatGPT have any obligation to be correct. Either way, the content being reinforced (whether through money, or through training) is not The Truth straight from the Platonic Realm, but whatever the readers consider themselves satisfied with.



> and not making so many mistakes that it damages their reputation.

And that's the difference! Human authors are incentivized to provide reliable information. Reliability is not ChatGPT's concern at all; believability is. It can't even cite a source!



