
A classic example of how LLMs mislead people: they don't know right from wrong, they know what they have been trained on, even with reasoning capabilities.


That's one of my biggest hang-ups with the LLMs-to-AGI hype pipeline: no matter how much training and tweaking we throw at them, they still can't seem to avoid falling back on repeating common misconceptions found in their training data. If they're supposed to be PhD-level collaborators, I would expect better from them.

Not to say they can't be useful tools, but they fall into the same basic traps and issues despite our continued attempts to improve them.




