
It was mostly a tangential thought.

People could, of course, see a photo of a happy black person among 1,000 photos of unhappy black people, say that person looks happy, and realize the LLM is wrong, because human brains are pre-wired to perceive emotions from facial expressions. LLMs, on the other hand, will pick up on any correlation in the training data and use it to make associations.

But in general, excepting ridiculous examples like that, if an LLM says something that a person agrees with, I think people will be inclined to (A) believe it and (B) not see any bias.



Is it ridiculous? It's just one example. There are probably millions more that aren't about race-related emotions.



