Not the OP, but to me personally: yes. Facial structure, lips, eyes... the configuration tilts toward an expression that I interpret differently. A friend of mine is Asian; I've learned to read him better, but at first he looked to me like he had a flatter affect than average. People of color look more naive than average to me, across the board, probably due to their facial features. I perceive them as having less tension in the face, I think (which is interesting now that I think about it).
I have a background in East Asian cultural studies. A lot more expression is conveyed through the eyes there rather than the mouth. For the uninitiated it's subtle, but once you get used to it, it becomes much more obvious.
Anthropologists call these display rules and encoding differences. Cultures don't just express emotion differently; they also read it differently. A Japanese smile can be social camouflage, while an American smile signals approachability. I guess that's why Western animation over-emphasizes the mouth, while Eastern animation tends to over-emphasize the eyes.
Why wouldn't Yakutian, Indio, or Namib populations have similar phenomena that an AI (or a stereotypical white Westerner who hasn't extensively studied those societies/cultures) would fail to recognise immediately?
An AI trained on Western facial databases inherits those perceptual shortcuts. It "learns" to detect happiness by wide mouths and visible teeth, and sadness by drooping lips, so anything outside that grammar registers as neutral or gets misclassified.
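The mechanism is easy to sketch with a toy example. Here's a minimal nearest-centroid "emotion classifier" where all features, labels, and numbers are invented for illustration: trained only on samples where happiness shows up in the mouth, it files an eye-based smile under "neutral".

```python
import numpy as np

# Toy features per face: [mouth_curvature, eye_narrowing], arbitrary units.
# Hypothetical Western-biased training set: happiness encoded in the mouth.
train = {
    "happy":   np.array([[0.90, 0.10], [0.80, 0.20], [0.85, 0.15]]),
    "neutral": np.array([[0.10, 0.10], [0.05, 0.05], [0.15, 0.10]]),
}
centroids = {label: x.mean(axis=0) for label, x in train.items()}

def classify(sample):
    # Pick the label whose training centroid is closest in feature space.
    return min(centroids, key=lambda lbl: np.linalg.norm(sample - centroids[lbl]))

# A smile expressed mostly through the eyes (mouth barely moves):
eye_smile = np.array([0.15, 0.90])
print(classify(eye_smile))   # falls closer to the "neutral" centroid

# A mouth-dominant smile matches the training grammar:
mouth_smile = np.array([0.85, 0.10])
print(classify(mouth_smile))
```

The eye-based smile sits far from every "happy" example the model has ever seen, so it lands on "neutral" by default, exactly the failure mode described above. Real emotion-recognition models are far more complex, but the dependence on what the training distribution covers is the same.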
And it gets reinforced by (Western) users: a hypothetical "perfect" face-emotion-recognition AI would probably be perceived as less reliable by the white Western user than one that mirrors their biases.