Literally any time an AI company talks about safety, it's marketing. The media keeps falling for it when these companies say "gosh, we've built this thing that's just so powerful and so good at what it does, look how amazing it is, it's going further than even we expected." It's utterly transparent, yet people keep buying it.
Do you have any actual proof of your assertion? Anthropic in particular has been more willing to walk the walk than the other labs and AI safety was on the minds of many in the space long before money came in.
Anthropic, the company that recently announced you're no longer allowed to hurt the model's feelings, because they believe (or rather, want you to believe) that it's a real conscious being.
That is not an accurate characterization and you know it. Engaging in bad faith is against HN rules.
Furthermore, refusing to explore even the mere possibility of pain and suffering in the brain your laboratory is growing is morally reckless. Anthropic is doing the right thing here, and they should not listen to the naysayers.