The quality is absolutely part of the issue. Imagine the difference between a nude stick figure labeled "your mom" and a photorealistic, explicit deepfake of your mom.
Also, depending on context, even the stick figure could still constitute sexual harassment.
If a big-boobed stick figure labeled "<coworker name>" were posted repeatedly on your social media, such that people could clearly tell who it depicted, there would be a case for harassment, and you'd probably just get fired anyway.
Yes, but in that case everyone would understand the image is a crude depiction of someone, which reflects on the poster, and not a real photograph, which would embarrass the target.
Well, if we just guarantee an "AI Generated" label at the bottom of those images, it will be clear they're not real photographs. Doesn't the problem disappear then?
It's impossible to guarantee that. As soon as you add that label, someone will build a tool to remove it. That's exactly what happened with OpenAI's Sora.
That's fundamentally different from "You can make this thing if you're fairly skilled and, for some kinds of images, have specialist tools."
Yes, you should be banned for undressing people without consent and posting it on a busy social media site.