Automation makes it easy for anyone to do it, on demand.

That's fundamentally different from "You can make this thing if you're fairly skilled and - for some kinds of images - have specialist tools."

Yes, you should be banned for "undressing" people without consent and posting the result on a busy social media site.





Why would I need to be skilled? Isn't the issue the content, not the quality?

The quality is absolutely part of the issue. Imagine the difference between a nude stick figure labeled "your mom" and a photorealistic, explicit deepfake of your mom.

Do you find the two equally objectionable?


Well, depending on context, even the stick figure could still constitute sexual harassment.

If a big-boobed stick figure labeled "<coworker name>" were posted repeatedly on your social media, such that people could clearly tell who it referred to, there would be a case for harassment, but you'd probably just get fired anyway.


Yes, but in that case everyone would understand the image is a crude depiction of someone (reflecting badly on the poster) and not a real photograph (judging and embarrassing the target).

Well, if we could just guarantee that "AI Generated" appears at the bottom of those images, it would be clear they're not real photographs, and then wouldn't this problem disappear?

It’s impossible to guarantee that. As soon as you add that label, someone will build a tool to remove it. That’s exactly what happened with OpenAI’s Sora.

You're avoiding the question. Assume there is a technical solution that makes these generated images always obviously generated.

Where is the actual problem?

Is it that it's realistic? Or that the behavior of the person creating it is harassing?

This is pretty straightforward.


It is pretty straightforward: the problem is both.


