
Likely because they've seen a lot of the potential for abuse, e.g. "generate a driver's license with this face".

So the options are: 1) nerf the model so it can't produce images like that, or 2) use some type of KYC verification.



The model is already pretty lobotomized, refusing even mundane requests at random.

Upload a picture of a friend -> OK. Upload my own picture -> I can't generate anything involving real people.

Also, after they enabled global chat memory, I started seeing text from my other chats leaking into generated images as literal strings. I've disabled it since.


Yep - the API lets you lower the moderation level, which I've observed allows more violent and graphic prompts, but moderation still exists and will often reject a prompt if it references popular figures, etc.
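
For reference, something like the sketch below is how you'd lower it from the API. It assumes the OpenAI Python SDK and an optional "moderation" parameter on image generation; the parameter name and accepted values are my assumption, so check the current API reference:

    # Rough sketch, assuming the OpenAI Python SDK and a "moderation"
    # parameter on image generation (name/values are assumptions).
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    result = client.images.generate(
        model="gpt-image-1",
        prompt="a grim, stormy battlefield at dusk",
        moderation="low",  # relaxes filtering; prompts can still be rejected
    )

    # the image comes back base64-encoded
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))

Even with moderation set low, rejections for real people and public figures still happen, so treat it as loosening the filter rather than removing it.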




