
I think there is another way to solve this. Someone could train a model on known copyrighted images, then use it as a second pass on any image produced by the primary generative model: check whether the output contains copyrighted material, and blur the matching parts (or alter them sufficiently).
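As a rough illustration of that second pass, here is a minimal sketch, with several loud assumptions: the `embed` function is a stub (flattened, normalized pixels) standing in for a real learned image encoder, the reference set is a handful of known copyrighted patches rather than a production-scale index, and the match test is plain cosine similarity with a fixed threshold. Names like `second_pass` are made up for this example.

```python
import numpy as np

def embed(patch):
    # Stub embedding: flattened, L2-normalized pixel values.
    # A real system would use a learned image encoder here.
    v = patch.astype(float).ravel()
    n = np.linalg.norm(v)
    return v / n if n else v

def box_blur(patch, k=4):
    # Crude blur: replace each k x k block with its mean.
    h, w = patch.shape
    out = patch.astype(float).copy()
    for i in range(0, h, k):
        for j in range(0, w, k):
            out[i:i + k, j:j + k] = out[i:i + k, j:j + k].mean()
    return out

def second_pass(image, reference_patches, psize=8, threshold=0.95):
    # Slide over the generated image in non-overlapping patches;
    # blur any patch whose embedding is too similar to a known
    # copyrighted reference patch.
    refs = np.stack([embed(p) for p in reference_patches])
    out = image.astype(float).copy()
    flagged = []
    h, w = image.shape
    for i in range(0, h - psize + 1, psize):
        for j in range(0, w - psize + 1, psize):
            sims = refs @ embed(out[i:i + psize, j:j + psize])
            if sims.max() >= threshold:
                out[i:i + psize, j:j + psize] = box_blur(
                    out[i:i + psize, j:j + psize])
                flagged.append((i, j))
    return out, flagged
```

For example, if a generated 16x16 image contains an exact copy of a reference patch in its top-left corner, that patch gets flagged and blurred while the rest of the image passes through untouched. A real deployment would need perceptual rather than pixel-level matching, since near-duplicates and restylings would slip past this kind of exact-similarity check.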

Another change could be made to the license agreement of LLMs: have the user, rather than the provider, assume liability for any material produced. The user would agree that obtaining the rights to copy and distribute any copyrighted material is their sole responsibility.


