> OpenAI are working on watermarking their generated content so other platforms can detect it automatically
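For context on how such detection could work: the watermarking schemes discussed in the research literature (e.g. the "green list" approach) bias token selection toward a pseudorandom subset of the vocabulary, so a detector only needs to test for that statistical bias. This is a minimal hypothetical sketch, not OpenAI's actual scheme, which hasn't been published; the hashing details are assumptions:

```python
import hashlib
import math

def green_fraction(tokens, vocab_size=50_000, green_ratio=0.5):
    """Estimate the fraction of 'green-listed' tokens in a sequence.

    Hypothetical detector for a green-list-style watermark: each
    token's green list is derived by hashing the previous token, so
    watermarked text shows significantly more green tokens than chance.
    """
    hits = 0
    for prev, tok in zip(tokens, tokens[1:]):
        # Seed derived from the previous token (assumed keying scheme).
        seed = int(hashlib.sha256(str(prev).encode()).hexdigest(), 16)
        # A token is "green" if its keyed hash falls in the green range.
        h = int(hashlib.sha256(f"{seed}:{tok}".encode()).hexdigest(), 16)
        if h % vocab_size < green_ratio * vocab_size:
            hits += 1
    n = len(tokens) - 1
    return hits / n if n else 0.0

def z_score(frac, n, green_ratio=0.5):
    """How many standard deviations the observed green fraction sits
    above what unwatermarked (random) text would produce."""
    return (frac - green_ratio) * math.sqrt(n) / math.sqrt(
        green_ratio * (1 - green_ratio))
```

Unwatermarked text scores near z = 0; a generator that preferentially samples green tokens pushes the z-score high enough to flag. The catch, relevant to this thread, is that detection requires knowing the key, which is exactly why it favors a few large platforms over local models.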
So when is this technology gonna be advanced enough for us mere mortals to run it on our own computers with our own models free from pointless restrictions that exist purely to appease threatened stakeholders?
Trained on what, though? Procuring training material is no trivial feat; in fact, it might be the more elaborate part. This is where Google has the home advantage: they have their own copy of the internet.
To be fair, training is far more compute-intensive than inference. Though if open source models are any indication, the real bottleneck is VRAM requirements.
The restrictions are not pointless. People shouldn't have to endure AI generated spam, no matter how well-formatted.
I can opt out of robocalls, email spam, and programmatic texts, and even take legal action against people who send them illegally. Similar legal protections should exist to shield people from AI spam. If a good technical solution to this problem emerges, the laws banning it become less necessary.