Hacker News

> OpenAI are working on watermarking their generated content so other platforms can detect it automatically

So when is this technology gonna be advanced enough for us mere mortals to run it on our own computers with our own models free from pointless restrictions that exist purely to appease threatened stakeholders?
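For context, one published family of watermarking schemes (not necessarily what OpenAI is building) works by biasing generation toward a pseudorandom "green" subset of the vocabulary and then, at detection time, counting how many tokens land in that subset. A toy sketch below, with a made-up vocabulary and all parameters chosen for illustration:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5                      # fraction of vocab marked "green"

def green_list(prev_token):
    """Pseudorandom 'green' subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def green_count(tokens):
    """How many tokens fall in the green list of their predecessor."""
    return sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev))

def z_score(tokens):
    """Standard score of the green count vs. the unwatermarked expectation.

    Unwatermarked text should score near 0; text generated while favoring
    green tokens scores high, which is the detection signal.
    """
    n = len(tokens) - 1
    expected = GREEN_FRACTION * n
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (green_count(tokens) - expected) / variance ** 0.5
```

A generator that always samples from the green list pushes the z-score to roughly sqrt(n) for n tokens, while ordinary text hovers near zero, so detection is a simple statistical test. Note this only works if the detector knows the seeding scheme, which is exactly why it stays in the platform's hands rather than ours.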



According to this link, which was posted on HN a couple of days ago, it currently costs around $10 million to train something like GPT-3.

https://www.nextplatform.com/2022/12/01/counting-the-cost-of...
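You can sanity-check that figure with the common rule of thumb of ~6 FLOPs per parameter per training token. Every number below is an assumption (A100 peak throughput, utilization, cloud pricing), and the linked article uses its own figures, but the calculation lands in the same order of magnitude:

```python
# Back-of-envelope GPT-3 training cost. All inputs are assumptions.
params = 175e9               # GPT-3 parameter count
tokens = 300e9               # training tokens, per the GPT-3 paper
flops = 6 * params * tokens  # ~6 FLOPs per parameter per token (rule of thumb)

peak = 312e12          # assumed A100 fp16 peak FLOP/s
utilization = 0.30     # assumed real-world utilization
price_per_hour = 2.50  # assumed cloud price per GPU-hour, USD

gpu_hours = flops / (peak * utilization) / 3600
cost = gpu_hours * price_per_hour
print(f"{gpu_hours:,.0f} GPU-hours, ~${cost / 1e6:.1f}M")
```

Under these assumptions you get roughly a million GPU-hours and a low-single-digit-millions cost; with on-demand pricing, older GPUs, or lower utilization, estimates climb toward the ~$10M figure cited above.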


Trained on what, though? Procuring training material is no trivial feat; in fact, it might be the more elaborate part. This is where Google has the home advantage: they have their own copy of the internet.


Read up on the datacenter OpenAI had to spin up within Azure in order to train GPT-3. A little out of reach for now, but in 10 years, who knows.


To be fair, training is a lot more intensive than inference. Though if open source models are any indication, the big issue is actually VRAM requirements.
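The VRAM point is easy to quantify: just holding the weights in memory dominates inference requirements, and the per-weight byte count is what quantization attacks. A rough estimate (the 20% overhead factor for activations and KV cache is a guess):

```python
def vram_gb(params_billion, bytes_per_weight, overhead=1.2):
    """Rough VRAM (GiB) to hold the weights, plus ~20% assumed
    overhead for activations and KV cache."""
    return params_billion * 1e9 * bytes_per_weight * overhead / 2**30

for name, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"175B @ {name}: ~{vram_gb(175, nbytes):.0f} GiB")
```

A 175B-parameter model needs on the order of 400 GiB at fp16, i.e., multiple datacenter GPUs even before you run a single token, which is why open-source efforts gravitate toward smaller models and aggressive quantization.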


The restrictions are not pointless. People shouldn't have to endure AI generated spam, no matter how well-formatted.

I can opt out of robocalls, email spam, and programmatic texts, and even take legal action against senders who operate outside the law. Similar legal protections should exist to shield people from AI spam. If there's a good technical solution to this problem, then the laws banning it become less necessary.




