Then you might run into the problem that chess has right now: if you happen to write in a style that a GPT-3-trained model would produce, you will be declared a fraud. You might ask: who writes in the style of said model? Obviously, the people whose writing made up the corpus that trained the model. If you are one of them, part of your writing ends up in there.
Maybe writing styles will shift to be more in line with GPT-3-trained models. Let's face it: AI writes good prose, and as people read more and more generated content they will adopt its style.
"Photoshop for text" was the title of an HN post here just a few days ago. Many commenters delighted at the idea of being able to highlight a passage, pull down a menu, and select "Rewrite Style -> Ernest Hemingway". Nice people generally only see the immediate utility of the tool in front of them. It takes a certain mindset to see the other, "weaponised" implications.
We knew it was coming once "Rooter: A Methodology for the Typical Unification of Access Points and Redundancy" was published back in 2005 (!!!) as proof that peer review in certain "scientific" conferences and journals does not really exist. That article is machine-generated nonsense produced by SCIgen. See https://en.wikipedia.org/wiki/SCIgen for more details.
I wonder how hard it would be to train a model that, given a longer passage of text, could tell with high certainty that it is machine-generated?
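The simplest version of this is just a supervised text classifier trained on labeled human vs. machine passages. Below is a minimal sketch using a from-scratch Naive Bayes over word counts; the training examples are purely illustrative placeholders, and a serious detector would instead fine-tune a transformer or use statistical signals like per-token likelihood under the generating model.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayesDetector:
    """Toy human-vs-machine text classifier (Laplace-smoothed Naive Bayes)."""

    def __init__(self):
        self.counts = {"human": Counter(), "machine": Counter()}
        self.totals = {"human": 0, "machine": 0}
        self.docs = {"human": 0, "machine": 0}

    def train(self, text, label):
        tokens = tokenize(text)
        self.counts[label].update(tokens)
        self.totals[label] += len(tokens)
        self.docs[label] += 1

    def score(self, text, label):
        # Log P(label) + sum of log P(token | label) with add-one smoothing.
        vocab = len(set(self.counts["human"]) | set(self.counts["machine"]))
        ll = math.log(self.docs[label] / sum(self.docs.values()))
        for tok in tokenize(text):
            ll += math.log((self.counts[label][tok] + 1)
                           / (self.totals[label] + vocab))
        return ll

    def classify(self, text):
        return max(("human", "machine"), key=lambda l: self.score(text, l))

# Hypothetical training data; a real corpus would have thousands of passages.
detector = NaiveBayesDetector()
detector.train("the cat sat on the mat", "human")
detector.train("i went to the store yesterday", "human")
detector.train("as an ai language model i cannot", "machine")
detector.train("as a large language model i am unable", "machine")
```

Word-count features like these are exactly the kind of surface statistics a good generator reproduces, which is why detection gets harder as the models improve; longer passages help only because stylistic regularities accumulate.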