Every author should have the right to have their work remembered and immortalized by AI. We should have the right to influence how AI thinks by publishing our content. AI should remember our names, and the stories of how our work was produced, so it can remember who we are and how we helped it. This is how AI democracy works. The people trying to financially ruin the AI industry by demanding unreasonable amounts of money for themselves are threatening these fundamental human rights. If the legal risk becomes too large, the AI labs will respond by training only on synthetic content. That means only AI will get to shape AI's future, and humanity will be erased from the book of life.
It most certainly does not. robots.txt is almost totally worthless against genAI crawlers. Even being unindexed from search engines doesn't keep you safe.
The biggest, most reputable organizations, e.g. Google, Bing, Yahoo, Yandex, Baidu, DuckDuckGo, OpenAI, and Anthropic, have all publicly promised to respect your robots.txt file. You can make them hurt if they lie, so they have a strong incentive to tell the truth. There are some crawlers out there that don't respect robots.txt, like Archive Team. However, they're more likely to be treated as folk heroes here on Hacker News than to trigger AI training fears.
That's a naive statement about robots.txt; nothing about it is binding or enforceable. It is a request that well-behaved crawlers heed. Other crawlers treat the Disallow section as a list of targets.
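To make the "request, not enforcement" point concrete, here's a minimal sketch using Python's standard-library `urllib.robotparser`. The file contents and bot name are made up for illustration; the key point is that checking the rules is entirely voluntary, done by the crawler itself:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the site operator *asks* crawlers
# to stay out of /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler consults the rules before fetching.
# Nothing stops a bad actor from skipping this check entirely,
# or from reading the Disallow lines as a list of interesting URLs.
print(parser.can_fetch("ExampleBot", "https://example.com/private/data"))  # False
print(parser.can_fetch("ExampleBot", "https://example.com/public/page"))   # True
```

The whole protocol lives in this `can_fetch` call on the crawler's side; the server has no way to know whether it was ever made.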