I knew a guy who made "generated" textbooks back in 2010. He would absorb several articles and loosely stitch them into chapters with some computer scripts and from memory. In a week he could produce 400 pages on a new subject. It was mostly coherent and factual (it kept references), and it was usually the only book on the market about its subject (like a rare disease).
LLMs only ever accidentally generate useful content. They fundamentally can't know whether the things they're outputting are true; they just tend to be, because the training data also tends to be.
For several years now, Amazon KDP has blocked books whose content is already available on the web. I have printed a few books whose content was either CC-BY or in the public domain due to its age, and in each case my book was automatically blocked in the early stages. I had to submit an appeal, reviewed by a person, in order to proceed.
Today's auto-generated garbage is very different.