For background imagery in in-house presentations, I have switched almost entirely to genAI artwork. Previously, I would use a combination of licensed stock image services and small commissioned pieces on something like fiverr.
Now I get faster turnaround and more choice and control, all self-service. Do the resulting images hit the same high notes that a $20 or $50 Fiverr commission would? No, but they're usually 95%+ as good, I get them in 2-3 minutes, and I can rev them as much as I care to.
I used to work in games; for game artwork, I suspect that custom in-house tooling may let N/3 artists do the same work that took N artists back in 2010. Even if it's only 3N/4, game studios will make that investment.
From what I've seen, generators are good for routine jobs, like generating backgrounds. I used one to generate illustrations for texts, and it works well. A short story just looks better when there is an image.
> LLMs translate textual descriptions and are part of GenAI compute.
You are talking about embeddings, which is a different thing. That's when a model generates a numeric representation (an embedding) of the given prompt; the embedding is then used to condition the generator's output.
An LLM, in its basic form, is a text model that predicts the next word. After fine-tuning it can do more, like answer questions or follow instructions.
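To make the "predicts the next word" point concrete, here's a toy sketch. This is not a real LLM, just a bigram counter over a made-up corpus; real models use learned neural networks, but the interface (current context in, most likely next word out) is the same idea:

```python
from collections import Counter, defaultdict

# Made-up toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram "model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Pick the most frequent continuation seen in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real LLM replaces the count table with a trained network and operates on tokens rather than whole words, but "given the text so far, score the next token" is still the core loop.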
So, the models used by generators aren't exactly LLMs. With one exception that I know of: ChatGPT processes the prompt before sending it to the DALL-E 3 generator, which then makes an embedding from it.
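The text -> embedding -> conditioned-output flow described above can be sketched with stand-in functions. Everything here is made up for illustration: a real system uses a learned text encoder (e.g. CLIP's text tower) and a diffusion model, not a hash and an RNG. The sketch only shows the shape of the pipeline: the prompt determines the embedding, and the embedding conditions (here, fully determines) the generator's output:

```python
import hashlib
import random

def embed(prompt: str, dim: int = 8) -> list[float]:
    # Stand-in text encoder: a deterministic pseudo-embedding derived
    # from a hash of the prompt. Real encoders are learned, not hashed.
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def generate_image(embedding: list[float], size: int = 4) -> list[list[float]]:
    # Stand-in "generator": seeds an RNG from the embedding, so the
    # output pixels are conditioned on the prompt's vector.
    rng = random.Random(str(embedding))
    return [[rng.random() for _ in range(size)] for _ in range(size)]

# Same prompt -> same embedding -> same "image"; a different prompt differs.
img_a = generate_image(embed("a cat on a mat"))
img_b = generate_image(embed("a cat on a mat"))
img_c = generate_image(embed("a dog in a fog"))
print(img_a == img_b, img_a == img_c)  # True False
```

The point of the sketch: the generator never sees the words, only the embedding, which is why the upstream prompt-rewriting step (as in ChatGPT feeding DALL-E 3) can change the output so much.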
You are right. Anyway, I was intending to respond not to the LLM part of the post, but to the AI "doesn't compete with artists" part. My point being: I think current generative AI is VERY much competing with artists (or at least with people who "manufacture art", like illustrations, graphics, etc.)