The reason for both things is that the best models perform, at best, on the level of a recent graduate.
When would you hire a recent graduate in either role, if you could afford better?
These models are essentially the same models for both science and art, and it was a surprise to everyone that GPT-3 was able to turn into ChatGPT, or that Stable Diffusion was able to generalise so well with relatively few issues (despite the occasional Cronenberg anatomy study). The flaws in LLMs that prevent accurate science are the same flaws that cause object impermanence in written stories; the flaws that prevent image and video models from being physically plausible are the same flaws — an incorrect world model — that cause them to be wrong about weather forecasts, chemistry, etc.
In both cases, the increasing quality of AI raises the metaphorical water level, and in this case rising tides don't lift all boats, but instead drown (the careers of) people who can't swim. I don't have a fix for that, and I'm deeply skeptical that any of the suggestions from the AI firms will work — they're not economists, and even if they were (or even if they hired loads), if the AI companies are right, this change will be at least as big as the industrial revolution, which upended the old economic models.
Unfortunately, even experienced people sometimes make mistakes that a recent graduate should not make (but in practice sometimes does). AI models can help avoid mistakes that in hindsight were obvious and should never have happened.