1. Check frequency (ranging from checking every single output to occasional spot checks).
2. Check thoroughness (ranging from antagonistic, in-depth review to a high-level skim).
I'd agree that if you're at the maximal end of both dimensions, the system isn't generating any value.
A lot of folks are taking calculated (or, in some cases, reckless) risks right now by relaxing one or both of those dimensions. I'd argue that in many situations the risk is small and worth it; in many others, not so much.
That is politics, not engineering. Assigning a human to "check the output every time" and blaming them for faults in the output is just appointing a scapegoat.
If you have to check the AI's output every single time, the AI is pointless: you may as well do the work yourself from the start.