The point is that this is a philosophical problem, not a technical one. A human will recognize that it is unsolvable, whereas a machine will likely optimize for an unethical or purely cost-driven payoff without weighing the philosophical dimensions.
I'm not saying the decision will be better when a human makes it; I'm saying it will be more humanely considered.