If autopilot created a situation in which a crash was inevitable, with no warning to the user: for example, if it suddenly grabbed the steering wheel and swerved right for no good reason, spinning the car out in only a few hundred milliseconds.
Even if autopilot caused a crash like the one above, you still have to run a cost-benefit analysis of whether it prevents more crashes than it causes. Given the failure rate of human drivers, it's ok for autopilot to have a non-negligible failure rate, and it doesn't really matter whether those failures happen in the same places as the human ones.
Hands on the steering wheel don't mean affirmative control. That's why the example involves putting the car into an unrecoverable situation faster than an attentive person can react.
Well, Tesla's disclaimers say their "assistant" requires the driver to be paying attention at all times, so they would accept blame only if the system completely ignored correct user input.
I don't know of any instance where it completely ignored user input and Tesla agreed the user wasn't at fault.
No matter what the feature is called, I think it can only be blamed if it totally removes control from the driver. This accident would've been avoided had the driver been paying attention.
Cars are heavy death machines, not apps, and I have zero sympathy for people who take them for granted.