Hacker News

People die in car crashes all the time. Self-driving cars can kill a lot of people and still be vastly safer than human drivers.




But who gets the ticket when a self-driving car is at fault?

> who gets the ticket when a self-driving car is at fault?

Whoever was in control. This isn’t some weird legal quagmire anymore, these cars are on the road.


Apparently it IS still a legal conundrum: https://www.motortrend.com/news/who-gets-a-ticket-when-a-way...

And will continue to be until every municipality implements laws about it.


> it IS still a legal conundrum

It’s not a conundrum as much as an implementation detail. We’ve decided to hold Waymo accountable. We’re just ticking the boxes around doing that (none of which involve confusion around Waymo being responsible).


So how many violations before Waymo's driver's license is suspended?

The point of self driving is that the car is in control. Are you going to send the car to car prison?

Personally, I'd argue that if the AI killed someone due to being incompetent (as in, a human in a fit state to drive would not have made this mistake), the punishment should go to the corporation that signed off on the AI passing all relevant tests.

The nature of the punishment need not follow the same rules as for human incompetence, e.g. if the error occurs due to some surprising combination of circumstances that no reasonable tester would have thought to test. I can't really give an example, because anything I can think of is absolutely something a reasonable tester would have thought to test; but for the sake of talking about it without taking this too seriously, imagine a celebrity crossing a road while a large poster of their own face hangs right behind them.


Let me reiterate my original caution: human drivers are really bad. More than 40,000 people die in car crashes every year! If self-driving cars make mistakes in some cases that humans would not, but overall would cause only 30,000 deaths per year, then I want self-driving required. Thus I want liability standards to reflect not that perfection is required, but that the cars are better than humans.

Don't get me wrong, perfection should be the long-term goal. However, I will settle for less than perfection today, so long as it is better than humans.

Though "better" is itself hard to figure out: drunk (or otherwise impaired) drivers are a significant factor in car deaths, as is bad weather, in which self-driving currently doesn't operate at all. Statistics need to show that self-driving cars are better than non-impaired drivers in all situations where humans drive before anyone can claim they are better. (I know some data is collected, but so far I haven't seen any independent analysis. The potentially biased analyses look good, but again they are missing all weather conditions.)
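The stratification point above can be made concrete with a small sketch. All numbers below are invented for illustration (the function name, the conditions, and the figures are assumptions, not real data); the takeaway is that a missing stratum, like bad weather, makes a naive overall comparison meaningless:

```python
# Hedged sketch of the "better than humans" comparison, with made-up data.
# A real analysis would need independent, condition-stratified statistics.

def fatality_rate(deaths, miles):
    """Deaths per 100 million vehicle miles traveled (a common unit)."""
    return deaths / miles * 100_000_000

# Hypothetical stratified data: condition -> (deaths, vehicle miles).
human = {
    "clear":       (20_000, 2.5e12),
    "bad_weather": (10_000, 0.5e12),
}
self_driving = {
    "clear": (5, 1.0e9),
    # No bad-weather entry: self-driving fleets often don't operate there,
    # so an unstratified comparison is biased in their favor.
}

for condition, (deaths, miles) in human.items():
    h = fatality_rate(deaths, miles)
    if condition in self_driving:
        s = fatality_rate(*self_driving[condition])
        print(f"{condition}: human {h:.2f} vs self-driving {s:.2f} per 100M miles")
    else:
        print(f"{condition}: human {h:.2f}; no self-driving data, so no claim possible")
```

With these invented numbers the clear-weather comparison favors the self-driving fleet, but the bad-weather row has no self-driving figure at all, which is exactly the gap the comment describes.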


Those are marginal numbers. That would make the AI worse than a safe driver.

The benefits of self-driving should be irrefutable before requiring it: at least x10 better than human drivers.


The AI's benefits should be irrefutable, but this isn't as simple as "at least x10 better than human drivers", or any fixed factor. It's that whatever mistakes they do make, if you show the video of a crash to the general public, the public generally agrees they'd also have crashed under those conditions.

Right now… Tesla likes to show off stats that suggest accidents go down while their software is active, but then we see videos like this, and go "no sane human would ever do this", and it does not make people feel comfortable with the tech: https://electrek.co/2025/05/23/tesla-full-self-driving-veers...

For every single way the human vision system fails, if an AI also makes that mistake, it won't get blamed for it. But even if it solves every one of the perception errors we're vulnerable to (what colour is that dress, is that a duck or a rabbit, is that an old woman up close facing us or a young woman at a distance looking away, etc.) while also introducing a few new failure modes we don't have, it won't get trusted.


x10 improvement is the minimum bar after which a conversation can start. We should not even have a conversation until this threshold is reached.


