My wife and I took a road trip that included time in SF last year and seeing a Waymo was pretty neat.
To save some money, we stayed in downtown Oakland and took the BART into San Francisco. After getting ice cream at the Ghirardelli Chocolate shop, we were headed to Pier 39. My wife has a bad ankle and can't walk very far before needing a break to sit, so while we could have taken another bus, we decided to take a Waymo for the novelty of it. It felt like being in the future.
I own a Tesla and have had trials of FSD, but being in a car that was ACTUALLY autonomous and didn't merely pretend to be was amazing. For that short ride of 7 city blocks, it was like being in a sci-fi film.
The company selling the car is adamant that none of their cars are fully autonomous in any legal or regulatory context. Any accident caused by the car is 100% the fault of the driver. Yet the company markets their cars as fully autonomous. That's pretty much the definition of pretending to be autonomous.
I think we are at the point where the data suggests they bear more risk when they drive the Tesla themselves. See the Bloomberg report on accidents per mile.
Are you referring to the one where a Waymo and several other cars were stopped at a traffic light when another car (incidentally, a Tesla) barreled into the traffic stack at 90 MPH, killing several people?
Because I am not aware of any other fatal accidents where a Waymo was even slightly involved. I think it's, at best, misleading to refer to that in the same sentence as FSD-involved fatalities where FSD was the direct cause.
The key difference is that the Teslas killed their passengers, while in the Waymo case the person killed was outside the car (and it wasn't the Waymo's fault; it was hit by another car).
Yes. [1] That incident got considerable publicity in the San Francisco media. But not because of the Waymo.[2][3]
Someone was driving a Tesla on I-280 into SF. They'd previously been involved in a hit-and-run accident on the freeway. They exited I-280 at the 6th St. off-ramp, which is a long straightaway. They entered surface streets at 98 MPH in a 25 MPH zone, ran through a red light, and reached the next intersection, where traffic was stopped at a red light. The Tesla Model Y plowed into a lane of stopped cars, killing one person and one dog, injuring seven others, and demolishing at least six vehicles. One of the vehicles waiting was a Waymo, which had no one on board at the time.
The driver of the Tesla claims their brakes failed. "Police on Monday booked Zheng on one count of felony vehicular manslaughter, reckless driving causing injury, felony vandalism and speeding."[2]
The question should be less who was at fault and more whether a human driver would have reacted better in that situation and avoided the fatality. I'm not sure why you think that whether the fatality occurred inside or outside of the car changes the calculus, but in that case only one of the two documented Tesla FSD-related fatalities killed the driver. Judging by the incident statistics of Tesla's Autopilot going back over half a decade, I'm pretty sure it's significantly safer than the average human driver and continues to improve, and the point of comparison in the original post was with human driving rather than Waymo. I have no doubt that Waymo, with its constrained operating areas and parameters, is safer in aggregate than Tesla's general-purpose FSD system.
FSD is not Autopilot, despite the names being conflated today, but even if you want to count all 28, comparing raw numbers of fatal incidents isn't meaningful without considering the difference in scale. That's not to justify taking your eyes off the road when enabling FSD on a Tesla, but the OP wasn't suggesting that anyway.
There is no world in which New York lets Teslas drive autonomously in the next decade. Had they not been grandfathered in in California, I doubt politics there would have allowed it either.
Are you trying to draw a distinction between sleeping versus looking away from the road and not paying attention to it? I expect both situations to have similar results with similar levels of danger in a Tesla, and the latter is the bare minimum for autonomous/unattended.
No, rather: if the manufacturer of the self-driving software doesn't take full legal liability for actions taken by the car, then it's not autonomous. That is the one and only criterion for a self-driving vehicle.
Right now, Tesla skirts legal liability by saying that the driver needs to watch the road and be ready to take control, and then uses measures like detecting your hand on the wheel and tracking your gaze to make sure you're watching the road. If a car driving on FSD crashes, Tesla will say it's the driver's fault for not monitoring the drive.
Hell, they'll even hold a press conference touting (selective data from) telemetry to say "The vehicle had been warning him to pay attention prior to the accident!"
And then four months later when the actual accident investigation comes out, you'll find out that yes, it had. Once. Eighteen minutes prior to the accident.
And then, to add insult to injury, for a very long time Tesla would fight you for access to your own telemetry data if, for example, you needed it in a lawsuit against someone else over an accident.