> In ten years, probably no human can compete with AI drivers anymore.
That's what they said 10 years ago. Sooner or later people will say it and be right, but the last few percent of any problem is a lot harder than people give it credit for. It may not be that hard to stay in a lane or write a little code, and that may look like it's doing most of the job, but those common tasks are just the easy part.
10 years ago, a lot of FSD was still manually written code. Manually written code gets harder and harder to improve, the larger the codebase.
Now it is all NNs, so it will scale with more data and more compute, both of which are increasing exponentially. So far it seems like they are not hitting diminishing returns.
Yeah, and plain Q-learning is able to iteratively improve a policy in any environment, with every single loop leading to improvement, and yet it hasn't really solved much of anything beyond toy problems.
My point being, we don't know where the asymptote lies. Computers have had self-improving algorithms since the 60s, and ever since, people have been making the same bold claim you're making: that because an iterative process for improvement has been discovered, we're close to superhuman AI.
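For reference, the "plain Q-learning" loop being discussed really is just a few lines. Here is a minimal sketch on a toy corridor environment; the environment, hyperparameters, and names are my own illustration, not from any system mentioned in the thread:

```python
import random

# Toy corridor: states 0..4, actions 0=left, 1=right.
# Reward 1 only for stepping into the terminal state 4.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    done = s2 == GOAL
    return s2, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Core Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)  # greedy policy learned: always go right toward the goal
```

Every pass through the loop nudges the policy toward the optimum, which is exactly the point: an improvement loop that provably converges on a 5-state corridor tells you very little about where the asymptote lies on a real road.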
Fwiw, I work in home robotics, but have no experience in self-driving. My halfway-naive belief is that self-driving is easier than getting useful home robots; in fact, I feel it's not even a close comparison. Some reasons:
- The home is a very unstructured environment, whereas roads have at least _some structure_, and perhaps ~70% of the most useful roads even have clear lane markings and other signs.
- People already know that roads are dangerous, and there’s an expectation that babies won’t suddenly crawl in front of cars. That expectation doesn’t exist in the home.
- People are more comfortable being recorded on roads and highways than in their own homes, so you can get training data more easily for self driving.
- To do something useful in the home, imo you need to solve navigation _and_ complicated manipulation problems. For self driving, you only need to solve the navigation problem.
- (this is speculation on my part) Customers will happily pay 10k-20k extra for a self-driving car, and there are industries in which even more cost makes sense. Customers are less likely to pay that for a robot that does your chores
Would be very interested to hear the perspective of someone who works on self-driving.
>to do something useful in the home, imo you need to solve navigation _and_ complicated manipulation problems. For self driving, you only need to solve the navigation problem.
Right. It can be challenging to figure out how fast to go, which lane to take, whether to brake, etc. in many cities. But there are really only a few things the car can control, and its objectives are pretty simple: obey the law, don't hit anything (and avoid being hit), and get to point B.
By contrast, think of all the different types of manipulation you need to clean up around the house, and the hundred judgements you make to decide what needs to be cleaned - which will vary by person.
>Customers will happily pay 10k-20k extra for a self-driving car, and there are industries in which even more cost makes sense. Customers are less likely to pay that for a robot that does your chores
It would be at least an upper-middle-class purchase at that price, but it depends on how generally useful it was. People pay thousands of dollars a year for a housekeeper to come by.
Yes, this is my point - the home is a hard place to operate in, but it has less potential for lethal outcomes. If we can solve home robotics, I think cars would be easier.
Also, a robot that replaces a housekeeper would have a huge market. I’d pay a handsome sum to have a perfectly cleaned kitchen and bathrooms every day when I wake up.
For clarity, I’ll call out the areas where I think we disagree:
> “the home … [has] less potential for lethal outcomes.”
I don’t think this is true. Roads already have systems in place to make them safer, and people are aware of the dangers. This isn’t the case at home, and useful home robots certainly have the ability to cause serious injuries or deaths.
> “If we can solve home robotics I think cars would be easier”
I also think cars are easier. However, I think this is _why_ we’ve made more progress towards solving self driving.
> “I’d pay a handsome sum to have perfectly cleaned kitchen and bathrooms every day when I wake up.”
When you say “perfectly cleaned rooms”, I think “better than you can get with a 90th-percentile hired cleaner”. I suspect useful home robots might be 10 years out, but I’m doubtful we’ll get “perfectly cleaned rooms” from a commercial home robot, by the above criteria, within even the next 50 years. Maybe controversial, but I think AGI might be easier, lol
My main thing with road safety is the presence of giant, dangerous SUVs, which one has no control over. At least I can control what is or isn’t in my home; on the roads, some asshole driving their Cybertruck at 40 mph over the limit will annihilate my hatchback. Point taken regardless, but I still worry more about cars than anything in my home.
Otherwise, I have a small child in the house, so I’d be grateful for 1st-percentile capability at the moment. ;-)
Thanks for your thoughts tho, I think we can agree the future seems interesting at the least.
> “Driving” is solved. Driving with humans on the road - doing unpredictable human things - is far off still.
Plus there are serious questions about liability with self-driving cars which are still unresolved in most of the world: if the goal is to have vehicles operate themselves with no human supervision, who goes to jail when they kill someone? Despite all of the progress that's been made with AI, it's mostly been on low-stakes problems where failure isn't a big deal, so we don't have a consensus on what we're supposed to do when a neural network negligently obliterates a person because some logistics company wanted to save a few bucks on driver salaries.
The answer almost certainly has to be the manufacturer. I'm sure not responsible if my properly maintained and used self-driving car kills someone. That said, it's a novel area that doesn't have a clear analog to other products today.
There's also the question of incident response: if a human driver "malfunctions", you take them out of service and the rest of the world keeps going. But if a self-driving model malfunctions, there are potentially millions of vehicles running the same software, ready to make exactly the same mistake until the issue is isolated and fixed. If the software is demonstrably dangerous, should we ground the entire fleet of vehicles running it until the issue is resolved and the software re-certified? How much would that cost?
“Driving” is not solved unless you mean perfectly paved roads in perfect weather with no traffic and no pedestrians. A competent solution like Waymo can handle significantly more complex cases at real-world levels of complexity, but it is still unclear how comprehensive and robust that really is across the massive complexity of reality, even without other cars on the road. There is simply not enough data, and no independent audits yet.
It is prudent to remain cautiously optimistic that the evidence will bear out in time, but not assert unsupported claims.
Precisely my point. I’m talking about the DARPA Grand Challenge era of “look, this car drives itself” being the “solved” part. If you cleared all the roads and left street signs and stoplights, I’m sure most self-driving cars would be fine.
People got way overconfident once the grand challenges were accomplished.
I think your guess, that home robotics will be solving real problems before self-driving cars git gud, will be disproven (industrial robotics has been delivering value for at least five decades).
Home robotics has to solve two problems: building the robot and operating it ~perfectly. Self-driving cars already have the car, which is a waldo if you squint. What sort of sensors should be added is up for debate, but the actuation mechanism is a solved problem, and a very simple one: cars have three linear inputs and two binary ones for the turn signals. Technically a few more, but none of them are any harder.
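To make the point concrete, the entire actuation space fits in a handful of fields. This is a hypothetical sketch with names and ranges of my own choosing, not any real vehicle API:

```python
from dataclasses import dataclass

# Hypothetical control vector for a car. Field names and ranges are
# illustrative; no real vehicle exposes exactly this interface.
@dataclass
class CarControls:
    steering: float      # linear input, -1.0 (full left) .. 1.0 (full right)
    throttle: float      # linear input, 0.0 .. 1.0
    brake: float         # linear input, 0.0 .. 1.0
    left_signal: bool    # binary input
    right_signal: bool   # binary input

    def __post_init__(self):
        # Clamp the linear inputs into their valid ranges.
        self.steering = max(-1.0, min(1.0, self.steering))
        self.throttle = max(0.0, min(1.0, self.throttle))
        self.brake = max(0.0, min(1.0, self.brake))

# Out-of-range values are clamped rather than rejected.
c = CarControls(steering=2.0, throttle=0.5, brake=-0.1,
                left_signal=False, right_signal=True)
print(c.steering, c.brake)  # clamped to the valid ranges
```

Compare that with even a single robot arm, whose action space is a half-dozen joint torques plus a gripper, before you get to the perception and judgement problems layered on top.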
There's less risk of a fatality when Rosie Robot knocks over the vase you inherited from your grandmother, but people are no more tolerant of that kind of failure in home robots than they are in cars.
And cleaning a house isn't one task. It's a whole slew of different tasks which, given some basic instructions, my housekeeper can handle easily without supervision. And there's quite a bit of common sense required.
But the average hour is not driven by the average driver: better drivers drive more hours, so it has to be better than the average driver in order to result in fewer incidents.
There’s no reason better drivers drive more hours; you can be a shit semi driver. That said, it’s an irrelevant point: if it’s at average now, this is the worst it will ever be, and it’s already way above average.
I don’t disagree that self driving is good and will eventually get there. Just pointing out a flaw in your reasoning.
Yes, such a person can exist, the hypothetical counter example doesn’t disprove the general statement. I think it’s safe to say that in general the more time someone spends driving the better driver they are.
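The "average hour vs. average driver" distinction is easy to illustrate with made-up numbers; all figures below are hypothetical:

```python
# Hypothetical population: (hours driven per year, incidents per hour).
# Per the thread's premise, better drivers both drive more and crash less.
drivers = [
    (2_000, 1e-5),   # frequent, skilled driver
    (1_000, 2e-5),
    (200,   10e-5),  # infrequent, worse driver
]

# Average incident rate per *driver* (unweighted mean over people).
per_driver = sum(rate for _, rate in drivers) / len(drivers)

# Average incident rate per *hour* (mean weighted by hours driven).
total_hours = sum(hours for hours, _ in drivers)
per_hour = sum(hours * rate for hours, rate in drivers) / total_hours

print(per_driver, per_hour)
# per_hour comes out lower than per_driver: the average hour on the road
# is driven by a better-than-average driver, so an AI merely matching the
# average *driver* would still raise the number of incidents per hour.
```

Whether the premise holds in reality is exactly what's being disputed above, but the arithmetic shows why the two averages diverge if it does.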