Is it correct that Comma.ai sees this the way the statement below appears to say, or am I missing something? If so, why would anyone use this product outside of a fully controlled test environment?
—
“Any user of this software shall indemnify and hold harmless comma.ai, Inc. and its directors, officers, employees, agents, stockholders, affiliates, subcontractors and customers from and against all allegations, claims, actions, suits, demands, damages, liabilities, obligations, losses, settlements, judgments, costs and expenses (including without limitation attorneys’ fees and costs) which arise out of, relate to or result from any use of this software by user. THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT. YOU ARE RESPONSIBLE FOR COMPLYING WITH LOCAL LAWS AND REGULATIONS. NO WARRANTY EXPRESSED OR IMPLIED.”
The latest commit in the repo [0] right now is "should work" (34ff295). Filtering by "bug" in the issue tracker gives:
Comma two freeze and reboot while engaged. I recently had an incident on the interstate where my comma two froze completely (while engaged) and rebooted. The video froze, Comma's steering torque turned off, then after about five seconds in this state, the device rebooted.
Zygote restarting while OP active. So for the past couple months, after a couple days of uptime, the comma two offroad UI will glitch out. The buttons respond with highlighting upon touch, but everything else stops working. ... This time, I left the comma two to bask in its glitched state and this ended up happening; the comma two had the spinning logo, while ALSO still driving my car. In the video below, I nudge the wheel to cause ping-pong on purpose to prove it was still steering.
Spontaneous disengagement/reboot. Cruising on expressway and OP spontaneously disengaged and the comma2 rebooted
Hard braking while following the lead car. Was following the lead car in a highway traffic jam; that car was driving without lights, so that might be a reason. Braking was really hard when he stopped, almost hit him. I had a feeling that the C2 didn't see it at all.
What's more worrying is that Comma's response is often either a) declare it a hardware failure or b) basically a WONTFIX:
Comma support's response is to return/exchange the unit due to presumed hardware failure. It would be nice to know what exactly happened but I get you can't thoroughly investigate every anomaly. Folks at @commma feel free to close this issue.
@Torq_boi said that it is not a model bug, but old known problem with no time to brake as lead car accelerated and braked fast. (So could INDI tuning fix that problem?)
Closing this issue since it probably was hardware failure.
If it happens a lot it's usually a hardware failure. But try running openpilot release instead of dragonpilot before drawing any conclusions.
Comma.ai is trying to do big things and I hope they succeed. No reason self-driving technology should be bundled with a car and I have little faith in auto manufacturers to deliver.
Lane assist technology exists. Look at Consumer Reports for a comprehensive review [0] (comma.ai was #1 in lane assist, above even Tesla). They are open about their mistakes, issues and tradeoffs, much more so than other companies. I don't think it's right for engineers to use this as a cudgel to beat them over the head.
> They are open about their mistakes, issues and tradeoffs, much more so than other companies. I don't think it's right for engineers to use this as a cudgel to beat them over the head.
This openness gives us consumers an opportunity to look under the hood and evaluate the tech for what it is, which is a good thing and should be applauded.
However.
I’ve used that freedom to look at their code and processes, and I’ve decided that this is not software I want my life to depend on. There are serious outstanding bugs that Comma doesn’t have the manpower to investigate fully, their default assumption seems to be that the hardware is faulty, their test suite is regularly failing, pair programming / code review is sometimes used and sometimes not, and PRs are created and merged without a description or comment. That’s just scratching the surface, without going into things like whether Python is a good choice for this sort of thing, or whether there’s a bug in my car’s CAN implementation that will get triggered by this and end up killing me.
All in all, I trust Volvo more than I trust Comma.ai. Maybe that trust is misplaced? For what it’s worth I don’t really trust any level 2 self driving tech.
They're completely opaque. There's no issue board; you can't even find a terms of service online. No over-the-air updates. Can't even find a technical manual or version number of the software you're running. I assume you just have some car salesman point you to the button to press. Independent reviews put openpilot much higher than Volvo, despite Volvo having much more reach and influence in the industry.
Like I've said elsewhere, they have a very loyal user base, probably driven more miles than the others combined, lots of discussion online and are more open by a mile. They even store the raw video to train, not just feature vectors.
To me this is all signs that their product is relatively strong. And I'd be willing to bet their processes and standards are much better than big auto.
> And I'd be willing to bet their processes and standards are much better than big auto.
Comma has a neat product, but I really do not agree with comments like this. The auto industry is terrible, but they are miles ahead of Comma in the process realm.
The industry has spent a lot of time on safety standards like ASIL and AUTOSAR. They would never allow Comma to ship, because it is written in Python, a pausable language (absurd), and without any hardware protections like lockstep processors or watchdogs, if only because it is literally a consumer phone. A system connected to the steering and acceleration of a vehicle freezing while still in control is ridiculous.
Comma will kill someone; it is only a matter of when, not if.
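To make the watchdog point concrete, here's a minimal sketch (all names and constants are mine, purely illustrative) of the software-watchdog pattern safety-critical stacks build on: the control loop must kick the watchdog every tick, and a missed deadline triggers a safe fallback instead of a frozen process silently staying in control of the vehicle.

```python
import threading
import time

class Watchdog:
    """Fires a fallback action if kick() isn't called within timeout_s."""

    def __init__(self, timeout_s, on_trip):
        self.timeout_s = timeout_s
        self.on_trip = on_trip
        self.tripped = False
        self._last_kick = time.monotonic()
        threading.Thread(target=self._monitor, daemon=True).start()

    def kick(self):
        # Called by the control loop once per tick to prove liveness.
        self._last_kick = time.monotonic()

    def _monitor(self):
        while not self.tripped:
            if time.monotonic() - self._last_kick > self.timeout_s:
                self.tripped = True
                self.on_trip()
            time.sleep(self.timeout_s / 4)

def release_control():
    print("watchdog tripped: cut steering torque, alert driver")

wd = Watchdog(timeout_s=0.05, on_trip=release_control)
for _ in range(10):        # healthy loop: kicked every 10 ms, never trips
    wd.kick()
    time.sleep(0.01)
time.sleep(0.2)            # simulated freeze: no kicks, monitor fires
print("tripped:", wd.tripped)  # tripped: True
```

Real automotive watchdogs are independent hardware timers that reset or de-energize the ECU when starved; a thread in the same process, like the above, only demonstrates the idea and dies with the process it's meant to supervise.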
This just isn't true as far as I am aware. The vehicle interfaces[0] and control loop are all in Python. It doesn't help if other components are in C if your commands to the vehicle are interrupted by GC pauses.
Even if it was true, and it was all in C, it is only better because there is no GC -- there are still a myriad of potential issues that the auto industry routinely addresses in both software and hardware that Comma either cannot or does not. Not to say they are perfect, but Comma is well below any sort of bar.
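The GC concern is easy to demonstrate. A hedged sketch (heap shape and sizes made up) showing how long a full CPython collection can stall a process once the heap grows, which is exactly the kind of pause a pure-Python control loop would absorb mid-tick:

```python
import gc
import time

# Build a heap full of container objects, as a long-lived process
# accumulates over hours of driving (sizes here are arbitrary).
heap = [{"frame": i, "data": list(range(50))} for i in range(200_000)]

start = time.perf_counter()
collected = gc.collect()   # a full collection walks every container object
pause_ms = (time.perf_counter() - start) * 1000
print(f"full GC pause on this heap: {pause_ms:.1f} ms")

# At a 100 Hz control rate each tick has a 10 ms budget; any pause longer
# than that delays the next actuator command by at least one tick.
```

The pause scales with the number of live container objects, so the number printed depends entirely on the heap; the point is only that it is unbounded by the control loop's deadline.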
> No reason self-driving technology should be bundled with a car
It seems to me that there are many reasons it should be bundled, and I'll bet that in the long run all self-driving cars will be integrated systems. It's not a good place for inconsistent installations or a modding mentality: imagine multiplying Uber and Tesla's programs a thousandfold with fewer resources and less accountability.
I disagree. I think if there's a clean uniform interface through which a device can receive video and issue commands, it could work out. Consider the fact that no automaker I've seen, apart from Tesla, has made an infotainment system equivalent to a first-generation iPad. I know it's trendy to hate on infotainment, but I just want to see a map w/ directions to where I'm going. They're a lost cause in my opinion, and the less they do on the software front, the better.
Automakers gave up on that and now include Android Auto and Apple CarPlay as a feature. So I imagine they'll eventually give up and outsource things like lane assist, which are considerably more difficult.
Autonomy is going to require a lot more than a CAN bus & some video interfaces. The specific sensors matter. The framerates matter. The field of view & mount points matter. The specific compute platform matters -- and it certainly shouldn't run on an Android or iPhone without automotive-rated components, realtime guarantees, etc.
All of these things argue strongly for a holistic vehicle design rather than an aftermarket afterthought.
That's fair. But I think for their current use case, lane keep assist, they make do with less. They're comfortable in Level 2 self-driving. For anything above that you're probably right, but I would love to see big auto outsource much of it to a company dedicated to self-driving. Specialization is essential in these sensitive fields, and automakers already rely heavily on suppliers. Very few can get away with the Apple model, and even Apple relies on outside parts.
The funny thing is that w/ any other product we would scoff at a fully integrated end-to-end solution for a product that doesn't exist. No product-market fit, no minimum viable product, no incremental products.
Yet they do the Tesla trick and present the product as a fully self-driving vehicle. Disingenuous to the point of fraud: "here's a car in public traffic, with nobody at the wheel, nudge nudge [small print: it's Level 2 and you get to clean up all those nasty pieces, nothing to do with us at all; you're only supposed to run it in a tightly controlled, isolated private environment]"
Or, imagine if there were standard interfaces between the motors/control system and the "brains" of self-driving.
Like how CDs and peripherals don't all need to come from the same manufacturer ('cept apple's!), because there's an industry standard for interoperability.
Apple music peripherals are a great example of what I'm talking about. They don't use bluetooth because W1 chips (their integrated solution) blow bluetooth out of the water. Bluetooth can still exist in other devices because people don't need low latency and high bitrates, but they definitely need their cars not to crash because a standard causes hardware and software from different vendors to run into interoperability quirks. There's no agreement on the best hardware or even the type of hardware for self-driving, so there will be a lot of those quirks.
The mistake seems to be "there's a standard [and it Works more than 80/20]." There are multiple to choose from, and in versions, and each has revisions, and now Foo v2.1 doesn't want to work with Foo 2.0, except on even Mondays, and don't get me started on Foo 3.x with Bar 1.5.x!
You're handwaving away all the inherent complexity, but shoveling it off into a box labeled "standardization" doesn't make it go away.
In other words, the interoperability is very much "try to swap these components, if they work, yay, if not, try swapping them for something else until you get a combination that works." And that's for non-critical components.
> No reason self-driving technology should be bundled with a car
Aftermarket self-driving is inherently a compromise.
OEMs can design the car for their self-driving system. They can place sensors in the right places, size the actuators appropriately, and integrate vehicle dynamics data into the system.
OEMs can also properly safely test the entire system, operating as a unit, as it would operate on the road.
OEMs also leverage economies of scale to make the best technologies available at reasonable prices. Automaker net profit is on the order of 10%, and their scale is massive. You're getting a lot of bang for your buck.
Aftermarket systems are inherently a compromise. They're limited to whatever sensors can be easily mounted in the aftermarket device. The device is only mounted where convenient. The device can't take advantage of vehicle-specific sensors or actuators or vehicle dynamics data because they need to keep it generic.
And most importantly, aftermarket self-driving device manufacturers obviously aren't safety testing the device in combination with every single car on the road. They expect their users to just sort of wing it and see how it goes.
I think it's cool that Comma is working on this, but it's not going to replace OEM solutions any time soon.
> Look at Consumer Reports for a comprehensive review [0] (comma.ai was #1 in lane assist, above even Tesla).
The report shows Tesla's system was more capable and performed better. Comma tied with 3 other manufacturers for performance.
The advantage of the Comma system was supposedly in the fact that it keeps drivers the most engaged. Kudos to Comma for that, but it's not exactly accurate to say that Comma's lane assist performance beats Tesla.
I don't think comma.ai should be faulted for being open. I have trouble finding any statements on liability on any other lane assist technology. Would love to be proven wrong with an actual policy.
Volvo sells a car with an optional feature that has hardware and software components. If you can prove that the hardware or software is “unreasonably dangerous” in design or implementation, you may have standing to sue Volvo.
Comma sells only some hardware (with some limited software?). Comma suggests, but is careful not to encourage for street use, that customers modify the hardware with open source software that makes the hardware dramatically more useful. If the hardware functions perfectly but this software proves unreasonably dangerous, you do not likely have standing to sue Comma.
I think it’s really cool that late model cars are adequately unencumbered to make experimentation like Comma possible. Aside from the sketchy commercialization, Comma seems impressive and valuable. I have seen no evidence Comma is any less safe than the auto manufacturers’ LKA products.
My beef is that Comma-the-product imposes an externality on our roads by bringing experimental driver assistance software to the mass market that is not backed by the software product’s developers (or any other entity).
> If you can prove that the hardware or software is “unreasonably dangerous” in design or implementation, you may have standing to sue Volvo
I imagine this is true for Comma as well. I don't think their little waiver and notice that it should only be used in a research setting would hold up in court. Much like "incense" products were banned despite being marked "not for human consumption".
> My beef is that Comma-the-product imposes an externality on our roads by bringing experimental driver assistance software to the mass market that is not backed by the software product’s developers (or any other entity).
I think their product is safe and I'm glad it exists out there. They claim 35 million miles driven and I have yet to see one serious accident. So I think the risk is overstated.
People like Thiel often complain about the lack of innovation. This is real innovation and could transform society. I think we should applaud the people that have the audacity to tackle hard problems that can change people's lives. The only way you'll get there is real miles driven on real roads. As a society, we should not have a zero risk attitude, otherwise nothing would ever get built.
In the technical aspect you are correct. In the legal aspect, nope. Even if (for the sake of the example) this were identical technology under the hood w/ Volvo and Comma, there's a world of difference between "I bought a stock car that is certified to be street-legal" and "I installed some aftermarket thing into my car, despite being warned that it might void the car's street-legality."
It's not just what you think, not least because your driver's license is a permit, not a right. (On the other hand, I agree that the bar requested for computer driving safety is at least an order of magnitude higher than for human driving safety, and that the elephant in the room is the risky road environment that we pretend doesn't exist for humans.)
That essentially boils down to the color of the bits. In other words, what git log says is a different bit color (technical) from what the EULA (legal) says, and you can't meaningfully use the legal-colored bits in a technical context and vice versa. In other words, if it breaks, you get to keep all the nasty pieces, not the company, because the EULA says so.
>THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT.
It is weird that one of the only things I see above the fold on the company's home page is a "Buy Now" button considering they don't actually sell "a product".
I think their distinction is that they’re selling the hardware, which is capable of controlling the car just fine. So the thing you’re paying for is delivering as promised. The software is a separate project, and you could theoretically load whatever software on the hardware you wanted. So the fact that the software is glitchy is not (in the view of the company) something you can hold them responsible for. You paid for hardware, you got hardware. What you do with it is up to you.
This is at least what I remember from a years old Wired article when the comma one was being developed.
Whether that will actually hold up in court is TBD, considering how closely coupled the software is to the company and hardware.
Sure. You can load any software you want on that, so it's actually for playing DOOM on your car's entertainment system, and it has absolutely no relation to the demos on their pages.
IANAL, but "the dog ate my steering wheel" would be a more plausible defense.
It’s because the standard product doesn’t have autopilot, they sell a driver assistance tool. The tool they sell does not have autopilot.
The device is open, and you can flash with their open source code from GitHub to give you hacky autopilot. This is how they get around the legal issue.
It’s like a “we sell you a legal product. We advise you don’t flash this code on it which we are hosting on GitHub wink wink”
Reminds me of university. "No officer, we weren't selling tickets to the keg, we'd need a license to sell booze. We're only selling cups for $5, the beer is free!"
This appears to be the strategy. In the U.K. they used to sell legal highs in tablets marked "do not eat - plant food", since technically they weren't being sold for human consumption. That's effectively the same thing!
> Yeah - I don't think this would hold up, you can't really have it both ways.
I don't think they would have any legal problems due to this. They sell it but clearly label it as experimental, for research only, and urge buyers to comply with local regulation. And the law pretty much everywhere states that the driver is responsible for driving the car and for the outcomes of any modification to the car that has not passed homologation.
Tesla is a real example that passed this test. Their marketing language brands AP as "fully self driving, some features unavailable due to local laws". The "wink wink" may be obvious for the buyer but not in the eyes of the law. Letting the car drive itself is the driver's failure, not Tesla's. Tesla can at most be held responsible for misleading advertisement and ordered not to use specific language (as it actually happened).
> "Tesla is a real example that passed this test. Their marketing language brands AP as "fully self driving, some features unavailable due to local laws"."
This isn't true, FSD has always been a 'coming soon' feature you can prepay for distinct from autopilot. Autopilot has always been advanced lane assist. "Autopilot" in planes just holds the same flight pattern and doesn't really do anything sophisticated, autopilot in Tesla is similar.
German authorities did find Tesla's claims about the AP as misleading [0]. Tesla's AP page was updated (globally) to reflect "full self driving in the future" but until recently it just said "Tesla cars come equipped with all the necessary hardware for full self driving" with a footer claiming some features are unavailable due to local laws.
Regardless, Tesla was not found guilty for anything other than misleading advertisement, not for failing to build a car that drives itself. I don't see how comma.ai could be held to a higher standard.
> The court, in Munich, said: "By using the term 'autopilot' and other wording, the defendant suggests that their vehicles are technically able to drive completely autonomously." [0]
I don't agree, but I know this is a position a reasonable person could hold.
I just don't think autopilot means autonomous driving and I think that's clear to people.
If someone thinks cruise control means you can crawl into the backseat of the car, is that the fault of the car manufacturer?
I also think Germany may have a bias given their own car industry.
What comma.ai is doing I think is categorically different.
I'm okay with people hacking on their own stuff, I just don't like the "This is Alpha don't use it" when they obviously intend you to buy and use it that way.
> I also think Germany may have a bias given their own car industry.
This isn't really a fair argument to be honest. You're brushing the court's justification aside ("By using the term 'autopilot' and other wording, the defendant suggests that their vehicles are technically able to drive completely autonomously.") to focus on a weak link between the country having a strong auto industry, and the justice system banning advertisement for something Tesla does not actually deliver.
> I just don't think autopilot means autonomous driving and I think that's clear to people.
Tesla was claiming their cars have "all the necessary hardware for FSD" since at least 2016 [0]. That's an obviously misleading statement since not even expert engineers know if that hardware is enough. If anything the general consensus is that it isn't, and Musk's missed promises support this.
> If someone thinks...
When it comes to misleading advertisement the technical definition isn't very relevant. As the name suggests, it's about whether enough people are misled into believing they're buying something else. This was raised by consumer groups after realizing the paid-for promise of FSD never came. Customers shouldn't be expected to be experts in all things. So if the marketing makes it sound to regular people like the product is something other than what it actually is, then it's fair to call it "misleading".
> This comes across as them selling a product they know could fail in dangerous ways, but they don't want to be responsible for any of it.
This is just a safety precaution. Why wouldn't they put this in there? It may not hold up in court but it can't hurt. I don't think this means they "know it could fail in dangerous ways".
The safety model in comma.ai is actually quite brilliant. It can't perform any action faster than you're able to correct and disengage. To test it, they have someone drive while a malicious passenger has full access to the controls, as limited by the software. The passenger then messes with the steering and acceleration without the main driver's knowledge; the driver has to prevent the actions. The torque limit is much lower than that of Tesla or other lane-keep-assist tools.
If you sell someone something with a nudge-nudge, wink-wink, and they get killed using it, it absolutely hurts. You may be able to weasel out of being held accountable for it, in which case it won't hurt you, but the larger issue here is that this kind of misleading copy can lead to people making poor decisions.
You may have put it in the fine print that it's not a real product, but the whole point of nudge-nudge wink-wink is to strongly imply that it's a real product worth real money, and thus you are going out of your way to encourage people to try it and take chances with real lives.
If I buy a cell phone holder for my car, and it distracts me and I get into an accident? What if CarPlay lags, I'm distracted, and I get into an accident? What if the radio plays an ambulance siren and I freak out and get into an accident? What if my sunglasses make me mistake a red light?
This product does lane assist. It does a good job according to Consumer Reports [0], higher than all other lane assists. It doesn't detect stop signs or traffic lights or drive for you. It keeps your lane. It acts predictably and gives the driver enough time to react.
Unfortunately the liability model is messed up. I think this product is relatively tame and should be allowed to exist. And you need to pay attention. They even have inward-facing cameras to make sure you're paying attention, more than most other companies. They do everything they can to be safe, but of course they're not stupid and they'll put in a sweeping statement on liability.
This is really pushing forward the self-driving industry and is an incredible feat of engineering. It's much more open and transparent than every other lane keeping software, and it's being developed with a lot of thought and care from a talented engineer as opposed to some nameless faceless bureaucratic commission in Ford or some other dinosaur.
I'm not gonna debate the "appropriate level of liability."
My point has to do with what you're signalling. If a thing is alpha-level, and real humans can get killed, I wouldn't let random people buy it and use it in their cars, period.
Informed consent is deeply problematic for a product like this: Very few people have the expertise to look at the code and the hardware and properly evaluate the risks, right down to understanding which kinds of edge cases need to be very carefully avoided.
Unless you're vetting researchers and barring people who just want to save a few bucks and brag their car self-drives, you really don't know if every person who downloads the extra software really does grasp the implications of what they're consenting to.
You might grasp the implications, and so might many people in this thread, but that doesn't guarantee that everyone does. THE AUDIENCE OF HACKER NEWS IS NOT A REPRESENTATIVE SAMPLE OF SOCIETY.
And we are talking about a product to be used on open roads: In addition to informed consent from the person who downloads the software, if they get into an accident with another vehicle, pedestrian, or cyclist, did any of those people consent to share the road with someone who installed alpha software on their device?
Morally, I can't get behind a few disclaimers and a nudge-nudge, wink-wink for any kind of autonomous driving tech, even if it's "just" lane-keeping.
———
Update: But to be clear, I am in favour of people tinkering with all sorts of digital automotive tech, and we really should find a way for lone inventors or small teams to innovate without the "enterprise outfits" using regulatory capture to drown small competitors with red tape.
I'm only arguing in favour of truly informed consent, which I believe is tricky for driver assistance technology being provided to arbitrary customers.
So your main problem is the disclaimer and that it's called alpha. I provided a source that rates it the best product among all other competitors, with the highest score for keeping the driver engaged. And they have the most miles of any other lane assist technology. So I think it's safe. I think the "alpha" is more tongue-in-cheek and is not a term that really means anything apart from, as you say, a wink and a nod.
For the layman user, they won't read the disclaimer or understand what alpha means or even know that it is "alpha". I'm an engineer and I probably won't ever really audit the code. I will do my research like most other people: read online reviews or testimonials like Consumer Reports.
So are you against all lane assist technology? How about auto-braking? Anti-lock brakes?
I'm against just heaving that technology out over the fence into the hands of consumers and leaving it up to consumer reports and/or individual consumers to decide if it's safe enough.
Safety is a "picking up nickels off of railway tracks" problem. A thing might work 10,000 times in a row, but then suddenly, catastrophically fail because something is different that hadn't been tested before, like dealing with a woman walking her bike across a multi-lane road.
This is not a good scenario to leave up to consumers to decide whether a thing is safe. Not even with Consumer Reports to help out with testing.
Now as to ABS, the comparison is not even close. I do not buy ABS by purchasing brakes and then flashing some ROM with code I download from the internet. ABS is covered by all sorts of regulatory frameworks around the world; it isn't simply cooked up and offered for download like it's an MP3 player skin.
Even though it's a much more mature technology, the problem with ABS is again, consumers cannot give informed consent to a disclaimer when purchasing it from some random person.
When I buy it as part of an automobile from a manufacturer that complies (I'm looking at you, VW) with regulations, I'm consenting to trusting something in a completely different way than when I download code and there's an MIT license or whatever weasel-wording somebody employs to say, "If you die, sucks to be you. If you kill someone, it's your soul that will be in torment."
Your equivocation of 1. downloading code for a safety feature from the internet that's marked "alpha" and has been tested according to whatever the author feels like testing because it's not offered as a "product," with 2. purchasing an automobile that has ABS brakes which are tested and maintained within a global safety regulatory framework...
You're entitled to whatever worldview you like, but on this point I believe our discussion ends. There is a fundamental axiomatic belief I hold that is not compatible with a fundamental axiomatic belief you hold.
I don't want to spend all day trying to explain why I believe Volvo selling a three-point harness is not the same as some random person knitting a seat belt, selling it on Etsy, and leaving it up to you and me to read the consumer reviews to decide whether it's safe enough.
You believe the free market plus informed consumers will sort all this out. I do not.
Please don't disparage Ford engineers if you can't give any proof. There are hard-working, ethical people there who don't want to kill anyone by light-heartedly pushing stuff onto the road. Just because you don't know them does not mean they are not talented.
So no guarantees whatsoever then. Because you are always responsible and are always expected to recover from anything the autopilot might ever come up with.
Teslas do fail in deadly ways. Everyone that cares to look knows this. Yet Tesla is fine with it, even while knowing that humans can't reason about safety when the car drives perfectly the other 99% of times.
> Because you are always responsible and are always expected to recover from anything the autopilot might ever come up with.
That's always been the case for any driving assistance systems that automakers offer, AFAIK. Do you object to the state of driving assistance in general or just how Tesla implements it?
Well, driver assistance systems are mostly about assistance.
Tesla, meanwhile, allows for and gives the impression of doing more than that, when it can't. You are expected to react within a split second at any time. Actually, just driving the car is a far simpler task than supervising someone else who has unintuitive blind spots.
Something Google discovered early on, and anyone who thinks about it realizes: a car that mostly drives itself is far more dangerous than a car without any assistance at all.
"No ADAS system currently on the market has safety guarantees on perception or planning algorithms.
So, what must be guaranteed is the ability of the driver to easily regain full control of the vehicle at any time. In openpilot, this is done through the satisfaction of the 2 main safety principles that a Level 2 driver assistance system must have:
1. The driver must always be capable to immediately re-take manual control of the vehicle, by stepping on either pedal or by pressing the cancel button;
2. The vehicle must not alter its trajectory too quickly for the driver to safely react. This means that while the system is engaged, the actuators are constrained to operate within reasonable limits."
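Principle 2 amounts to a magnitude-and-rate limiter on actuator commands. A toy sketch of the idea (every constant and name here is hypothetical, not openpilot's actual limits): the commanded torque is clamped in absolute value, and its per-tick change is clamped so a faulty planner can only ramp the wheel in slowly.

```python
# Hypothetical limiter in the spirit of principle 2: clamp both the
# absolute steering torque and how fast it may change per control tick,
# so the system can never jerk the wheel faster than a driver can react.

MAX_TORQUE = 150      # hypothetical absolute limit (vendor units)
MAX_DELTA_UP = 3      # max increase toward more torque per 10 ms tick
MAX_DELTA_DOWN = 7    # torque may be released faster than it is applied

def limit_torque(desired: int, last: int) -> int:
    """Return the torque command actually sent to the steering actuator."""
    desired = max(-MAX_TORQUE, min(MAX_TORQUE, desired))
    if desired > last:
        return min(desired, last + MAX_DELTA_UP)
    return max(desired, last - MAX_DELTA_DOWN)

# A sudden spike in requested torque only ramps in gradually:
cmd = 0
for want in [500, 500, 500]:
    cmd = limit_torque(want, cmd)
print(cmd)  # 9: three ticks at +3 per tick, still far below MAX_TORQUE
```

With limits like these, the worst a misbehaving planner can do per tick is bounded, which is what gives the driver time to feel the wheel move and override it.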
They don’t guarantee it they just provide a disclaimer. There’s plenty of driver monitoring solutions they could provide but don’t. Combine that with deeply unethical promises of full self driving coming just around the corner, feature complete by 2020, fully autonomous road trip by 2017, and you’re left with a dangerous product and customers that overestimate its abilities.
I'm pretty sure the disclaimer does nothing. If they put out a product they need to expect people will use it, and they need to take actions to keep those people safe.
...and woe betide you if you try to get your insurance company to pay up, ever again. "Aftermarket? Cool, a get-out-of-reimbursement-free card, have a nice day!"
Sure, I'm all for that. OTOH, my experience with insurance companies is that they'll spend 10x the money and effort on denying payment, and the contracts are heavily weighted in their favor.
I think it's just them trying to fend off people who are looking for any excuse to sue companies. Tesla gets these lawsuits all the time, but they have a bunch of lawyers to deal with it.
The whole point is that they're selling you hardware only (a modified Android device), and it's legal if you yourself modify your own car or something. You have to manually install the software and physically mount the device after buying it.
They are selling this product as a dashcam for obvious reasons. The autopilot feature is an experimental feature that you have to enable yourself, at your own risk.
It's 1/10th of the price, so is it twice as good, then? Just kidding; my point is that Tesla is selling it on about the same terms. Beta software for extra money, no guarantees.
Not sure that is true, you are comparing apples to oranges. Beta on Tesla is for FSD.
Autopilot = lane keep on the highway; it's as mature as lane keep on a Toyota or Honda. It's also included at no additional cost in every Tesla, so it's actually 100% cheaper.
Most software developers have operated in largely unregulated domains, so there's a misunderstanding of how manufacturer responsibility works in industries like automotive. Saying "I AM NOT RESPONSIBLE FOR ANYTHING" in the automotive software space is the product liability equivalent of Michael Scott screaming "I DECLARE BANKRUPTCY" in The Office.
Well, their legal technique is a little more nuanced.
They are selling a product which is a legal and legitimate driver assistance tool which does not have autopilot.
You, as a user, can then modify the device by flashing unregulated code onto it to add autopilot functionality, which is not advised by comma.ai *wink wink*
On public roads. We don't know how many they've done if you include closed track testing, but I'm willing to bet it's at least as many as they've done on public roads.
The real point is that Google isn't selling their system to consumers.
I remember the founder interviewed me to be CEO, back when he got the investment and publicly insulted Papa Elon for kudos and badassness. The guy was a jerk on the phone, and 10 minutes in I told him to piss off and thought to myself, "wow, who would work with this guy?" He's been at it since 2016, so I'm glad it didn't flop, but it looks like he ate his words to Papa Musk.
it's pretty clearly being sold as a devkit. Would you buy a PS5 devkit and expect it to be exactly the same as the retail PS5? I don't understand the issue here.
—
“Any user of this software shall indemnify and hold harmless comma.ai, Inc. and its directors, officers, employees, agents, stockholders, affiliates, subcontractors and customers from and against all allegations, claims, actions, suits, demands, damages, liabilities, obligations, losses, settlements, judgments, costs and expenses (including without limitation attorneys’ fees and costs) which arise out of, relate to or result from any use of this software by user. THIS IS ALPHA QUALITY SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT. YOU ARE RESPONSIBLE FOR COMPLYING WITH LOCAL LAWS AND REGULATIONS. NO WARRANTY EXPRESSED OR IMPLIED.”
SOURCE: https://github.com/commaai/openpilot/blob/devel/README.md#su...
—
EDIT: Here are the terms of use too, which appear to align with the legal clause above:
https://my.comma.ai/terms