
The problem with almost all current AI research is that nobody knows whether your model is detecting the obstacle or the absence of a road.

The cliché example is the attempted tank classifier: since the pictures of the different tank types were taken on different days, the AI learned to detect the weather conditions instead.

So to determine whether your AI is detecting the road's absence, you would first need a lot of test images that look realistic in general but have the road absent. As far as I know, no such dataset exists publicly yet.
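Lacking such a dataset, you can at least probe the model directly. A minimal sketch of that idea (assuming a trained model and per-image road masks, both hypothetical names here): blank out the road pixels and check whether the "road is clear" score actually drops.

    import numpy as np

    def probe_road_reliance(model, images, road_masks):
        """Compare predictions on originals vs. copies with the road blanked out.

        model      -- hypothetical callable: model(batch) -> P(road is clear) per image
        images     -- float array of shape (n, H, W, 3)
        road_masks -- boolean array of shape (n, H, W), True on road pixels
        """
        edited = images.copy()
        edited[road_masks] = 0.0  # crude "absent road"; inpainting would look more realistic
        # If the model truly looks at the road, its score should collapse on the
        # edited images; if it latched onto the background, it will barely move.
        return model(images) - model(edited)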

So yes, that commenter's approach seems new to me.



The computer could draw what it sees onto the windshield for the driver, tracking the position of the driver's head to position the projected image correctly: e.g. draw a red outline around obstacles, a green outline around clear road, predicted trajectories in orange, and the selected path in blue. The human would then see what the computer sees and be able to react quickly.
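The drawing part of that is a few lines of OpenCV. A rough sketch (head tracking and windshield projection left out; all input names are hypothetical):

    import cv2

    # Colour scheme from the comment, in BGR (OpenCV's channel order)
    RED, GREEN, ORANGE, BLUE = (0, 0, 255), (0, 255, 0), (0, 165, 255), (255, 0, 0)

    def draw_overlay(frame, obstacles, clear_road, predicted_paths, selected_path):
        """Draw perception output onto a camera frame.

        obstacles / clear_road          -- lists of contours (N x 1 x 2 int arrays)
        predicted_paths / selected_path -- polylines as (N x 1 x 2) int arrays
        """
        cv2.drawContours(frame, obstacles, -1, RED, 2)
        cv2.drawContours(frame, clear_road, -1, GREEN, 2)
        for path in predicted_paths:
            cv2.polylines(frame, [path], False, ORANGE, 2)
        cv2.polylines(frame, [selected_path], False, BLUE, 3)
        return frame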


Do you have a link to that tank/weather study? I've never heard of it, sounds really interesting!


It may or may not have happened with tanks; it sure happened with horses:

To understand how their AI reached decisions, Müller and his team developed an inspection program known as Layerwise Relevance Propagation, or LRP. It can take an AI’s decision and work backwards through the program’s neural network to reveal how a decision was made.

In a simple test, Müller’s team used LRP to work out how two top-performing AIs recognised horses in a vast library of images used by computer vision scientists. While one AI focused rightly on the animal’s features, the other based its decision wholly on a bunch of pixels at the bottom left corner of each horse image. The pixels turned out to contain a copyright tag for the horse pictures. The AI worked perfectly for entirely spurious reasons. “This is why opening the black box is important,” says Müller. “We have to make sure we get the right answers for the right reasons.”

https://www.theguardian.com/science/2017/nov/05/computer-say...
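For flavour, here's roughly what that backward redistribution looks like for a plain ReLU network: a toy NumPy sketch of the LRP-epsilon rule (an illustration of the idea, not Müller's actual code; biases are ignored in the redistribution for brevity).

    import numpy as np

    def forward(x, weights, biases):
        """Run a ReLU MLP, keeping every layer's activations for the backward pass."""
        activations = [x]
        for i, (W, b) in enumerate(zip(weights, biases)):
            z = activations[-1] @ W + b
            # ReLU on hidden layers, linear logits on the output layer
            activations.append(np.maximum(z, 0.0) if i < len(weights) - 1 else z)
        return activations

    def lrp(activations, weights, target, eps=1e-9):
        """LRP-epsilon: redistribute the target logit back onto the input features."""
        R = np.zeros_like(activations[-1])
        R[target] = activations[-1][target]  # all relevance starts on the chosen output
        for W, a in zip(reversed(weights), reversed(activations[:-1])):
            z = a @ W + eps       # each upper neuron's total input
            s = R / z             # relevance per unit of contribution
            R = a * (W @ s)       # share it out over the lower layer's activations
        return R                  # one relevance score per input feature

    # Toy usage: which inputs did the "decision" rely on?
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 3))]
    biases = [np.zeros(16), np.zeros(3)]
    acts = forward(rng.normal(size=8), weights, biases)
    print(lrp(acts, weights, target=int(np.argmax(acts[-1]))))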

There is, in general, a great deal of work on explaining the decisions of neural nets. Explainable AI is a thing, with much funding and research activity, and there are books, papers, etc., e.g. https://link.springer.com/book/10.1007/978-3-030-28954-6.

And all this is because, quite regardless of whether that tank story is real or not, figuring out what a neural network has actually learned is very, very difficult.

One might even say that it is completely, er, irrelevant, whether the tank story really happened or not, because it certainly captures the reality of working with neural networks very precisely.


And as a less dataset-quirk-specific example of AI fixating on backgrounds, here's Microsoft Azure dreaming of electric sheep: https://aiweirdness.com/post/171451900302/do-neural-nets-dre...


Nice, thanks :)

Incidentally, (human) kids should never be allowed to hug sheep or goats like that. They can easily catch something nasty (enterotoxic E. coli, mostly). See e.g.:

https://www.bbc.co.uk/news/uk-england-lancashire-35039878

Juliette Martin, of Clitheroe, took her daughter Annabelle, 7, to the 'Lambing Live' event at Easter last year.

The youngster, who had bottle-fed a lamb, suffered kidney failure and needed three operations, three blood transfusions and 11 days of dialysis.


What wonderful examples, thank you for sharing :)


It doesn't seem to have happened. Gwern has done some extensive research: https://www.gwern.net/Tanks


As I say above, even if the tank story is apocryphal, it captures the tendency of neural nets (modern or ancient, it doesn't matter) to overfit to irrelevant details (which, btw, is exactly what Layerwise Relevance Propagation from my comment above tries to detect).
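The failure mode is trivially easy to reproduce in miniature. A toy sketch (synthetic data, scikit-learn for brevity): feed a classifier pure-noise "images" plus one watermark pixel that tracks the label, and it scores perfectly for entirely spurious reasons.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # "Images" of pure noise: the actual content carries zero class signal
    X = rng.normal(size=(200, 64))
    y = rng.integers(0, 2, size=200)
    # One spurious "copyright tag" pixel that perfectly tracks the label
    X[:, 0] = y

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.score(X, y))                  # ~1.0 accuracy, for entirely spurious reasons
    print(int(np.abs(clf.coef_).argmax()))  # the heaviest weight sits on pixel 0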

This is probably the reason why this story has been repeated so many times (and with so many variations): because it rings true to anyone who has ever trained a neural net, or interacted with one for any significant amount of time. Unfortunately, the article you cite chooses to suggest otherwise.

In any case, if the tank story is an urban legend it has its roots firmly in reality.


I understand that the scenario rings true, but Plyphon_ specifically asked for "a link to that tank/weather study", so the fact that it doesn't seem to exist is of primary importance.


From the tone of Plyphon_'s comment it seems to me pretty obvious that they had at least read the article you link above and the point of the comment was not to actually get a source for OP's comment, but to score some kind of internet burn points. Judging from the earlier greying-out of Plyphon_'s comment I'm not the only person who thought so. I don't think that kind of comment should be validated with an actual response. It is borderline snark and certainly does not contribute anything to the conversation.

I'm happy to accept my mistake if I have misunderstood Plyphon_'s comment.



