Hacker News

If DALL-E had a choice to output "Command not understood", maybe we wouldn't be discussing this.

Like those AIs that guess what you draw [1], recognizing random doodling as "clouds", DALL-E is probably taking the least unlikely route. That a gibberish word is drawn as a bird may be because the model's best guesses were "bird (2%), goat (1%), radish (1%)".

1. https://quickdraw.withgoogle.com



That's extremely optimistic. When faced with gibberish, the "confidences" are routinely 90%+, just as with "meaningful" input.
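That high confidence on nonsense is partly baked into the math: a softmax layer turns *any* logits into a distribution that sums to 1, so the model always names some class, and one class often dominates even for noise. A minimal sketch (the logit values here are illustrative assumptions, not from any real model):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Arbitrary logits, e.g. from feeding random noise to a classifier.
# The output is still a valid probability distribution, and the top
# class can look very "confident" despite meaningless input.
probs = softmax([4.0, 1.0, 0.5])
print(probs)       # one class dominates (> 0.9 here)
print(sum(probs))  # always 1.0, up to float rounding
```

Nothing in this construction distinguishes in-distribution input from gibberish; the normalization guarantees a confident-looking answer either way.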

It's almost as if it's an illusion designed to fool us, the users: by only providing inputs meaningful to us, we come to the foolish idea that it understands those inputs.


This is a good point. The fact that DALL-E will try to render something, no matter how meaningless the input, is a trait it has in common with many neural networks. If you want to use them for actual work, they should be able to fail rather than freestyle.
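One common way to let a classifier "fail rather than freestyle" is a rejection threshold on the top class probability. A minimal sketch, where the threshold, labels, and probability values are illustrative assumptions (and, as noted above, raw softmax confidences are often too high on gibberish for this alone to work well):

```python
def classify_with_reject(probs, labels, threshold=0.5):
    """Return the top label, or None ("command not understood")
    when the model is not confident enough in any single class."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None
    return labels[best]

labels = ["bird", "goat", "radish"]

# Gibberish-like case: all classes near zero, remainder spread
# over many other classes -> refuse to answer.
print(classify_with_reject([0.02, 0.01, 0.01], labels))  # None

# Clear case: one dominant class -> answer normally.
print(classify_with_reject([0.92, 0.05, 0.03], labels))  # "bird"
```

In practice the threshold would need calibrated probabilities (or a separate out-of-distribution detector) to be reliable, precisely because uncalibrated networks are confidently wrong on unfamiliar input.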



