Hacker News

There isn't necessarily one in humans either, but why build machines that just perpetuate human flaws? Would we want calculators that miscalculate frequently, or cars that can't go faster than humans?


What exactly do you imagine the alternative is? Building generally intelligent machines without flaws? Where does that exist? In... ah, that's right. It doesn't, except in our fiction and our imaginations.

And it's not for a lack of trying. Logic-based approaches cannot even handle narrow intelligence tasks that involve parsing the real world (speech/image recognition, classification, detection, etc.). But those systems are flawed and mispredict, so why build them? Because they are immensely useful, flaws or no.


Why should there not be, for example, reasoning machines? Do we know there is no universal method for reasoning?

Having deeply flawed machines, in the sense that they regularly perform their tasks poorly, seems like an odd choice to pursue.


What is a reasoning machine, though? And why is there an assumption that one can exist without flaws? None of the natural examples work that way. How would you even navigate the real world without the flexibility to make mistakes? I'm not saying people shouldn't try, but you need to be practical. I'll take the general intelligence with flaws over the fictional one without, any day.

>Having deeply flawed machines, in the sense that they regularly perform their tasks poorly, seems like an odd choice to pursue.

State-of-the-art ANNs are mostly right, though. Even LLMs are mostly right; that's why hallucinations are so annoying.


That hasn't been my experience using LLMs. But that aside, a poorly performing general intelligence might just not be very valuable compared to a highly performing narrow, or even zero, intelligence.


Well, LLMs are very useful and valuable to me and many others today, so it's not really a hypothetical future. I'm very glad they exist, and there's no narrow intelligence available that is a sufficient substitute.


Not disputing that, but I still think that, as far as reasoning or thinking machines are concerned, it's a dead end.


I see. Well, as far as I'm concerned, they already reason by the standards we apply to ourselves.

People do seem to have higher standards for machines, but you can't eat your cake and have it too. You can't call what you do reasoning, then turn around and call the same thing something else because of preconceived notions of what "true" reasoning should be.


Suppose there were a system that only told the truth. Then that system would still seem to lie, because, for any sufficiently complex system, there are true statements that cannot be justified within it.

That is to say, to the best of our knowledge, humans have no purely logical way of knowing truth either. Human truth seems intrinsically connected to humanity and lived experience, with logic being a minor offshoot.
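(For reference, the claim about unjustifiable truths echoes Gödel's first incompleteness theorem. A standard informal statement, in logic notation; the symbols $T$ and $G_T$ are just conventional names here:

```latex
% For any consistent, effectively axiomatizable theory T that
% interprets basic arithmetic, there is a Gödel sentence G_T with:
T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T,
% yet G_T holds in the standard model of arithmetic:
\mathbb{N} \models G_T
```

So a machine restricted to asserting only what it can prove would stay silent on $G_T$, even though $G_T$ is true.)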



