I have the same overall reaction, but I suspect your calculator analogy may distract some people reading this. The difference is that one case is user input error, while the other is ChatGPT either misunderstanding what's being asked or lacking the training data and presenting an answer that's incorrect.
But yes, an eye roll from me as well. A few months back I heard the horror stories about how a bot answers with confidence, and now it's the main complaint in articles about why it's busted and dangerous. It doesn't bring anything new to the table and doesn't push the conversation forward in any way.
> misunderstanding what's being asked or just lacking training data and presenting an answer that's incorrect.
I suppose I just don't think we humans are so different. In fact, we often lack training data and certainly lack the ability to iterate quickly. In the case of the modern calculator, we have the benefit of all the training data necessary to design the system properly, but at its initial inception, not so much. As more "training data" (experience with circuit design and applied mathematics) accumulated, the calculator's output improved.
Maybe my expansion of the analogy is off or too esoteric.