Reading this article now, parts of it seem outdated and therefore quaint. The "we've all felt that moment of staring at a small bit of simple code that can't possibly be failing and yet it does" - I so rarely experience this anymore, as I'd have an LLM take a look, and they tend to find this sort of "stupid" bug very quickly. But my earlier days were full of these issues, so it's almost nostalgic for me now. The bugs that do crop up nowadays are far more insidious.
I’m not going to take bitter advice from someone who either hasn’t used them in a long time, or is terribly bad at using them. Especially as it seems like you hate them so much.
I don’t particularly like them or dislike them, they’re just tools. But saying they never work for bug fixing is just ridiculous. Feels more like you just wanted an excuse to get on your soapbox.
It's not that they can't fix bugs at all, but I find that if I've already attempted to debug something and hit a wall, they're rarely able to help further.
Just focusing on the outputs we can observe, LLMs clearly seem to be able to "think" correctly on some small problems that feel generalized from examples they've been trained on (as opposed to pure regurgitation).
Objecting to this on some kind of philosophical grounds of "being able to generalize from existing patterns isn't the same as thinking" feels like a distinction without a difference. If LLMs were better at solving complex problems I would absolutely describe what they're doing as "thinking". They just aren't, in practice.
> Just focusing on the outputs we can observe, LLMs clearly seem to be able to "think" correctly on some small problems that feel generalized from examples its been trained on (as opposed to pure regurgitation).
"Seem". "Feel". That's the anthropomorphisation at work again.
These chatbots are called Large Language Models for a reason. Language is mere text, not thought.
If their sellers could get away with calling them Large Thought Models, they would. They can't, because these chatbots do not think.
> "Seem". "Feel". That's the anthropomorphisation at work again.
Those are descriptions of my thoughts. So no, not anthropomorphisation, unless you think I'm a bot.
> These chatbots are called Large Language Models for a reason. Language is mere text, not thought. If their sellers could get away with calling them Large Thought Models, they would. They can't, because these chatbots do not think.
They use the term "thinking" all the time.
----
I'm more than willing to listen to an argument that what LLMs are doing should not be considered thought, but "it doesn't have 'thought' in the name" ain't it.
> Those are descriptions of my thoughts. So no, not anthropomorphisation
The result of anthropomorphisation, then. When we treat a machine as a machine, we have less need to understand it in terms of "seems" and "feels".
> They use the term "thinking" all the time.
I find they don't. E.g. ChatGPT:
Short answer? Not like you do.
Longer, honest version: I don’t think in the human sense—no consciousness, no inner voice, no feelings, no awareness. I don’t wake up with ideas or sit there wondering about stuff. What I do have is the ability to recognize patterns in language and use them to generate responses that look like thinking.