I feel like this article makes the same tired point I see every time a new technology comes along: "but if we don't know how to shoe our own horses any more because we got cars, soon nobody will know how to shoe horses!"
Yeah. And that's OK. Because nobody will need to shoe horses any more!
If I forget how to write tests, what's the problem? It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
That's how atrophy works: the skills that atrophy are, by definition, the ones you no longer need at all, and this argument does this sleight of hand where it goes "don't let the skills you don't need atrophy, because you need them!".
> I feel like this article makes the same tired point I see every time a new technology comes
I sympathize with this viewpoint, but I do think it’s important to recognize the differences here. One thing I’ve noticed from the vibe-code coalition is a push towards offloading _cognition_. I think this is a novel distinction from industrial innovations that more or less optimize manual labor.
You could argue that moving from assembly to python is a form of cognition offloading, but it’s not quite the same in my eyes. You are still actively engaged in thinking for extended periods of time.
With agentic code bots, active thinking just isn’t the vibe. I’m not so sure that style of half-engaged work has positive outcomes for mental health and personal development (if that’s one of your goals).
The big problem is that LLMs don't just replace "shoeing your horse" or some other single task.
If you let them, they can replace every critical thought or mental effort you throw at them.
Often in a "good enough" (or convincing enough) way.
This is especially bad for learners, because they will never learn to develop a proper thought process on their own.
How would they even be able to check the output?
We are basically training prompt engineers with no post-validation now.
You are both right. You won't need the skills. The problem is that you will want to eat and have housing.
Maybe we get UBI. But if Jeff Bezos and a few friends own all of the means of production, what would he do with your UBI dollars? Where could he spend them? He can make his own yachts and soldiers.
> "but if we don't know how to shoe our own horses any more because we got cars, soon nobody will know how to shoe horses!"
No, this would be more akin to saying, "if we don't know how to change our car's oil anymore because we have a robot that does it, soon nobody will know, while still being reliant on our cars."
For your analogy to work, we would have to be moving away from code entirely, as we moved away from horses.
> It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
Except that once you forget, you would now have to re-learn it, and that includes potentially re-learning all the pitfalls and edge cases that aren't part of standard training manuals. And you won't be able to ask someone else, because now nobody else knows either.
tl;dr coding is a key job function of software developers. Not knowing how to do any key part of your job without relying on an intermediary tool is a very bad thing. This already happens too much, and AI is just firing the trend into the stratosphere.
> we don't know how to change our car's oil anymore because we have a robot that does it
OK, are we worried that all the robots will somehow disappear? Why would I have to change my own oil, ever, if the robot did it as well as I did? If it doesn't do it as well as I did, I'm still doing it myself.
> OK, are we worried that all the robots will somehow disappear?
No, you should be worried that you (or devs who come later, who never had to 'change the oil'/ write tests themselves) won't know if the robot has done it right or not, because if you can't do the work, you can't validate the work.
And the robot isn't an employee, it's just a tool you the employee use, so when you are asked whether the test was coded correctly, all you'd be able to say with your 'atrophied' test-writing skills is, "I think so, the AI did it". See the issue now?
> If it doesn't do it as well as I did, I'm still doing it myself.
I thought your unnecessary skill atrophied and was forgotten? How are you going to do it yourself? How do you know you're still as good at it as you once were?
No. I use a calculator, and I don't have to second-guess it. It just works. If it didn't work reliably, I wouldn't use it.
> I thought your unnecessary skill atrophied and was forgotten?
Again, either I didn't need to do it because the AI did it well, and I forgot the skill, or it never did it well, I always did it myself, and I never forgot it. I don't understand why you're assuming that the AI will do it well enough at first that I'll forget how it's done, and that it will then somehow get bad at it so I'll have to start doing it myself again.
A pro baseball player can tell if someone is throwing a baseball well by watching them do it. Baseball training camps like pitching workshops often bring in pro players to coach new players on technical points of pitching.
If those pro players go 10 years without ever pitching a ball, they'll still know all the rote technical details on an academic/ theoretical level, but their actual pitching ability will have diminished, because skills are perishable. Coding is a skill.
Observing someone else code is not practice for coding yourself, and will not maintain (nevermind improve on) your coding skills.
How many times per year do you hear people saying something so obviously flawed that you first wonder how they feed themselves, only to realize with horror that no one else seems to notice, and instead everyone is repeating and discussing the idea as if it were some grand insight they'd just never thought about that way?
Critical thinking skills are incredibly useful, and if they're never taught, or are allowed to atrophy, you get, well, this... <gesturing dejectedly around with my cane>
People who don't understand basic science, arithmetic, and reading comprehension at what was once an 8th-grade level can be easily tricked in everyday situations on matters that harm themselves, their families, and their communities. If we dig deeply enough, I bet we could find some examples on this theme in modern times.
Had to drop in here to call you out for completely missing the point of the article.
The whole point is that these LLMs build up a dependence, because your critical thinking skills need not apply when a data center can average together a passable answer for you.
Again, as I've said many times in the comments here, if you need something, you'll keep it. If you don't keep it, it means you don't need it. If your critical skills atrophy, it's because you never need to use them, and if you never need to use them, what's the problem?
I agree with this; it reminds me of how most people don't need to write assembly anymore, but it still helps with certain projects to have that understanding of what's going on.
So some people do develop that deeper understanding, when it's helpful, and they go on to build great things. I don't see why this will be different with AI. Some will rely too much on AI and possibly create slop, and others will learn more deeply and get better results. This is not a new phenomenon.
Indeed it's not a new phenomenon, so why are we fretting about it? The people who were going to understand (assembly|any code) will understand it, and go on to build great things, and everyone else will do what we've always done.
> Yeah. And that's OK. Because nobody will need to shoe horses any more!
> If I forget how to write tests, what's the problem? It means that I never need to write tests any more. If I do need to write tests, for some reason (maybe because the LLM is bad at it) then I won't forget how to!
> That's how atrophy works: the skills that atrophy are, by definition, the ones you no longer need at all, and this argument does this sleight of hand where it goes "don't let the skills you don't need atrophy, because you need them!".
Well, do I need them, or not?