How ChatGPT made me lazy (newbeelearn.com)
35 points by pdyc on Feb 20, 2024 | 59 comments


I'm an experienced programmer currently mentoring a friend who wants to branch out into software development.

We agreed the following experiment would be interesting to both of us: We're sharing a ChatGPT Plus subscription, and I'm allowed to read the conversations he has with the model related to his learning projects. He's using it for general tech questions, but also for code analysis and code generation, bug-finding and so on.

It's been a mixed bag. On some level, his progress is faster and his productivity is higher than it would have been without the AI assistance. OTOH, the cost of this progress not having been hard-earned is pretty high, too: He takes a lot of AI-generated boilerplate for granted now without understanding it or the concepts behind it, so when the AI gets it wrong or forgets it, he is unable to notice what's missing. He also gets stumped/stuck often where he shouldn't - technically he's aware of all the constituent parts of the solution he needs, but he can't integrate that knowledge. Often he doesn't even try and just heads to ChatGPT, which frequently can't help him because he doesn't know how to phrase the question correctly.

There's a lot of value in having done the legwork and having fought for every line of code and every little bit of a solution - and that gets skipped over in this style of skill acquisition.

Edit: A few more details in later comment.


This reminds me a bit of some of the problem-solving experiments done comparing pet dogs to undomesticated wolves. The experiment wasn't overly complicated: a piece of meat was placed in a locked cage and the animals were given free rein to try their paws at getting to the food. The wolves were persistent; some employed tools like sticks to push the food out, and some managed to navigate the clasping mechanism on the cage. The domesticated dogs more often than not quickly gave up and looked to the human present for help.


Think you're referring to Frank & Frank 1985. It's pretty neat in terms of the tests and responses. Lots of springs, levers, and other puzzle boxes with the wolves almost always eventually solving the puzzles unless the wolf was nervous or agitated by the situation for some reason.

"The wolves generally attacked each puzzle immediately upon release from the start box and persisted until either the problem was solved or time had run out. In contrast, the malamutes investigated puzzle boxes only until they discovered that the food was not easily accessible, after which they typically returned to the start box and performed a variety of solicitation and begging gestures toward Experimenter 1."

There are a couple of others that have followed on that are kind of neat too, and related. Marshall-Pescini et al. looked at wolves' and dogs' ability to play shell games, recognize hidden food choices, and reason about whether risky choices that don't pay out are better than a less-preferred food pellet. Part of the result from that test was that dogs may just not care as much as wolves do: wolves, with their diet and carnivorous nature, have a much stronger preference toward what researchers believe is the preferable choice. Yet from the wild-dog perspective, they took a long time to show any preference in testing for meat vs. pellets, compared to the wolves' immediate preference. [2]

Which is actually vaguely related to the topic article. You get a mediocre food pellet, but it solves the task, so you don't really care that much and move on. The "quality" of the food pellet in modern human existence has limited bearing.

[1] Frank & Frank 1985, "Comparative Manipulation Test Performance in Ten Week Old Wolves and Malamutes", https://www.researchgate.net/profile/Harry-Frank-2/publicati...

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4993792/


Some of the dog breeds, most notably French Bulldogs, have been specifically bred for qualities that enhance their interaction and dependency on humans (like having a unique set of facial features and expressions that many people find irresistibly cute and compelling).


> bred for qualities that enhance their interaction and dependency on humans

That's basically the whole idea behind dogs.


Sounds like dogs found another way to solve their problems, i.e. get humans to do it for them. It's a pretty good solution as long as there is a sympathetic human around. Of course, they would be SOL if they ever found themselves alone out in the wild.


I'm not familiar with the study, I wonder if they also did the test with no human around. Anecdotally, my dogs can be pretty lazy about getting into things when I'm around, but seem to get creative when I'm out.


> OTOH, the cost to this progress not having been earned is pretty high, too: He takes a lot of AI-generated boilerplate for granted now without understanding it or the concepts behind it, so when the AI gets it wrong or forgets it, he is unable to notice what's missing.

How materially different is this from "copy-pasted boilerplate from an example on a website that isn't fully understood"?

I've personally found one of the biggest advantages for learning a new stack with ChatGPT is being able to say "hey, how do I modify this boilerplate for [specific piece of functionality]" or "hey, I have this code and I'm getting this error, what should I try" vs just trying to find websites with other examples of slightly-different boilerplate or trying to start from square 1 (which would often mean dedicating days or weeks to less-immediately-relevant tutorial foundation projects).


> How materially different is this from "copy-pasted boilerplate from an example on a website that isn't fully understood"?

When finding and copying, you have to employ at least some degree of critical thinking. The result is rarely the first on the search results, and usually cannot be used without some adaptation. Generated solutions usually require less adaptation.


> When finding and copying, you have to employ at least some degree of critical thinking.

You and I have worked with vastly different frontend "engineers" in that case, especially in web agency shops where "faster implemented === better" in most cases.


Haha. But taking the statement seriously: Do you think the engineers in question employ more of their critical thinking capacity when using code from a generator than when adapting existing snippets?


>> He takes a lot of AI-generated boilerplate for granted now without understanding it or the concepts behind it

I have seen this too in junior programmers. It is indeed bad, but then I think of myself without internet: in some ways I too become handicapped like them, unable to make meaningful progress without access to documentation, so it's a mixed bag. Good programmers will learn to identify gaps and fill them so they can better utilize the tools; others will be left behind.

For senior programmers it's an absolute productivity boost.


Yeah, agreed - if you know what you want, and can write the reqs to coax it out of the model, it's a great typing accelerator.


> it's a great typing accelerator

It's great at this, and I use it like this a lot.

Like, I know how to write the code and exactly how I want it to be, but I can describe it in words and let GPT-4 write it out the way I want faster than I could type it out myself. Even in cases where the code isn't boilerplate.

Sometimes the system prompts need a bit of tuning, but time spent on that tends to even out after some usage.
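To make that concrete, here's a minimal sketch of that setup with the OpenAI Python client; the model name and the system prompt wording are just placeholders for whatever you end up tuning:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # This is the part that "needs a bit of tuning": the system prompt encodes
    # your style preferences once so you don't repeat them in every request.
    SYSTEM_PROMPT = (
        "You are a code generator. Output only Python, no prose. "
        "Follow PEP 8, use type hints, and prefer the standard library."
    )

    def generate(description: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": description},
            ],
        )
        return response.choices[0].message.content

    # e.g. generate("a function that groups a list of dicts by their 'user_id' key")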


That's really interesting.

You guys should write an article about it covering both perspectives.

Recently my ex-employer said he fired a junior dev because he was relying too much on paid ChatGPT, without understanding the concepts.


This is exactly why using GPT to supplement but not supplant writing code is very good. Code generation is generally more helpful as you go to higher layers of abstraction as long as you can stay at that layer of abstraction. Once you need to jump a level lower because your abstraction fails in a specific context you need pre-existing knowledge of how the abstraction works in order to ask GPT the right questions.

Hence, depending on the domain knowledge of the piece of code you are dealing with, GPT can be very helpful in generating good scaffolding to get off the ground quickly (e.g. you want to write a web app but don't want to deal with having to learn how to write a whole React app, etc.), but asking GPT to... write an optimizer for a C runtime would end up with poor results, as that is heavily bent on the specifics of the task, where a specialist's knowledge would outweigh any advantage the abstraction provides.

One very useful experiment I did early on was to try to solve a problem with GPT in a domain where I had deep expertise and see where the cracks are, versus one where I had very little. This led me to make my abstraction-based statement above, and so far I've seen it remain true with every successive version.


How do you know he’s progressing faster than otherwise? At least, for me, the yardsticks I have for how long it takes to learn to code are: (my own) self-learning experience, (my own) experience learning in the classroom, and then working with new students. If I mentored somebody one-on-one I guess I’d probably be surprised at how fast they learned.


I have some prior experience with mentoring beginners and juniors (I've done Google Summer of Code mentoring four times, participate in KDE's own mentor program, same at $dayjob, etc.). Of course not all students are created equal in the first place, so you have to trust my gut a bit in terms of compensating for that.

The velocity of task completion on tasks that are within reach of what he can figure out by "pair-programming" with the AI is very high. However, the failure modes are devastating - when he gets stuck, he gets stuck completely, with no idea what to do next. And ChatGPT can't assist with the overall development plan, or at least it's a lot harder to ask it about that. Some questions are difficult to ask without the hindsight afforded by experience.

With earlier students, pre-AI, the work got done more slowly but afforded many more little mental on-ramps for "what to do next", or at least ideas. Partly because the ability to read and browse code gets trained much more if you have to piece your solutions together from reference code and docs, vs. getting code handed to you by gen AI. If you can read/navigate a codebase more effectively, you are also more likely to be able to generate ideas about what to touch next and why. Partly also because your muscle for trying things out and experimenting gets trained more if that's your only choice.

In sum, as a mentor, when a student gets stuck I usually have more to work with in the dialog that follows. Ideas to interrogate, experiments to brainstorm, assumptions to challenge. With the ChatGPT-assisted student, almost nada - I've "caught" him (this is of course perfectly fair under our agreement) leaning on ChatGPT to even have the convo with me, handing my messages/questions to the model and coming back with what it generated, asking me whether ChatGPT got it right or not. I wind up being the second opinion that corrects/checks the AI, not the student, who is mentally fairly disengaged from the process by that point.

What I'm getting out of this experiment is an idea of what kind of guidance I will need to give future mentees on how to use the AI tools appropriately for their own development.


That’s really interesting. You seem to be in a good position to make some useful observations.

My last semester working with students was a year ago, and we were aware that they were going to ChatGPT for things, but not really sure how to deal with it. It seems obvious that in the future these tools will play a part, but of course those of us who learned without them aren’t in a particularly good position to teach how to use them or to structure things around them. It is a temporary problem but a pretty big one, IMO.

I wonder if a school-sponsored GPT with monitoring from the teaching assistants could be part of the puzzle; it seems really neat: it sets the expectation more realistically (some AI tools will be used whatever the policy is, may as well be ours), and gives the teaching staff some insight into how the students are using it and what they are struggling with. Although, it would have to be a pretty state of the art model, you’d want the students to prefer it to their own… also, setting the expectations correctly (it isn’t authoritative, it is on you to double check it—awkward, for a school-provided tool).

Anyway, hopefully there are more folks out there like you, actively experimenting with this stuff.


Seems obvious to me that using generative ai to learn coding would be akin to going to the gym and using hydraulic machinery to lift the weights. You get it done, but get no benefits out of it.


I was reading over a student's code and he couldn't tell me his own variable names, couldn't even find one when I referred to it conceptually ("Where is the variable that holds your cache?".... no answer at all, not even a guess. Just, "I don't know.")

Having used GPT all year myself, I quit using it to generate new code for me for the most part. Back to StackOverflow, books, and of course reading boring documentation/manuals. I'm not closed off to the idea, just that I'm very worried it could atrophy certain skills.


I've done something similar lately, partly because I don't always have internet, but also because I started to see the benefits of using manuals and conjuring up things on my own.

It definitely feels harder to do, and doing small tweaks here and there is frustrating as you have to really understand everything. I however find it quite rewarding because I feel I become much more capable as a programmer, and I rely less on third party dependencies for everything. I come up with more original solutions and become more able to combine tools to solve different problems.


That's not surprising.

What would you expect from pupils if the teacher gave them all solutions when they ask?


I don't think "lazy" is the word I would use, but I can see how some would see it like that. In my mind, GPT (and LLMs) are just the next layer of abstraction on top of an already massive stack of technical abstractions. For a SWE, this stack of abstractions crosses so many levels for a simple "hello world" to work:

1. Software (my code)

2. OS (Linux, kernels, etc.)

3. Hardware (3090, 5090x, etc.)

4. Electrical (Where is my energy coming from? How is it produced?)

Each of these levels could be broken into another 10 abstractions. On the software level, some people may understand how their compiler works, but could they program in binary? What about understanding how their program interacts with memory? What about the kernel on the machine where their software is deployed in the cloud? Do they even know how their software is deployed in the cloud? Could they build the production server rack that their container runs on? Obviously this gets a bit ridiculous the further down you go - it's impossible to have knowledge about every part of what makes your code work.

I think that when people use terms like "lazy" or say that knowledge is being lost with abstractions like GPT, they ignore the massive list of abstractions that allow them to be productive.

I'd guess my thesis is that newer/GPT-aided engineers don't necessarily have less understanding; their knowledge might just be shifted one level up the abstraction stack.


It is impossible to explain just how much ChatGPT has given me confidence to branch out into unknown langs in our company! It has made me fairly lazy in terms of the language I know, and I automate just as much as I can, but it really (really) shines around things I absolutely have no idea about, and has provided so much value.


I think a lot about Simon Willison saying (somewhere...) that one of the major things AI unlocks is the ability "to be more ambitious with side projects". I can echo this 100%. Being able to go from 0 to _something_ pretty easily, even on a wild hair of an idea, is so empowering, and often the hurdle that feels most difficult to jump. Even if you don't continue, learning about some domain with a mostly-working example is incredibly powerful.


I have the same kind of thoughts often, but then I think to myself: is it a bad thing to be lazier if I get more things done? I don't think so. Overall it's been a net positive, and that's what matters to me at the end of the day.


I had a similar experience just today when trying to debug a script that serves as a connector between AWS Athena and our internal log querying platform. I got fed up with trying to understand a bunch of arcane logic and asked ChatGPT to write me a new one.

After a couple of back-and-forth rounds of copying and pasting error messages and sample data, I got the ChatGPT script working as a drop-in replacement. The new script is more readable, the logic is simpler, it took me less time to complete than either debugging the old script or writing a new one from scratch, and it was an overall more enjoyable experience.
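To give a sense of the plumbing involved (this is not the actual script; the region, database name, and S3 output bucket below are made up), the core of such an Athena connector is roughly:

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    def run_query(sql: str) -> list[dict]:
        qid = athena.start_query_execution(
            QueryString=sql,
            QueryExecutionContext={"Database": "logs"},
            ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
        )["QueryExecutionId"]

        # Athena is asynchronous: poll until the query reaches a terminal state.
        while True:
            status = athena.get_query_execution(QueryExecutionId=qid)
            state = status["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                break
            time.sleep(1)
        if state != "SUCCEEDED":
            raise RuntimeError(f"query {qid} ended in state {state}")

        # Flatten the first page of results into dicts keyed by column name.
        result = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]
        columns = [c["Name"] for c in result["ResultSetMetadata"]["ColumnInfo"]]
        return [
            dict(zip(columns, (f.get("VarCharValue") for f in row["Data"])))
            for row in result["Rows"][1:]  # row 0 repeats the column headers
        ]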

There is little doubt in my mind that in the not-so-distant future we will gawk at the thought that humans used to write production code by hand. Sure, the artisans and the enthusiasts among us will still be around to keep the flame, but day-to-day coding will be a mostly automated endeavor.


I predict that future problems which are not easily “solved”, or at least aided, by ChatGPT will have too high an “activation barrier” to tackle relative to problems that ChatGPT can help with.

The next generation of thinkers will be shallow and won’t be able to or won’t want to think hard about problems by themselves.


Laziness is something that should be earned.

When you’re starting out, you should be doing things the hard way on purpose. Learn things the hard way, don’t look at “Learn X in Y days” type tutorials. Use simple tools. Write code by hand.


ChatGPT is an excellent sparring partner, for new, experienced and senior/ninja-elite developers alike.

It does not, however, provide any solutions all by itself:

1. A significant amount of code it suggests uses external APIs that, while it would be nice if they existed, are purely imaginary.

2. Even when suggesting sensible code using existing APIs, it will happily provide coding snippets that have nothing in common, style-wise, with the code base you asked questions about, even if you provided sufficient context.

3. Some code will be, even if you push back, wholesale lifted from sources whose license you simply can't comply with.

4. Its answers to even the most basic coding questions, like "give me a C# function to fold SMTP headers according to the RFC", are flat-out wrong or, best-case, woefully inefficient.

So, whenever I use ChatGPT, it's entirely to see if there's a perspective that I missed. 80% of the cases, it's just babbling nonsense, and I happily disregard those results. The remaining 20% is quite valuable, though, even if separating the wheat from the chaff definitely involves my human judgement...
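For reference on point 4: the folding rule itself (RFC 5322, section 2.2.3) boils down to breaking long header lines at existing whitespace, with continuation lines starting with whitespace. A rough sketch in Python rather than C#, ignoring edge cases like unbreakable tokens and encoded words:

    def fold_header(name: str, value: str, limit: int = 78) -> str:
        # Fold at existing whitespace so no line exceeds `limit`; continuation
        # lines must begin with whitespace ("folding white space" in the RFC).
        lines, current = [], name + ":"
        for word in value.split():
            if len(current) + 1 + len(word) > limit:
                lines.append(current)
                current = " " + word
            else:
                current += " " + word
        lines.append(current)
        return "\r\n".join(lines)

    # e.g. fold_header("References", " ".join(40 * ["<msgid@example.com>"]))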


A bad painted blames his tools


whom does a good painter blame?


A good painter does not blame but reflects. That could be the whole sentence


Themselves.


Evidently we don't need ChatGPT to make us lazy. ChatGPT, could you correct the grammar in these posts?

    Sure, here are the corrected versions of the posts:
    
    ramon156 2 minutes ago [–]
    A bad painter blames his tools.
    
    pdyc 0 minutes ago | parent [–]
    Whom does a good painter blame?


The internet is a global communications platform. You may want to adjust your expectations re: immaculate English usage.


My expectations are normally on the floor, but when accusations of laziness start to fly I think it's 100% fair to note the wide discrepancy between the accusations on one hand and the effort on the other.


Sounds a bit like the complaints about digital calculators making people lazy instead of doing the calculations on paper or a slide-rule?


Strangely enough, people are lazy.

They fall for stupid claims which could be easily debunked with their smartphone.

Even simple calculations aren't done.

Many people forgot how to chew and only swallow.


If you use a calculator without knowing what addition or multiplication mean, then that's also a serious problem.


It sounds exactly like that. You are right.

Like teachers complaining about students using tables or slide rules.

Or Socrates complaining that writing makes people too lazy to remember facts.


Similar to what SO did, but even more completely. I don't even bother with SO anymore.


I've learned so many things on SO though. For example, some amazing answers there made me realize the power of sed and awk, and motivated me to learn them.

Ironically, I think the greatest quality of SO is exactly what most people complain about: that sometimes when you ask how to do X, they will tell you that X is a bad idea and you should probably do Y. I've learned many more efficient ways to do what I was trying to do, good security practices, and so on, because of this culture. Whereas if you ask ChatGPT how to do X, it'll happily tell you how to do X, even if X is a bad idea. (As a bonus, it might make something up if X is impossible to do.)

Besides, ChatGPT's answers are mediocre, by the definition of the word: dead average. You'll never get some guru-level insight from ChatGPT that you would sometimes get from a particularly exceptional answer in SO.

Note: I don't mean to say that SO is perfect, there's plenty of bad answers and it has other problems too. I just think SO does more good than harm to novices who want to become better at the trade, whereas ChatGPT is downright harmful for learning.


I think SO is still necessary for nuanced discussion. GPT-4 still lacks domain experience. It's just another tool in the toolbelt. I use it more than the other tools, but each tool is still invaluable.


Always been


I mean, sure. Sometimes I just paste a timestamp in and ask it to turn it into an epoch time for me. Now that's lazy.
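Which is, to be fair, a two-liner in Python (the timestamp here is made up):

    from datetime import datetime

    # ISO 8601 timestamp -> Unix epoch seconds
    ts = datetime.fromisoformat("2024-02-20T14:30:00+00:00")
    print(int(ts.timestamp()))  # 1708439400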


"How StackOverflow made me lazy"

"How Google made me lazy"

"How Internet made me lazy"

...

And so on


Maybe it's intentional, but you're actually making the opposite point of what you think, in my view?

All of the best programmers I know do not copy code, they try to understand first, and then apply what they learned. In this way, they use ChatGPT, SO, google, etc. the same amount for copy pasting code: Pretty much not at all.


Good engineers reflect and avoid blaming. Generalizations and tribal thinking are bad habits. Stackoverflow, the internet, and a keyboard are tools to be mastered. Good engineers prefer to think for themselves no matter the tool they are using.


> Generalizations and tribal thinking are bad habits.

If generalizations are bad, why do you immediately make one about tribal thinking (not to mention 'generalizations' itself)?

Tribal thinking isn't always bad; it's the same glue that holds families together.


But Stackoverflow and especially ChatGPT enable bad engineers to pose as good ones.


So the issue is more about the social appearance of a successful engineer, not about what good engineers are and how they behave.


The issue is that good engineers are harder to find because of bad engineers with better tools.

ChatGPT is like doping in sports.


Do you know how many errors are simply because of C&P from the internet?

I would estimate that for every person who really benefits from such instruments, there are at least 10 who simply C&P without really understanding anything.


Once I know something sufficiently well, I basically don't use stackoverflow or ask for help ever.


Yes, that is true for most engineers. Good engineers are constantly practicing their thinking skills no matter what, which can often cause them to outgrow their current tools and ways of reasoning.

The next step for many is to contribute and make the path easier for others. Enabling lazy people to outgrow themselves can push boundaries and drive progress

Circle of life I guess.


Some of the people in these comments never got their documentation in paper form and it really shows


"How Manuals made me lazy"




