I have some prior experience with mentoring beginners and juniors (I've done Google Summer of Code mentoring four times, participate in KDE's own mentor program, same at $dayjob, etc.). Of course not all students are created equal in the first place, so you have to trust my gut a bit in terms of compensating for that.
The velocity of task completion on tasks that are within reach of what he can figure out by "pair-programming" with the AI is very high. However, the failure modes are devastating - when he gets stuck, he gets stuck completely, with no idea what to do next. And ChatGPT can't assist with the overall development plan, or at least it's a lot harder to ask it about that. Some questions are difficult to ask without the hindsight afforded by experience.
With earlier students, pre-AI, the work got done more slowly but afforded many more little mental on-ramps for "what to do next", or at least ideas. Partly because the ability to read and browse code gets trained much more if you have to piece your solutions together from reference code and docs, vs. getting code handed to you by gen AI. If you can read/navigate a codebase more effectively, you are also more likely to be able to generate ideas about what to touch next and why. Partly also because your muscle for trying things out and experimenting gets trained more if that's your only choice.
In sum, as a mentor, when a student gets stuck I usually have more to work with in the dialog that follows. Ideas to interrogate, experiments to brainstorm, assumptions to challenge. With the ChatGPT-assisted student, almost nada - I've "caught" (this is of course perfectly fair under our agreement) him leaning on ChatGPT to even have the convo with me, handing my messages/questions to the model and coming back with what it generated, asking me whether ChatGPT got it right or not. I wind up being the second opinion that corrects/checks the AI, not the student, who is mentally fairly disengaged from the process by that point.
What I'm getting out of this experiment is an idea of what kind of guidance I will need to give future mentees on how to use the AI tools appropriately for their own development.
That’s really interesting. You seem to be in a good position to make some useful observations.
My last semester working with students was a year ago, and we were aware that they were going to ChatGPT for things, but not really sure how to deal with it. It seems obvious that in the future these tools will play a part, but of course those of us who learned without them aren’t in a particularly good position to teach how to use them or to structure things around them. It is a temporary problem but a pretty big one, IMO.
I wonder if a school-sponsored GPT with monitoring from the teaching assistants could be part of the puzzle; it seems really neat: it sets expectations more realistically (some AI tools will be used whatever the policy is, so it may as well be ours), and it gives the teaching staff some insight into how the students are using it and what they are struggling with. Although it would have to be a pretty state-of-the-art model; you'd want the students to prefer it to their own. Also, setting expectations correctly (it isn't authoritative, it is on you to double-check it) is awkward for a school-provided tool.
Anyway, hopefully there are more folks out there like you, actively experimenting with this stuff.
Seems obvious to me that using generative AI to learn coding would be akin to going to the gym and using hydraulic machinery to lift the weights. You get it done, but get no benefits out of it.