
> OTOH, the cost to this progress not having been earned is pretty high, too: He takes a lot of AI-generated boilerplate for granted now without understanding it or the concepts behind it, so when the AI gets it wrong or forgets it, he is unable to notice what's missing.

How materially different is this from "copy-pasted boilerplate from an example on a website that isn't fully understood"?

I've personally found one of the biggest advantages of learning a new stack with ChatGPT is being able to ask "hey, how do I modify this boilerplate for [specific piece of functionality]?" or "hey, I have this code and I'm getting this error, what should I try?" The alternatives are hunting for websites with slightly-different examples of the same boilerplate, or starting from square one, which often means dedicating days or weeks to less-immediately-relevant tutorial foundation projects.



> How materially different is this from "copy-pasted boilerplate from an example on a website that isn't fully understood"?

When finding and copying, you have to employ at least some degree of critical thinking. The right snippet is rarely the first search result, and it usually can't be used without some adaptation. Generated solutions usually require less adaptation.


> When finding and copying, you have to employ at least some degree of critical thinking.

You and I have worked with vastly different frontend "engineers" in that case, especially in web agency shops where "faster implemented === better" holds in most cases.


Haha. But taking the statement seriously: Do you think the engineers in question employ more of their critical thinking capacity when using code from a generator than when adapting existing snippets?



