I always assumed LLMs would result in more bloated content (stupid interns) but I think you’re right that it’ll lead to more efficient prioritization (hooray interns)
The problem is that there’s always a communication cost to outsourcing something. And you can’t hand a whole project to an LLM the way you can to a human intern; you’re just outsourcing one micro-task after another. To extend the human analogy, it’s more like you’re standing over their shoulder and telling them which function to write, one after another. And they’re SUPER fast with small, pure functions, but they get confused by anything bigger.
Is that a faster way to program?
Maybe? If you can fluidly decompose things into small functions in your head and the problem can be solved that way?
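To make “small, pure functions” concrete, here’s a hypothetical example (not from the thread) of the kind of self-contained, no-context-needed task that autocomplete tools tend to nail instantly:

```python
import re

# Hypothetical illustration: a small, pure function with a one-line spec.
# Decomposing a project into pieces like this is the workflow described above.
def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

print(slugify("Hello, World!"))  # → hello-world
```

No shared state, no architecture questions, nothing to clarify — which is exactly why an LLM gets it right and exactly why it’s only a sliver of the job.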
I don’t know, though. I find myself using ChatGPT for bigger meta-questions much more often than I use Copilot’s autocomplete.
I can’t remember the exact phrase, but McLuhan talks about “literate man” (or something like that) in a way that makes it sound like we have modern brain software: not better or worse, merely adapted for a certain environment.
I like the idea that we’ve lost touch with certain faculties, and they’re still there, waiting to be rediscovered, if only we can cut through all the software we’ve installed for writing book reports.
Interestingly, I also have synesthesia, like the subject of the parent article did, although likely not as pronounced. Not much is known about how it works, I think; I’ve tried to learn more about it over the years and always run into pseudoscience pretty quickly.