Do we think less because we use C++ vs assembly? Less because we use assembly over punching cards? Less because we use computers over pen and paper? And so on. You can put a strong local coding model on your local hardware today and no investor will be involved (unless you mean investors in the company you work for, but the truth is, those were never in any way interested in how you build things, only that you do).
> Do we think less because we use C++ vs assembly? Less because we use assembly over punching cards? ...
Apologists love to make such analogies. "From 30,000 feet, doesn't the new thing kinda look like some old thing? Then they're the same and you should accept the new thing!" But the analogies are never apt, and the "argument" really just glosses over the differences.
The whole point of AI is for people to think less. It's basically the goddamned name. If people aren't thinking less, AI isn't doing its job. All of those things you listed are instances of mechanical translation, not of thinking.
> You can put a strong local coding model on your local hardware today and no investor will be involved (unless you mean investors in the company you work for, but the truth is, those were never in any way interested in how you build things, only that you do).
Don't pretend you can cosplay a capitalist with AI. You need money, and if you can build something with a local model, the people with money can do it too, so they don't have to pay you. We work for a living.
Also, it's a fantasy that your local model will be anything but a dim candle next to the ones the rich have. Real life is not a sci-fi novel.
Your employers are hoping to use you up making this current, imperfect iteration of the technology work, because the hope is that the next version won't need you. Don't be cheerful about it. It's a bad end for you.
You say with such conviction that "the whole point is to think less". Why do you think that? I think no less now that I use AI agents all day long; I just think about different things. I don't think about where I place certain bits of code or what certain structures look like. Instead I think about data models, systems, what the ideal deliverable looks like, how we can plan its implementation, and how to let a willing agent execute it. I think about how best to automate flows so that I can parallelize work, within a harness that reduces the possibilities for mistakes. I think a whole lot more about different technologies and frameworks, since the cost of exploring and experimenting with them has come down tremendously.
Will what I do now be automated eventually, or even before long? Probably; we keep automating things, so one has to swim up the abstraction layers. That doesn't mean one has to think less.
> To use them well you still need to know everything - whenever you prompt lazily you're opening yourself up to a fuckton of technical debt.
> That might be acceptable to some, but is generally a bad idea
That's my point: to use them effectively you need to know everything, but using them heavily puts you in a situation where that knowledge atrophies (e.g. the OP's statement that "every time I use AI, I feel like I'm getting a little dumber"). The bosses want the results now, and don't mind if in a few years you're much less capable (and maybe no longer capable of getting effective results at all).
If AGI comes soon enough, their bet will have paid off.