> OpenAI's sales and marketing expenses increased to _$2 billion_ in the first half of 2025.
Looks like AI companies spend enough on marketing to create the illusion that AI makes development better.
Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills" for developers' brains will be glad about the choice they made.
Well. I was a sceptic for a long time, but a friend recently convinced me to try Claude Code and showed me around. There's an open-source project I revive regularly: I get back to it, code for a bit, have to wrestle with toil and dependency updates, and lose the joy before I really get a lot done, so I stop again.
With Claude, all it took to fix all of that drudgery was a single sentence. In the last two weeks, I implemented several big features, fixed long-standing issues, and did migrations to new major versions of library dependencies that I wouldn’t have tackled at all on my own (I do this for fun, after all, and updating Zod isn’t fun). Claude just does it for me, while I focus on high-level feature descriptions.
I’m still validating and tweaking my workflow, but if I can keep up that pace and transfer it to other projects, I just got several times more effective.
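For flavor, the kind of churn I mean with the Zod update looks roughly like this. This is a from-memory sketch, not my project's actual code, and the exact API names depend on which major versions you're moving between:

```typescript
import { z } from "zod";

// Old-style schema (roughly Zod 3): string format checks hang off z.string(),
// custom errors use `message`, and z.record() takes a single value schema.
const userOld = z.object({
  email: z.string().email({ message: "invalid email" }),
  labels: z.record(z.string()),
});

// New-style schema (roughly Zod 4): top-level format schemas like z.email(),
// a unified `error` parameter, and z.record() with an explicit key schema.
const userNew = z.object({
  email: z.email({ error: "invalid email" }),
  labels: z.record(z.string(), z.string()),
});

// The call sites barely change, which is exactly why this work is pure toil:
const result = userNew.safeParse({ email: "a@b.co", labels: { team: "core" } });
console.log(result.success); // true
```

Mechanical, low-risk, and spread across dozens of files, which is why I was happy to hand it off.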
This sounds to me like a lack of resource management: tasks that junior developers might perform don't match your skills, and are thus boring.
As the creator of an open-source platform myself, I find it hard to trust a semi-random word generator in front of users; it's too unreliable.
Moreover, I believe it creates bad habits. I've seen developers forget how to read documentation and trust AI instead, and the AI, of course, makes mistakes that are hard to debug or introduces security issues that are easy to overlook.
I know this sounds like a luddite talking, but I'm still not convinced that AI in its current state can be reliable in any way. However, because of engineers like you, AI is learning to make better choices, and that might change in the future.
> tasks that junior developers might perform don't match your skills, and are thus boring.
Yeah, this sounds interesting, and matches my experience a bit. I was trying out AI over Christmas because people I know are talking about it. I asked it to implement something (a refactoring for better performance) that I thought should be simple; it did, the result looked amazing, and all the tests passed too! When I looked into the implementation, the AI had gotten the shape right, but the internals were more complicated than needed and were wrong. Nonetheless, it got me started on fixing things, and the fix went quite quickly.
The model's performance in this case was not great; perhaps that's also because I'm new to this and don't know how to prompt it properly. But at least it is interesting.
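To make it concrete, here's a toy sketch of the pattern I mean (made-up code, not my actual project): the "optimized" version passes the existing tests while still being wrong on inputs the tests never cover.

```typescript
// Original: straightforward and correct.
function sumOfSquares(xs: number[]): number {
  return xs.reduce((acc, x) => acc + x * x, 0);
}

// The "optimized" rewrite: adds a cache keyed on array length, which is wrong,
// since two different arrays of the same length now share one cached result.
const cache = new Map<number, number>();
function sumOfSquaresFast(xs: number[]): number {
  const hit = cache.get(xs.length);
  if (hit !== undefined) return hit;
  const result = xs.reduce((acc, x) => acc + x * x, 0);
  cache.set(xs.length, result);
  return result;
}

// The existing tests happen to use arrays of distinct lengths, so they all pass:
console.assert(sumOfSquaresFast([1, 2]) === 5);
console.assert(sumOfSquaresFast([1, 2, 3]) === 14);
// ...but nothing covers this case, where the stale cache returns 5 instead of 25:
// sumOfSquaresFast([3, 4])
```

The shape is right, the tests are green, and the bug only shows up once you actually read the internals, which is what got me digging in.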
This sounds a lot like the classic "the way to get a good answer on the internet is to post a wrong answer first", but in reverse - the AI gives you a bad version which trolls you into digging in and giving the right answer :-)
I think AI coding should not be permitted in the first two years of CS training. Students should first learn the basics: reading quality documentation, writing quality code and documentation, understanding how the different pieces of software work together, and working with others.
LLMs are great for people who have some idea of what they're doing and need "someone else" to pair-program with. I agree it will cripple the architectural thinking of new learners if they never learn how to think about code on their own.
That’s a totally fair take IMHO, and I’m very much conflicted on several fronts here. For example, would I want my juniors to use an agent? No; probably not even the mid-levels. As you say, it’s easy to form bad habits, and you need a good intuition for architecture and complexity, otherwise you end up with broken, unmaintainable messes. But if you have that, it’s like magic.
Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills" for developers' brains will be glad about the choice they made.
AI is only getting better at consuming energy and at wasting the time of the people who communicate with this T9. However, if talented engineers continue to use it, it might eventually provide more accurate replies as a result.
To answer your question: no matter how much I personally degrade or improve, I will not be able to produce anything even remotely comparable to the negative impact that AI brings to humanity these days.
There’s nothing incongruous about that pairing (though I also think you’re not being entirely fair in describing what your parent comment said). Atom bombs also fit: they are basically useless, and they are so powerful that they can destroy humanity.
With LLMs, the destruction is less immediate and overt, but chatbots do provable harm to people, and can be manipulated to warp our sense of reality.
> Let's ask your friendly local Ukrainian refugee about that.
You understand “basically useless” does not mean “entirely useless”, right? That’s why the word “basically” is there.
I know Ukrainian people. I know Ukrainian people who are in attacked cities right now. They are friendly, and all of them would understand my point.
> So the only permissible technologies are those suitable for use by children and the mentally disturbed. I see.
That is a bad-faith argument. The HN guidelines ask you not to do that, and to steel-man instead. It is obvious that is not what I said; “permissible” isn’t part of the argument at all. And if you think one needs to be “mentally disturbed” to be affected, you are high on arrogance and low on empathy and information. There are numerous stories of sane people becoming affected.
Wait'll you hear about Dungeons & Dragons! As if backwards masking in rock and roll music weren't enough.
You're right, I don't have much empathy for bullshit pop-psych as an instrument of motivated reasoning. If ChatGPT can convince you to kill yourself, you weren't mentally healthy to begin with, and something else would have eventually had the same effect on you. Either that, or you were an unsupervised child, victimized not by a chatbot but by your parents. A tragedy either way, but good faith requires us to place the blame where it's actually due.
> Wait'll you hear about Dungeons & Dragons! As if backwards masking in rock and roll music weren't enough.
I'll ask you again not to engage in bad faith.
> If ChatGPT can convince you to kill yourself, you weren't mentally healthy to begin with, and something else would have eventually had the same effect on you.
> Research has shown suicidal thinking is often short-lived. Those who attempted suicide from the Golden Gate Bridge and were stopped in the process by a person did not go on to die by suicide by some other means. There are also a variety of examples that show restricting means of suicide have been associated with the overall reduction of it.
So now we've moved on to the topic of nets on bridges. Okey-dokey, then.
You started by comparing ChatGPT to thermonuclear weapons, implying that it's a useless thing yet also an existential threat to humanity. State your position and desired outcome. You're all over the place here.
That's a dishonest framing of their argument. There's nothing logically inconsistent in believing wide adoption of AI tools causes developers' skills to atrophy and that the tools also fail to deliver on the hype/promises.
You're inserting "destroy humanity" when OP is suggesting the problem is offloading all thinking to an unreliable tool (I don't entirely agree with their position but it's defensible and not as you stated).
There's no point arguing with someone who's not only wrong, but who doesn't care if they're wrong. ("I will not be able to produce anything even remotely comparable in terms of negative impact that AI brings to humanity these days.")
There are basically no conditions under which one party can or will reach a legitimate common ground with the other. Sucks, but that's HN nowadays.
There is common ground, as per my initial message. One AI company alone spends billions of dollars a year marketing its software to make it appear to work. I, meanwhile, work on open-source software development on a bootstrapped basis.
My inputs are water, nutrition, a bit of electricity, and beliefs, and the output is a fairly complex logical system, namely software. AI's inputs are billions of dollars, the daily screen time of hundreds of thousands of people, and gigawatts of electricity, and it still produces very questionable results.
To answer your question another way: if the same resources were spent on human intelligence, the results after a year might be far more impressive. However, given the resources already sunk into these AI technologies, humanity is unlikely to get the chance to buy its way out of this new 'dependency'.
> To answer your question another way: if the same resources were spent on human intelligence
If AI tools don't amplify and magnify your own intelligence, it's not their fault.
If the advances turn out to be illusory, on the other hand, they'll be unwound soon enough. We generally don't stick with expensive technology that doesn't work. At the same time, fortunately, we also don't generally wait for your approval before trying new things.