Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills" for developers' brains will be glad about the choice they made.
AI is only getting better at consuming energy and wasting the time of the people who communicate with this T9. That said, if talented engineers keep using it, it might eventually produce more accurate replies as a result.
To answer your question: no matter how much I personally degrade or improve, I will not be able to produce anything even remotely comparable to the negative impact AI is bringing to humanity these days.
There’s nothing incongruous about that pairing (though I also think you’re not being entirely fair in describing what your parent comment said). Atom bombs also fit: they are basically useless, and yet so powerful that they can destroy humanity.
With LLMs, the destruction is less immediate and overt, but chatbots cause demonstrable harm to people and can be manipulated to warp our sense of reality.
> Let's ask your friendly local Ukrainian refugee about that.
You understand “basically useless” does not mean “entirely useless”, right? That’s why the word “basically” is there.
I know Ukrainian people. I know Ukrainian people who are in cities under attack right now. They are friendly, and all of them would understand my point.
> So the only permissible technologies are those suitable for use by children and the mentally disturbed. I see.
That is a bad-faith argument. HN guidelines ask you not to do that, and to steelman instead. It is obvious that is not what I said; “permissible” isn’t part of the argument at all. And if you think one needs to be “mentally disturbed” to be affected, you are high on arrogance and low on empathy and information. There are numerous stories of sane people being affected.
Wait'll you hear about Dungeons & Dragons! As if backwards masking in rock and roll music weren't enough.
You're right, I don't have much empathy for bullshit pop-psych as an instrument of motivated reasoning. If ChatGPT can convince you to kill yourself, you weren't mentally healthy to begin with, and something else would have eventually had the same effect on you. Either that, or you were an unsupervised child, victimized not by a chatbot but by your parents. A tragedy either way, but good faith requires us to place the blame where it's actually due.
> Wait'll you hear about Dungeons & Dragons! As if backwards masking in rock and roll music weren't enough.
I'll ask you again not to engage in bad faith.
> If ChatGPT can convince you to kill yourself, you weren't mentally healthy to begin with, and something else would have eventually had the same effect on you.
> Research has shown that suicidal thinking is often short-lived. Those who attempted suicide from the Golden Gate Bridge and were stopped in the act by another person did not go on to die by suicide through other means. There are also many examples showing that restricting the means of suicide is associated with an overall reduction in suicides.
So now we've moved on to the topic of nets on bridges. Okey-dokey, then.
You started by comparing ChatGPT to thermonuclear weapons, implying that it's a useless thing yet also an existential threat to humanity. State your position and desired outcome. You're all over the place here.
That's a dishonest framing of their argument. There's nothing logically inconsistent in believing both that wide adoption of AI tools causes developers' skills to atrophy and that the tools fail to deliver on the hype/promises.
You're inserting "destroy humanity" when OP is suggesting the problem is offloading all thinking to an unreliable tool (I don't entirely agree with their position but it's defensible and not as you stated).
There's no point arguing with someone who's not only wrong, but who doesn't care if they're wrong. ("I will not be able to produce anything even remotely comparable to the negative impact AI is bringing to humanity these days.")
There are basically no conditions under which one party can or will find legitimate common ground with the other. Sucks, but that's HN nowadays.
There is common ground, as per my initial message. A single AI company spends billions of dollars yearly marketing its software to make it work. I develop open-source software on a bootstrapped basis.
My inputs are water, nutrition, a bit of electricity, and beliefs; the output is a fairly complex logical system, i.e. software. AI's inputs are billions of dollars, the daily screen time of hundreds of thousands of people, and gigawatts of electricity, and it still produces very questionable results.
To answer your question in other words: if the same resources were spent on human intelligence, the results in one year might be far more impressive. However, given the resources already sunk into these AI technologies, humanity is unlikely to get the chance to buy its way out of this new 'dependency'.
> To answer your question in other words: if the same resources were spent on human intelligence
If AI tools don't amplify your own intelligence, that's not their fault.
If the advances turn out to be illusory, on the other hand, they'll be unwound soon enough. We generally don't stick with expensive technology that doesn't work. At the same time, fortunately, we also don't generally wait for your approval before trying new things.
In that year, AI will get better. Will you?