> > My gut makes me worry for them, but I wonder where the truth really lies.
> Why worry?
Because, when he tells people that "it's all a bubble," he might be partially correct, but he is still creating a confirmation-bubble following. He is building a denialist community, whereas his followers might be better served by learning how to use the tools.
I am not sure about any of this.
Why worry? Because if he is wrong, then there is a chance that we will be killing the animals in our zoos to feed the people. This is something that really happened during the last Great Depression.
I worry about the plight of my fellow man as it affects me.
> Because, when he tells people that "it's all a bubble," he might be partially correct, but he is still creating a confirmation-bubble following. He is building a denialist community, whereas his followers might be better served by learning how to use the tools.
I still don't see what the big deal is. If LLMs are (or become) all they're cracked up to be, it shouldn't matter whether someone "learns to use the tools" today, tomorrow, or five years from now. In fact, they should become much easier to use as they become more intelligent; you shouldn't need all these fancy prompting strategies anymore.
(Reminds me of search engines. People who really knew how to search for things honed that skill over a period of time, only for those skills to become irrelevant now that search engines are much smarter.)
I guess my point is: why must the technology insist upon itself? Evangelizing for people to use it when they don't want to just sounds cultish. If it's useful, people will eventually use it, like any other new technology. If someone doesn't find it useful yet, maybe they just don't work in a field that AI is good at yet.
I agree with what you’ve said. If you want to see what is cultish, look at the subreddit the post is talking about. It’s denialism and not healthy behavior.
> Why worry?
Also, I'm pretty sure I have seen a similar comment before.