I think their point is that, in general, social-scale healthcare is an under-solved problem in practice, and LLMs have the potential to address a significant portion of these challenges by increasing access to treatment. The availability of these tools will inevitably lead to more reports like this one (from the report the article is based on):
> This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes. Based on the timeline of this case, it appears that the patient either consulted ChatGPT 3.5 or 4.0 when considering how he might remove chloride from this diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
However, I don't see how presenting a single negative instance of a vast social-scale issue, without at least MENTIONING that LLMs also have positive effects, amounts to much more than fear/emotion-mongering. It certainly doesn't seem like science to me. Unless these models are subtly leading otherwise healthy and well-adjusted users toward unhealthy behavior, I don't see how this interaction with artificial intelligence differs from the billions of confirmation-bias pitfalls that already occur daily via Google and natural stupidity. From the article:
> The case also raises broader concerns about the growing role of generative AI in personal health decisions. Chatbots like ChatGPT are trained to provide fluent, human-like responses. But they do not understand context, cannot assess user intent, and are not equipped to evaluate medical risk. In this case, the bot may have listed bromide as a chemical analogue to chloride without realizing that a user might interpret that information as a dietary recommendation.
It just seems like they've got an axe to grind and no technical understanding of the tool they're criticizing.
To be fair, I think there's much to study and discuss about the pernicious effects of LLMs on mental health. I just don't think this article frames those topics constructively.