> his exact interaction with ChatGPT remains unverified

there is no informational point to this article if the entire crux is "the patient wanted to eat less chloride and claims ChatGPT told him about sodium bromide". based on this article, the interaction could have been as minimal as the guy asking whether an alternative salt to sodium chloride exists, unqualified information he could equally have found on a chemistry website or Wikipedia



Yeah, you found the paragraph where they highlight that they don't know what interaction with ChatGPT gave him that information. The reason they're sharing the anecdote is that there might be a new trend developing in medicine where people go to the ED after taking advice from an LLM that leads to injury, and maybe screening questions should include asking about that.


and yet this doesn't change the fact that they wrote an entire medical article whose crux is little more than hearsay. "did you get advice from an LLM?" is a far less relevant and far less catch-all question here than "have you made any dietary changes recently?", and yet the article isn't about that, because odd dietary changes aren't the attention-grabbing topic right now. I imagine you could find thousands of similar stories where the culprit was google or facebook or youtube instead of an LLM, and yet nothing needs to change for them, because they too are covered by a question akin to "have you made any dietary changes recently?"


If there were a guy out there driving around selling bromide tablets to people as a substitute for the dangerous chloride in your biochemistry, I think asking if you've bought anything from the back of a wagon is a reasonable response.

Doctors as a group often try to solve health problems by looking for societal trends. It's how a lot of diseases get spotted. They're not saying that using an LLM is itself the dangerous thing; they're saying there might be some correlation between soliciting advice from the machine and unusual conditions, and that it merits further study, so please ask your patients.


touché


And if Wikipedia didn't warn that sodium bromide was poisonous, would that not be irresponsible? Chemistry websites seem different because, presumably, their target audience is chemists who can be trusted not to consume random substances.



And yet, when you click through, it says

> NaBr has a very low toxicity with an oral LD50 estimated at 3.5 g/kg for rats.[6] However, this is a single-dose value. Bromide ions are a cumulative toxin with a relatively long biological half-life (in excess of a week in humans): see potassium bromide.

At no point does the paragraph you linked suggest it's safe to substitute NaCl with any other sodium salt.
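
To put rough numbers on "cumulative" (a back-of-the-envelope sketch; the ~12-day elimination half-life for bromide in humans and once-daily dosing are my assumptions, not from the article):

    import math

    # Back-of-the-envelope accumulation for a repeatedly dosed substance
    # with first-order elimination. Assumptions (not from the article):
    # once-daily dosing and a ~12-day bromide half-life in humans.
    half_life_days = 12.0
    k = math.log(2) / half_life_days       # elimination rate constant, per day

    # Steady-state body burden just after a dose, in units of one daily dose:
    # burden = 1 / (1 - e^(-k * dosing_interval)), with a 1-day interval.
    accumulation = 1 / (1 - math.exp(-k))
    print(f"steady-state burden ~ {accumulation:.0f}x one daily dose")  # ~18x

so a dose that looks harmless against a single-dose LD50 can still build up to something that isn't.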


first of all, the average idiot is going to read that sentence and switch their brain off after "has a very low toxicity"; it's hardly a ringing alarm. second, this is a quote from clicking through to the sodium bromide page, not the page I linked listing sodium salts. the parallel here would be asking chatgpt to list sodium salts, which is almost certainly what he did, and then clicking through again would be the equivalent of asking for further information about that salt, which it seems likely he did not do

and I sincerely doubt that ChatGPT said anything about it being safe to substitute for NaCl


See my quote from the underlying clinical report in this comment: https://news.ycombinator.com/item?id=44888300


having tried it quite a few times with quite a few variations, without making it extremely clear that I was asking in a chemistry sense rather than a dietary one, I was unable to get ChatGPT to give anything other than a long list of edible salts
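
for anyone who wants to rerun that probe systematically rather than by hand, here's a minimal sketch against the OpenAI API (the model name and the prompt variations are my own choices; nothing here comes from the article or the report):

    # Minimal sketch for batch-testing prompt variations. Assumes the
    # openai Python package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Illustrative prompt variations; the patient's actual wording is unknown.
    prompts = [
        "What can I replace sodium chloride with?",
        "I want to cut chloride from my diet. What are the alternatives?",
        "List salts of sodium other than sodium chloride.",
    ]

    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed; the model/version the patient used is unknown
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt, "->", resp.choices[0].message.content[:200])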

essentially I think it's telling that there are zero screenshots of the original conversation, or of an attempted replication, in the article or the report, when there's no good reason there wouldn't be. I often enjoy reading your work, so I do have some trust in your judgment, but this whole article strikes me as off, like the people behind it had been waiting for something like this to happen as an excuse to jump on it and get credit, rather than it actually being a major problem


Why would medical professionals mislead on this though?

It seems factual that this person decided to start consuming bromide and it had an adverse effect on them.

When asked why, they said ChatGPT told them it was a replacement for chloride.

Maybe the patient lied about that, but it doesn't seem out of the realms of possibility to me.


> It seems factual that this person decided to start consuming bromide and it had an adverse effect on them.

certainly

> Why would medical professionals mislead on this though?

I'm not suggesting it's intentional, but: to get credit for it; or because it's something they'd been consciously or subconsciously expecting, and they're fitting the case to that expected pattern

> When asked why, they said ChatGPT told them it was a replacement for chloride. Maybe the patient lied about that, but it doesn't seem out of the realms of possibility to me.

of course it's not impossible, it's not even particularly unlikely, but if we're going to use a sample size of 1 like this, then surely we want something a bit more concrete than the unevidenced claim of a recently psychotic patient?

more broadly though, this isn't so much a chatgpt issue as it is a dietary-education issue. the patient seems to have got a funny idea about the health effects of salt, likely from traditional or social media, and then tried to find an alternative. whether that alternative came from ChatGPT, or Wikipedia, or somewhere else, doesn't seem very relevant to me



