
Not attributed to. The FDA wording says "associated with", which is causally much weaker.

I can guarantee you, from my personal experience of 30 years as a diabetic, that every day, and in the most incredible ways, I have managed to "almost kill myself": whether with finger-prick testing, sensors, injecting insulin with pens, or managing insulin with a pump. Our life is always a delicate balancing act between too little, too much, and way too much, the kind where this time you really kick the bucket.

By personal choice I use a commercial CGM (if I could "touch" it myself, I'm certain I'd end up killing myself through sheer stupidity), but reading something like "associated with" really makes me angry. Before making such subtle insinuations about the open-source world (the source of the revolution in this field over the last 10 years), regulatory bodies should open their eyes to what is actually happening with the quality of current sensors and the real problems they are causing.


Thank you.

And strength to you. I had a business partner for some time who was much like you, and every time he'd be 10 minutes late for an appointment I'd get nervous; if it was more than an hour, I'd be on the phone to his family to check up on him.


I've also found pre-commit useful for preventing CI failures. If one runs into edge cases, one can switch it off.
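
For example, a minimal .pre-commit-config.yaml (the hook selection here is just an illustrative pick from the standard pre-commit-hooks repo):

    repos:
      - repo: https://github.com/pre-commit/pre-commit-hooks
        rev: v4.6.0
        hooks:
          - id: trailing-whitespace
          - id: end-of-file-fixer
          - id: check-yaml

For the edge cases: "git commit --no-verify" skips all hooks, and "SKIP=check-yaml git commit" skips just the named one.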

You mean the 5 minutes is insane, right?

I spotted at least 3 typos in the first minute. Typos are easily detected and fixed with LLMs (one genuinely good use of them).

But it's nice to have non-LLM-written text. Still, the many typos are annoying and distracting.
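
A minimal sketch of using an LLM to fix typos, assuming the openai Python client (the model name, prompt, and sample text are all illustrative, and any chat-style LLM API works the same way):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    draft = "This sentense has a cuple of obvious typos."  # deliberate typos

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Fix spelling and typos only; change nothing else."},
            {"role": "user", "content": draft},
        ],
    )
    print(resp.choices[0].message.content)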


> As of November 14, 2025, Abbott has reported 736 serious injuries, and seven deaths associated with this issue.

It's a stretch to go from "associated with 7 deaths" to "killed 7 people". These devices are worn by millions, so coincidental deaths will happen irrespective of causality.

It would be good to have more details on the cases. It's kind of hard to see how low readings would cause deaths: you eat, then notice things don't go up, then do a finger-stick test and see that the sensor is off.

To die, you'd have to end up in ketoacidosis, and there are ways to notice that. Sure, it's bad to have falsely low values, but they're very unlikely to kill.


What are Dasharo and NovaCustom?

NovaCustom is the EU version of a https://puri.sm laptop. It's got an open bootloader and a TPM for supporting OSes that require it, like Windows 11.

Dasharo is pre-installed coreboot.

LLM slop. At least one clear error (hallucination): "’Twas the night before Christmas, and I was doing the least festive kind of work: staring at serialization"

Per the disclosure timeline, the report was made on December 4, so it was definitely not the night before Christmas when you were doing the work.


Security research often looks dramatic from the outside. In reality, it is usually the mundane work of asking AI to make up dramatic stories.

If you had all the token probabilities, it would be bijective. There was a post about this here some time back.

Kind of, but LLMs still use randomness when selecting tokens, so the same input can lead to multiple different outputs.
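
A toy sketch of the difference (made-up logits, not a real model): greedy decoding is deterministic given the distribution, while sampling is not:

    import math, random

    # Toy next-token logits over a tiny vocabulary (made up for illustration).
    logits = {"cat": 2.0, "dog": 1.5, "car": 0.1}

    def softmax(scores):
        m = max(scores.values())
        exps = {t: math.exp(s - m) for t, s in scores.items()}
        z = sum(exps.values())
        return {t: e / z for t, e in exps.items()}

    probs = softmax(logits)

    # Greedy decoding: the same input always yields the same token.
    print(max(probs, key=probs.get))

    # Sampling: the same distribution can yield a different token each run.
    print(random.choices(list(probs), weights=list(probs.values()))[0])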

It's definitely LLM-generated. I came here to post that, then saw you had already pointed it out. Giveaway for me: 'The most common real-world path here is not “attacker sends you a serialized blob and you call load().” It’s subtler:'

“It’s”, not "it's": the curly apostrophe instead of the straight one; and the bolded items in lists.

Also, no programmer would use this apostrophe instead of a straight single quote.


> Also, no programmer would use this apostrophe instead of a straight single quote.

I’m a programmer who likes punctuation, and all of my pointless internet comments are lovingly crafted with Option+]. It’s also the default for some word processors. Probably not wrong about the article, though.


2 typos in the first sentence. Is this on purpose, to make it obviously not AI-generated?

"apology peice" and "tail caling"


If you want to make your writing appear non-AI generated, the easiest way is to write it yourself. No typos necessary.

I’m sure that with enough cajoling you can make the LLM spit out a technical blog post that isn’t discernibly slop (wanton emoji usage, clichés, self-aggrandizement, relentlessly chipper tone, short “punchy” paragraphs, an absence of depth, “it’s not just X—it’s a completely new Y”), but it must be at least a little tricky, what with how often people don’t bother.

[ChatGPT, insert a complaint about how people need to ram LLMs into every discussion no matter how irrelevant here.]


> If you want to make your writing appear non-AI generated, the easiest way is to write it yourself. No typos necessary.

You can ask the AI to make typos for you.


Whoops, thanks for noticing, fixed!
