Practice Fusion to pay $145M for taking kickbacks aimed at increasing opioid use (fiercehealthcare.com)
71 points by JshWright on Jan 28, 2020 | 38 comments


> In separate civil settlements, Practice Fusion has agreed to pay approximately $118.6 million to the federal government and states to resolve allegations that it accepted kickbacks from the opioid company and other pharmaceutical companies and also caused its users to submit false claims for federal incentive payments by misrepresenting the capabilities of its EHR software.

> Of that $118.6 million, $113.4 million will be paid to the federal government and up to $5.2 million to states that opt to participate in separate state agreements.

So, nothing was taken from the personal bank accounts of the people who made these decisions, and nobody went to jail. That is to say, nothing was done about this.

The takeaway here is that you can murder people for money, and as long as you do it behind the paperwork of a large corporation, you can get away with it.

The only people who pay the consequences are shareholders. Sure, this incentivizes shareholders to try not to invest in sociopathic execs, but I'm not sure how possible that even is.


It's hard to put into words how angry I am about this. These sorts of things don't happen by accident. I don't understand how someone gets asked to build this "feature" and they don't just quit on the spot...


Because when an engineer gets a big list of the alerts he or she must implement, none of them is labeled "illegal kickback scheme to get doctors to prescribe medically unnecessary medication for profit"; each looks like any other requirement. Making things look legit to people outside the scheme is important to pulling it off successfully.

Even if an individual engineer suspected the opioid recommendation alerts were medically unnecessary, medically unnecessary tests and treatments are very common [1], so it wouldn't exactly stand out as "probably an illegal kickback scheme."

[1] https://www.propublica.org/article/unnecessary-medical-care-...


As a software engineer: a lot of what I do is understanding the context of what I'm doing. When I'm writing code, I'm trying to grok how the business users think about the business objects and business logic.

I get a story that says, "As a user, when I press the H key I want a horn to play." And I go into the planning meeting and ask, "Is this like a bike horn?" And they say yeah! And I ask, okay, so when I press the key, do you want the horn to sound on key down, or key up? And they say, "How about it starts playing on key down, and stops playing on key up?" And then I say, "do you want different volumes, for let's say when they are on their bike and just want to politely let someone know they are there, versus when they're about to get hit by a truck?" And they say, "yes, eventually, but being able to play the horn at different notes is a higher priority." And then I ask, "wait, why is that useful for a bike horn?" And then they explain that they're going to be playing the horn as part of an orchestra. And then by the end of the conversation I realize the horn I'm implementing is a French horn midi keyboard, not a bicycle horn traffic signal, and the requirements are completely different.

I don't really buy this idea that people are just writing single, isolated requirements out of context.


Well, first off, it's not a "single, isolated requirement out of context."

From TFA:

>in exchange for implementing clinical decision support (CDS) alerts in its EHR software designed to increase prescriptions for their drug products.

When and how the software would display the CDS alerts would be discussed in a planning meeting; the logic of each individual CDS alert would not, as those would be handed down from somewhere else. Software engineers are in no position to be suggesting or discussing individual clinical decisions. Software engineers aren't going into a planning meeting saying "hey, what about suggesting scheduling a pap test if the patient is a female between 21 and 60 and there's no record of a recent test?" That's not in the software engineer's purview.

So there's very little doubt in my mind that the "suggest opioids if XYZ" CDS alert came down in a list of other innocent CDS alerts like "suggest pap test if XYZ" or "suggest mammogram if XYZ" or "possible drug interaction detected" or "tetanus vaccine due."
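To make that concrete, here is a purely hypothetical sketch of what such a rule list might look like to the engineer implementing it. None of the field names, conditions, or alert texts below come from Practice Fusion's actual system; they are invented for illustration.

    # Hypothetical CDS rule definitions: a condition plus the alert text to
    # show when it matches. In a list like this, the kickback-driven rule is
    # indistinguishable in shape from the legitimate preventive-care rules.
    CDS_RULES = [
        {
            "id": "pap-screening",
            "condition": lambda p: (p["sex"] == "F" and 21 <= p["age"] <= 65
                                    and p["years_since_pap"] > 3),
            "alert": "Pap test may be due.",
        },
        {
            "id": "tdap-booster",
            "condition": lambda p: p["years_since_tdap"] > 10,
            "alert": "Tetanus booster due.",
        },
        {
            # The rule that was actually a paid placement looks like the rest.
            "id": "pain-management",
            "condition": lambda p: p["age"] >= 18 and p["reports_pain"],
            "alert": "Consider extended-release pain management options.",
        },
    ]

    def alerts_for(patient):
        """Return the alert text of every rule whose condition matches."""
        return [r["alert"] for r in CDS_RULES if r["condition"](patient)]

Nothing about the third entry's shape marks it as different; only the business arrangement behind it does, and that never reaches the engineer.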

Of course there's also the possibility that the system was engineered so that the person configuring the software builds all the CDS alerts themselves with a drag-and-drop interface.

But anyway, sometimes you, as an engineer implementing the system, don't get to personally participate in requirements gathering, planning meetings, or interfacing with the end users. Depending on a lot of factors, sometimes that's done by pointy-haired bosses at a much higher level, especially if you're just a contractor or a subcontractor. I once worked on a project as a short-term subcontractor (around six months) where I was literally handed detailed requirements, including mockups of all the screens, and told "implement this." Of course, everyone had their own opinion on how it should work, but it was not open for discussion.

So not only do I have absolutely no reason to believe that the programmers of this software knew about this nefarious kickback scheme, I have reason to believe that they probably didn't.


> Software engineers are in no position to be suggesting or discussing individual clinical decisions.

Boy am I glad I work for an EHR vendor that doesn't agree with you... If I was asked to implement a CDS rule that wasn't based on some reasonable clinical guideline (CMS quality measures, etc.), you'd better believe I'd push back on it and want to understand where it was coming from.

Healthcare isn't some dark art that engineers couldn't possibly understand. These rules should be based on publicly available and widely agreed upon standards of care.


The Practice Fusion ones almost certainly would fit your criteria, i.e., “after [x] type of surgery, fentanyl and OxyContin are reasonable painkillers.” (Both entirely legal and widely prescribed today.)


No one should be getting extended-release opioids after a surgery...


> Because when an engineer gets a big list

Let’s not be naive. The list is big, but some elements are incriminating. If you implement them, you are morally, and eventually legally, responsible.


Working in the Bay Area, I'm sure I've had more colleagues who would implement this than would not. That's even though I've specifically chosen to work at companies that emphasize the importance of social good in their business.

Most people come to work for a paycheck, and the perceived cost to them of shaking up that paycheck (even just to get a new job) is very high.


How do you lower the cost to them of making the right decision? Or raise the pain of making the immoral one?


How about A/B testing for increases in usage, ad clicks, subscription signups, and lack of subscription cancellations? How about PR that consists mostly of finding favorable journalists who will provide positive, pre-vetted coverage of feature launches? How about VC-fueled price dumping to squash competition and build a monopoly? How about paying phone manufacturers, browsers, etc. to become preloaded apps or default search engines?

The impacts aren't nearly as harmful in tech as in pharmaceuticals, but there are a lot of practices in our industry which we all accept as normal procedure but would seem very ethically questionable to an outsider.


These alerts were probably configured on some admin panel that was already built and didn’t require an engineer to be involved.


That's certainly possible, but it's not likely (in my experience working with CDS features). These sorts of alerts are pulling data from all sorts of different places, and there's generally at least some engineering effort involved in setting up new "categories" of alerts like this.


it doesn’t sound like it was a new category...just some new piece of logic that would trigger “prescribe opioids!” when the doctor typed something in

why would you think these sham alerts rely on actual data that was “new”? these were probably as simple as some if statement that said “if patient over X age and in pain and lives in Y state then show alert”
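For what it's worth, the if statement being described could be as trivial as the following sketch. The field names, age cutoff, and states are invented for illustration, not taken from any real Practice Fusion rule.

    # Minimal sketch of the kind of trigger described above; every field
    # name and threshold here is hypothetical.
    def should_show_pain_alert(patient, min_age=18, target_states=("VT", "NH")):
        return (patient["age"] >= min_age
                and patient["reports_pain"]
                and patient["state"] in target_states)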


The sad thing about this whole opioid epidemic is that people such as myself, who actually suffer from chronic pain, can't get relief. Not that opioids would help me, wrong kind of pain. I suffer from nerve pain, which requires a whole other class of drugs, but the docs don't want to prescribe anything anymore, even non-opioids. It fucking sucks, degrades my quality of life and my ability to carry out day-to-day tasks.


My guess is it's the "in pain" part that would require some engineering effort to expose as an option inside the CDS dashboard (assuming such a dashboard existed).

Obviously none of us know how this was implemented, and it's possible PF had a slick CDS design tool that was super powerful and made these sorts of things easy. That's never been my experience though...
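To illustrate why, a hypothetical sketch: a rule-builder UI can only reference fields the engine already knows how to extract from the chart, so exposing a new condition like "patient reports pain" means an engineer first has to plumb it through. All names below are invented.

    # Hypothetical field registry backing a CDS rule-builder. An admin can
    # only build rules over fields listed here; adding "reports_pain" means
    # an engineer must first map it from wherever it lives in the chart
    # (e.g., free-text chief complaints), which is real engineering work.
    AVAILABLE_RULE_FIELDS = {
        "age": lambda chart: chart["demographics"]["age"],
        "sex": lambda chart: chart["demographics"]["sex"],
        "state": lambda chart: chart["demographics"]["state"],
        # New plumbing an engineer would have to add before an admin could
        # ever select "in pain" as a rule condition:
        "reports_pain": lambda chart: any(
            "pain" in c.lower() for c in chart["chief_complaints"]),
    }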


Money. Most of the shady practices going on in the world, today and a thousand years ago, end up being about money. Even the software engineers who implemented this ghastly thing needed the job and the money it pays, so "OK, yeah, I'll do it".


>>I don't understand how someone gets asked to build this "feature" and they don't just quit on the spot...

Shall we go down the list of horrible and unethical things engineers have built over millennia, so we can settle this once and for all? If you quit, someone else will do it.


We don't know how many quit to get to the worker that implemented this. The thing is, it doesn't stop with the first person that quits; it goes forward with the first person that says "yes".


Didn’t they get bought for $100M just a year ago? I wonder what kind of holdback they had? This is a big payment unless Allscripts knew it was coming.


I was employed there during the acquisition. It was certainly expected and priced in. Management managed to unload a hot potato while getting a nice carve-out for themselves.



When is someone going to jail over this kind of thing?


Since the US invasion of Afghanistan, opium production there has grown several times over. Of course, there will never be an investigation or any evidence dug up (just as there are no investigations of US war crimes), but I sense that these two events are tightly related and that the root of all this is very, very deep. So deep that no one will ever dig up the truth.


We invaded Afghanistan only a few months after the Taliban made opium production illegal (for religious reasons)... very suspicious when you look at our history of peddling opioids as a nation.


One of the research projects I am working on is an open-source medical records system for patient-owned medical records. I hadn’t considered deeply how the design of any EHR system will always include biases of some sort that influence the behavior of patients and providers.

This is a sad event, but it will definitely be a good learning experience for those in the EHR design world.


Stuff gets deep quickly. Like movies like “The Hurt Locker”? Well, say hello to Sackler family opioid money. It’s interesting to see how deep into pop culture opioid money made it.


Are they going after the opioid companies, too?



>in exchange for implementing clinical decision support (CDS) alerts in its EHR software designed to increase prescriptions for their drug products.

In the near future they will start to use AI, and nobody will be able to tell where the specific opioid bias crept in from; even if the lineage of the biased decisions is successfully established, it will trace back to some innocent-looking facts/rules/data.


Are they going after the companies that were paying the kickbacks? They’re just as culpable, if not more so.


There was a great series on the opioid epidemic on the Freakonomics podcast recently, if you are interested.


I wonder, and I am not trying to be a smart-ass here, if this is the first case of a growth hack (some sort of push notification/alert) that actually led to someone's death.


>>"Practice Fusion’s conduct is abhorrent. During the height of the opioid crisis, the company took a million-dollar kickback to allow an opioid company to inject itself in the sacred doctor-patient relationship so that it could peddle even more of its highly addictive and dangerous opioids,” Christina Nolan, U.S. Attorney for the District of Vermont, said in a statement.

So where is jail time for the entire decision-making team?


I can’t help but laugh at the cofounder’s LinkedIn page: https://www.linkedin.com/in/matthewcdouglass

Very first line: I dig: socially responsible company-building


WOW. I almost sent him a DM... it won't accomplish anything.


Likely nothing will happen to them. It's a combination of several factors:

Courts removed the legal tools DOJ used to go after individuals (Skilling).

DOJ started allowing companies to hire law firms to investigate themselves for wrongdoing because it was easier than doing an investigation itself. It also created a revolving door between DOJ and the white-collar legal community.

A culture change at DOJ after pushback from said white-collar legal community over the flattening of Arthur Andersen, hilariously punctuated by the hiring, in Obama's first term, of Denis McInerney as chief of the DOJ's Fraud Section after he had represented Arthur Andersen.

Several high-profile losses: Skilling (again), the Bear Stearns execs, the AIG execs.

Everyone should just read The Chickenshit Club.



