
Mmmmhm, which means the humans now understand that they should be callous and cold. If they're not rubber-stamping rejections all the time, then the AI isn't doing anything useful by feeding them easy-to-reject applications.

The system will become evil even if it has humans in it, because they have been given no power to resist the incentives.



> humans now understand that they should be callous and cold

Were humans working on health insurance claims previously known for being warm and erring on the side of the patient?


> Were humans working on health insurance claims previously known for being warm and erring on the side of the patient?

I know that in the continuously audited FEP space, human claims processors were at 95%+ accuracy (vs audited correct results).

Often with sub-2-minute per-claim processing times.

The irony is that GP's system is exactly how you would want this deployed into production: fail safe, automate the happy path, human-in-the-loop (HITL) on everything else.

With the net result that those people can spend longer looking at the more difficult claims, for the same cost.
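Something like this toy sketch of that routing (Python; the threshold, field names, and scores are all made up, this isn't GP's actual system):

    from dataclasses import dataclass

    @dataclass
    class Claim:
        claim_id: str
        approve_score: float  # model's confidence that the claim should be paid

    APPROVE_THRESHOLD = 0.95  # assumed value; would be tuned against audited results

    def route(claim: Claim) -> str:
        """Automate only the happy path; never auto-deny."""
        if claim.approve_score >= APPROVE_THRESHOLD:
            return "auto_approve"   # easy approvals go straight through
        return "human_review"       # anything uncertain fails safe to a person

    for c in [Claim("a1", 0.99), Claim("a2", 0.40)]:
        print(c.claim_id, route(c))

The only automated outcome is the one that favors the patient; every denial still goes through a human.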


All you have to do is take an initial cost hit during a calibration phase where multiple support staff review each case: generate cohorts of, say, 3 reviewers, where 2 have the desired denial rate and 1 doesn't. Score each cohort by how much its members agree, rotate out who's in training over time, and you'll converge on the target denial rate.

There will always be people who "try to do their best" and actually read the case and decide accordingly. But you can drown them out with malleable people who come to understand that if they deny 100 cases today, they're getting a cash bonus for alignment (with the other guy mashing deny 100 times).
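To make that failure mode concrete, here's a toy simulation (Python; the denial rates and the unanimity-based agreement score are invented for illustration, they just show how agreement-only "calibration" flags the diligent reviewer as the outlier):

    import random

    random.seed(0)

    def reviewer(denial_rate):
        """Model a reviewer as a fixed probability of denying any given claim."""
        return lambda: random.random() < denial_rate

    def cohort_agreement(cohort, n_claims=1000):
        """Fraction of claims on which every reviewer in the cohort votes the same way."""
        unanimous = sum(1 for _ in range(n_claims) if len({r() for r in cohort}) == 1)
        return unanimous / n_claims

    aligned_cohort = [reviewer(0.6), reviewer(0.6), reviewer(0.6)]  # all at the "desired" rate
    mixed_cohort   = [reviewer(0.6), reviewer(0.6), reviewer(0.2)]  # one person actually reading cases

    print(cohort_agreement(aligned_cohort))  # higher agreement: looks "well calibrated"
    print(cohort_agreement(mixed_cohort))    # lower agreement: the diligent reviewer gets rotated out

Note that the metric never looks at whether the decisions were right, only at whether the reviewers agree with each other.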

Technology solves technological problems. It does not solve societal ones.


I am not disagreeing, and I am not arguing for AI.

I am just saying that the perverse incentives already exist and that in this case AI-assisted evaluation (which defers to a human when uncertain) is not going to make it any better, but it is not going to make it any worse.


Actually it may, even if only slightly. Because now, as the GP says, the humans know that the only cases they're going to get are the ones the AI suspects are not worthy, so they will look at them more skeptically.

I totally agree that the injustices at play here are long since baked in and this is not the harbinger of doom; medical billing already sucks immense amounts of ass and this isn't changing it much. But it is changing it, and worse, it's infusing the credibility of automation, even in a small way, into the system. "Our decisions are better because a computer made them" doesn't deal at all with the fact that we don't fully understand how these systems work or what their reasoning is for any particular claim.

Insofar as we must have profit-generating investment funds masquerading as healthcare providers, I don't think it's asking a ton that they be made to keep employing people to handle claims, and customer service for that matter. They're already some of the most profitable corporations on the planet; do costs really need cutting here?


>"Our decisions are better because a computer made them"

This is the root of the problem, and it is (relatively) easy to solve: make any decision taken by the computer directly attributable to the CEO. Let them have some skin in the game; that should be more than enough to align the risks and the rewards.


The bot should have let ~5% of auto-accepted claims through to the humans, and then tracked their decisions.
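A sketch of that shadow audit, keeping the ~5% figure from the comment (names and structure are made up):

    import random

    AUDIT_RATE = 0.05  # slice of auto-approvals silently routed to a human anyway
    audit_log = []

    def route_with_audit(claim_id, auto_decision):
        """Send a random sample of auto-approvals to human review as a check on the bot."""
        if auto_decision == "auto_approve" and random.random() < AUDIT_RATE:
            return "human_review"
        return auto_decision

    def record_audit(claim_id, machine_decision, human_decision):
        """Track where the humans disagree with the bot on audited claims."""
        audit_log.append({
            "claim": claim_id,
            "machine": machine_decision,
            "human": human_decision,
            "disagree": machine_decision != human_decision,
        })

Over time, the disagreement rate on that sample tells you whether the auto-approvals can still be trusted.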


Actually, the real issue for the humans was that it could mean a reduction in employment, which is why the union blocked deployment for a time, until a deal was brokered.

It helps, as you might suspect from the "union" comment, that it wasn't an American health insurance company.




