
> I think this misses the parent’s point.

Just because you disagree with me doesn’t mean that I misunderstood the viewpoint I’m responding to.

> The point is to not let the algorithm make decisions.

And my response is—that’s not enough. It sounds like the algorithm, because it is biased, has the effect of increasing the bias in the whole system. If your response is that humans should work harder to counteract biases in machine systems, well, I think that’s just a way to CYA and assign blame but not a way to solve the problem—humans will remain biased, and they will trust automated systems even when that trust is misplaced.
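
To put rough, invented numbers on that point (a back-of-envelope sketch, not anything from the article; the rates are made up):

    # Hypothetical, illustrative numbers: even if every human reviewer
    # correctly clears every innocent student, a skewed flag rate means
    # one group carries far more of the burden of defending itself.
    students_per_group = 1000
    false_flag_rate = {"light_skin": 0.02, "dark_skin": 0.10}  # assumed rates

    for group, rate in false_flag_rate.items():
        wrongly_flagged = int(students_per_group * rate)
        print(f"{group}: {wrongly_flagged} innocent students pulled into a review")

The human review can be flawless and the disparity in who has to sit through it is still baked in by the software.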

As an analogy, it’s like a driver in a partially autonomous car. As soon as the automation takes over, the driver stops paying attention to the road. We can make a big production of how it’s the driver’s fault and how the driver should pay attention, but we’ve placed them in a system that discourages them from paying attention, and the system is more dangerous as a consequence.

> Also guns don’t kill people, people do. Otherwise explain to me why it would be okay for certain institutions to be armed but not individuals. If guns are the problem, then no one should have them (including the military/police).

This is a false dilemma / false dichotomy. The argument assumes that EITHER access to guns is to blame OR people are to blame, but not both. There are obviously other ways to think about the problem.

Any rational analysis of a problem will consider multiple contributing factors.



I do think you missed the point.

Parent: The software itself shouldn't have any control over the student's grades. A person should have to review the flags and actually find some wrongdoing. Not just push 'yes' and walk away.

You: Let’s say you’re only using the software to flag suspicious behavior, and bringing in humans to make the final decision. What happens when (inevitably) the software disproportionately flags people with dark skin because it is not trained to recognize dark-skinned faces? Or when the software disproportionately flags poor people, or people with families?

Answer: A person should have to review the flags and actually find some wrongdoing.

You: It means that those groups of people will be targeted by the (human) bureaucracy and tasked with defending themselves, when they’ve done nothing wrong.

Me: The human bureaucracy is supposed to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible.

> And my response is—that’s not enough. It sounds like the algorithm, because it is biased, has the effect of increasing the bias in the whole system.

That’s why the humans should be held responsible for not addressing bias in their system, and why the actions of an algorithm should be the responsibility of its creators.

> If your response is that humans should work harder to counteract biases in machine systems, well, I think that’s just a way to CYA and assign blame but not a way to solve the problem—humans will remain biased, and they will trust automated systems even when that trust is misplaced.

So...? What’s your solution? All you’re saying is that humans will remain biased. Yeah, they will. That’s why we have laws that punish discrimination and bias. If your company creates products (algorithms) that discriminate, you should be held responsible. The human element is not there to “work harder” but to ensure that what you’re releasing works properly. If you don’t think increased accountability fixes the problem, please tell us what would be “enough”.

> This argument assumes that EITHER access to guns is to blame OR people are to blame, but not both

No assumption. If you think a cop can have a gun but a criminal can’t, then the gun isn’t the problem. If you believe cops can have guns but civilians can’t, then the main factor is the person with the gun, not the gun itself. This isn’t an argument against increased restrictions, and if you believe no one should have guns (including the government), I’m all for it. But if you believe some people have the right to have guns while others don’t, I’m hard-pressed to see any determining factor other than who has the gun.


> I do think you missed the point.

Please make an effort to engage with the comments I make, rather than making guesses about my mental state.

> Me: The human bureaucracy is supposed to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible.

The human bureaucracy doesn’t do that very well. It is deeply flawed and has limited skills. We can assign blame to the human bureaucracy for its failings all we want, but if we want to effect change, then it’s necessary to include a broader range of factors in our fault analysis.

In other words, “assigning blame” is a low-stakes political game, and “root-cause analysis” is what really matters.

This is like the 737 MAX failures. You can say that it’s the pilot’s responsibility to fly the plane correctly—but the fact is, pilots have a limited amount of skill and focus, and can’t overcome any arbitrary failing of technology. So we rightly attribute the problem to the design of the system, of which the human is only one component.

This grading software is like the 737 MAX—it’s software that, as part of a complete system including non-software components like humans, does a bad job and needs repair. The 737 MAX reports listed something like NINE different root causes.

I don’t understand this absolutist viewpoint that the human bureaucracy is the ONLY thing that you need to protect you from bad software. There are multiple root causes, and the bad software is one of them.

> Hence why the humans should be held responsible for not addressing bias in their system. And why the actions of an algorithm should be the responsibility of its creators.

So you’re saying that there’s a problem with the software, and that we shouldn’t place all the blame on the college administrators? Isn’t that what I’m saying?

> But if you believe some people have the right to have guns while others don’t, I’m hard-pressed to see any determining factor other than who has the gun.

I do believe that not everyone should have the right to own guns, but if you’re interested in arguing with me about it, I won’t engage. If the comparison doesn’t work for you, think of something less emotionally charged like the 737 MAX or the Tesla Autopilot—both are scenarios where we rightly cite the software / automation as a root cause in accidents.


> I don’t understand this absolutist viewpoint that the human bureaucracy is the ONLY thing that you need to protect you from bad software. There are multiple root causes, and the bad software is one of them.

There are multiple intermediate causes, and all of them are the responsibility of the human bureaucracy—including, to the extent it contributes, the selection, use, and failure to correct bad software—and all of them stem from one root cause, to wit, that the bureaucracy faces insufficient consequences for its failures and thus lacks the motivation to do its job well.

Now, if the analysis were being performed on behalf of the bureaucracy because it had decided to do its job, rather than as part of a discussion outside of it, then the causes that are intermediate from a global perspective would be root causes, sure. Context matters.



