
It's not "the machine". It's that people have genuinely variable ability to conceal dangerous items. You have to check the wheelchair of people who come in on a wheelchair, and you don't if they don't have one. That could be seen as unfair to the handicapped, but not checking the wheelchair isn't really an option.


I mean, it is an option.

The machine is one part of a hugely flawed system, and I think it's totally valid to argue that we shouldn't be substituting a machine for, I dunno, actual human judgement.

The machine in this case doesn't seem to be demonstrably solving any problem, but it is disproportionately flagging black people, people with hair coverings ('turbans' in the article), etc. as suspicious, and that flagging is being used as part of a procedure to single those people out for further scrutiny. I'm not seeing a lot of defensible explanations for why it should work that way, besides "eh, that's just how it is".


I don't read most of the comments here as defending the existence of the system; rather, they're defending it from the accusation of deliberate malice. The system itself is deeply stupid security theatre, but was it deliberately designed to hassle black women or anyone else with dreadlocks?


Sure, and personally I'm not saying they literally set a bunch of engineers loose to build a machine that explicitly targets black people. I am saying that they probably didn't put the same level of testing into it as they did into making sure it didn't disproportionately target white men in business suits, back when they were pitching investors and getting senators / the people who sign the checks to buy these things.

Or maybe they just didn't think about testing the machines against the diversity of actual people who go through an airport daily vs. whoever was available in the office at the time to test with.

Maybe they honestly believe that folks with natural black hairstyles, or who wear a 'turban' or ponytail, could be disproportionately hiding something suspicious up there, and we need a machine to tell us to check them out more often.

Hell, maybe they just didn't care at all who got flagged, as long as it didn't inconvenience them too much and they got paid at the end of the day. I mean, they sure as shit don't mind the reality that this really does seem to be a lot of security theater with little benefit, besides the benefit to some people's wallets.

Whether you want to define any of that as deliberate malice, willful ignorance, laziness, the banality of evil, or something else is entirely up to your own values, I suppose.


> was it deliberately designed to hassle black women or anyone else with dreadlocks?

Does it really matter when that's the end result and there is no intention or effort to fix it? Harm is harm; whether it was intentional or not doesn't really matter when it's ongoing.

If your house is on fire, the first concern is to make it not be on fire; only after that does the question of intent have any relevance. And if somebody wilfully attempts to stop the process of making the house not be on fire, they should be considered intentionally harmful and no different from a pyromaniac until proven otherwise. Either way, they should be removed post-haste.


How do you fix it? Program it to ignore dense hair on a black person and only alert if the subject is a white male?


Yes, clearly that is what everyone is saying. :-P

Maybe:

- Don't deploy a system that has this many false alerts (and that is what even the TSA calls them; they are not interested in finding dense hair)

- Stop using this stuff and spending money on it if we aren't seeing any benefit and are seeing demonstrable harm.

- Don't replace human judgement with a blind procedure that must be followed, when that procedure depends on a known-flawed technology that isn't providing any demonstrable benefit.

- Accept that even if you want this kind of tech to exist, it isn't there yet, and you aren't going to "fix it in code" by adding a race variable.

Bottom line is, nobody is forcing us to use these machines / procedures. We aren't stuck with them, obliged to figure out some way to patch the code or else we're all doomed, so we don't have to be so reductive about how we fix it. We could just say this was a failed idea and throw it the hell out.



