Did you come up with the list, or do you actually know what that team did? I think only the latter provides a valuable entry point for discussion. In the case of the former, we end up fighting each other's imagined scenarios, and imagination is limitless. It sure leads to a lot of discussion, none of it bound by reality though.
It would be nice to know what the team actually tried to do, did do, and achieved, but that would require an insider daring to make public comments that are potentially traceable (by their contents alone), so it's unlikely. Without actual facts the discussion will just end up a free-for-all.
I think it's safe to assume that the team did the thing it was assigned to do and was named for. It certainly makes more sense to discuss that than to announce that we have no ability to discuss anything unless we were personally both on the team and managing the team, and were involved in the decision to cut it.
edit: I'm sorry, we do get to discuss how they were probably wrecker nutcases stifling people who actually build things in order to make up for their own inadequacies and inability to do the same. It's only assuming that the ethics team worked on ethics that is out of bounds.
An awful lot of people are making comments based on the assumption that such a team exists only to invent problems. It's worth at least one person interjecting that Facebook is causing at least some problems and that a team like this could have a place, even if nobody knows precisely what this team did.
Many people are taking it for granted that Facebook should have no interest in reducing harm. I'm glad somebody pushed back on that.
Funny that the "you don't actually know" critique comes in response not to the nakedly disparaging post that kicked this off, but the comment arguing for responsible software development.
We of course don't know enough of the specifics, because Facebook works hard to keep it that way. But we do know that Facebook has a body count. If you're looking for a "valuable entry point for discussion", maybe start with Facebook's well-documented harms.
And because I Ctrl+F-ed it and couldn’t find anything, one of those documented harms is the Rohingya Genocide. Putting this here so that we know what we’re talking about.
Seeing devs non-ironically complain about internal departments like this one, which was set up to keep that from happening again, saddens me. No, productivity and work are not more important than making sure that work doesn't enable a genocide campaign in a specific corner of the world.
Yeah, when I say that Facebook has a body count, I'm not kidding. Facebook touches people's lives in so many ways that it's hard to even estimate the total number. But it seems the barest ethical minimum to say, "Hey, is there a way to be sure this next feature is responsible for zero deaths?"
> "Hey, is there a way to be sure this next feature is responsible for zero deaths?"
What makes you think this is possible? I don't see where Facebook is particularly responsible here. Telephones and radios have been used to coordinate assassinations and genocides. Movies have been used to justify invasions. Why isn't anyone burning effigies of Alexander Graham Bell, Guglielmo Marconi, and Thomas Edison?
Sorry, explain to me how Marconi directly profited from the Rwandan genocide?
In any case, perfection is rarely possible but often an excellent goal to aim for. For example, consider child car fatalities. We might not immediately get it to zero, but that is no reason to say, "Fuck it, they can always have more kids."
I think people fundamentally disagree here. I would attribute the entire (read: 100%) responsibility for the Rohingya Genocide or other similar events to the perpetrators. Facebook is just another tool, and its creators bear no more blame for the actions of their users in the real world than the manufacturers of the vehicles driven by the Burmese military.
Responsibility is not zero sum. The people who pulled the triggers? Responsible. The people who gave those people the orders? Responsible. People who told them where to find the people they killed? Responsible. Arms dealers who sold them the guns? If they had any inkling that this was a possible outcome, then they share in the responsibility. The people behind the legacy media efforts that whipped people into a frenzy? Responsible. And so on.
Facebook is not like a screwdriver, a simple, neutral tool that is occasionally used to harm. Facebook is an incredibly complex tool for connecting humans, a tool with all sorts of biases that shape the nature and velocity of social interaction.
People have known for decades that online communication systems enable harm. This was a well-understood fact long before Facebook existed. Facebook is morally responsible for that harm (as are the perps, etc, etc). Something they understand perfectly well because they do a lot to mitigate that harm while crowing about how socially responsible they are being.
You might disagree with most ethicists on this, as well as with Facebook itself. But you'll have an uphill struggle. Even the example you pick, vehicles, doesn't work, because car manufacturers have spent decades working to mitigate opportunities for the tools they create to cause harm. Now that cars are getting smarter, that harm reduction will include preventing drivers from running the cars into people.
Responsibility is zero sum, or any conversations about it are meaningless. This is quite handily illustrated by your comment actually - where does it end? How much of an "inkling" must people have to carry responsibility? It doesn't sound like a question of fact exactly.
The only way to objectively agree about responsibility is to use an actor/agent model, where actors are entirely responsible for their actions, and only actions which directly harm others are considered. Otherwise we're discussing which butterflies are responsible for which hurricanes. I'm happy to be wrong here, but I just don't see an alternative framework that can realistically, objectively, draw actionable boundaries around responsibility for action. This by the way is the model that is used in common law.
Facebook being a complex tool strengthens my point. Should providers of complex tools be responsible for every possible use of them? Is it not possible to provide a complex tool "as is" without warranty? Wouldn't constraining tool makers in this way be fundamentally harmful?
> online communication systems enable harm... Facebook is morally responsible for that harm
Everything can be seen to "enable harm". Facebook being morally responsible is not a statement of fact, it's an opinion. Facebook's mitigation actions exist a) to evade or delay regulatory action, b) to maintain their public image, or c) because of a small group of activist employees. Only a) and b) align with their fiduciary duty to shareholders.
> Responsibility is zero sum, or any conversations about it are meaningless.
That's incorrect. I'd suggest you start with Dekker's "Field Guide to Understanding 'Human Error'", which looks at airplane safety among other domains. Airplanes are only as safe as they are because many different people and groups see themselves as responsible. Your supposedly "objective" model, if followed, would drastically increase deaths.
"The National Transportation Safety Board is an independent federal agency charged by Congress with investigating every civil aviation accident in the United States and significant accidents in other modes of transportation — highway, marine, pipeline , and railroad."
The reason airplanes are safe is because the government is doing its job to regulate the space.
Prior to federal regulation of aviation, which began with the Air Commerce Act of 1926, aviation accidents were common [1]. Congress determined that it was necessary to establish an investigation and regulation framework at the very start of the era of commercial aviation, and this has been enormously successful. Many times Congress does not act fast enough to prevent harms (meat packing, banking, pharmaceuticals), but when it does get around to doing its job, safety improves. Individual companies must be compelled to act subordinate to federal or state regulatory frameworks, and not act as vigilantes.
I'm not saying the NTSB isn't important. But it's far from the only reason we have fewer crashes. Government regulation can be helpful in improving standards, but it sets a minimum bar. Aviation's high level of safety today is a collaboration among many, many people: starting with individual pilots, engineers, and maintenance techs, going up through collaborative relationships, companies, and civil society organizations, and on up to national and international regulatory agencies. All of these people are taking responsibility for safety.
From your comments in this thread, I think that you believe that safety is a collective effort and that there is no single individual or entity directly responsible for enforcing a culture of accountability, is that right?
If so, how do you explain the catastrophic failures in construction, food safety, and banking prior to the top-down government oversight of those industries?
That is not in fact how I'd sum up my thoughts. "Enforcing a culture of accountability" is valuable, but depending on context it is neither necessary nor sufficient for safety. Food safety is an obvious example: people still get sick from food, and plenty of restaurants and manufacturers would never cause illness even if government oversight were to vanish.
Every aviation accident post mortem I've read finds a finite list of causes and contributing factors. Each system or person has a clear owner responsible for making the recommended changes. These post mortem reports also explicitly do not assign civil or criminal liability; that is a zero-sum process.
Liability is indeed strictly zero-sum. But you're confusing that with moral responsibility, which isn't.
They serve two different purposes. The former comes out of a zero-sum, adversarial setting where the goal is to figure out who pays for a past harm. The latter comes from a positive-sum collaborative setting where everybody is trying to improve future outcomes.
If I release a new product tomorrow, I'm responsible for what happens. As are the people who use it. But if somebody dies, then liability may be allocated only to one of us or to both in some proportion.
"Responsibility" is semantically a bit nebulous, but seems to me much more related to "liability" than "continuous improvement". The question "Who is responsible?" reads a lot more like "Who is liable?" than "How did this bad thing happen?". If you release a new product, you may be accountable to your org for how it performs, but (IMO) you're not morally responsible for the actions of others using it. If your new product is a choking hazard you're not guilty of manslaughter.
> If your new product is a choking hazard you're not guilty of manslaughter.
But you are still imho (morally) responsible for the deaths occurring out of the use of your product (this is where we would probably disagree). Even if you were not legally guilty.
I like another example that, for me, clarified the distinction between these concepts.
Imagine one morning you open your front door and find a baby that was placed there sometime during the night. The child is still alive. You are not guilty in any way for the baby's fate, but now that you have seen it, you are responsible for ensuring that it gets help. You would be guilty if you allowed it to freeze to death or similar.
> You would be guilty if you allowed it to freeze to death or similar.
This significantly varies by jurisdiction, and isn't settled at all. I don't think being present makes you responsible either. Unappealing as it may seem, you should indeed be able to pull up a chair, have some popcorn, and watch the baby freeze. People should only bear obligations they explicitly consented to. I don't think anyone has the moral authority to impose such an involuntary obligation on anyone else.
Modelling society as a constrained adversarial relationship between fundamentally opposed and competing groups is more accurate than assuming there is "one team" that knows a fundamental "good" or "right" and that the rest of us just need to "see it". People who perform honour killing or preside over slavery are just as sure of their moral superiority as you are. What we need is a world where we can coexist peacefully, not one where we are all united under one religion of moral correctness.
All communication systems enable harm, and more generally all systems that allow people to interact enable harm. In the US, the true responsibility for regulating harms lies with the duly elected government exercising its regulatory powers on behalf of the people. It does not lie with the unelected, unaccountable members of Responsible Innovation Teams and Online Safety Teams. This form of tyranny persists because the majority of our representatives established their power bases before the advent of the Internet. Hopefully, in the next decade or two, we will be able to effectively subjugate and regulate the King Georges of the large social platforms.
There are laws that put the onus on banks to proactively determine that their services aren't being used to fund terrorism, and there are multiple funky/opaque processes in banking specifically for that purpose. It kind of makes sense to me to hold social media companies to a similar standard: that they are not used to organize terrorism or other war crimes.
I'm not sure I agree with these laws. They are very difficult to actually enforce to an objective standard. They also transfer the burden of law enforcement away from police departments and on to private organizations. What it translates to in practice is a bunch of (mostly irrelevant) mandatory training for employees, and an approval from an auditor who isn't very familiar with the business. I think police (and no one else) should do policing.
> What it translates to in practice is a bunch of (mostly irrelevant) mandatory training for employees, and an approval from an auditor who isn't very familiar with the business.
In the context of ensuring a bank doesn't transfer money to terrorists, this is completely wrong. Banks have a whole list of operations and processes, and failure to follow them is enforced by actual jail time. This is why "know your customer" exists in banking. In the context of terrorism, there is no police enforcement regarding terrorists; often, we are talking about military enforcement.
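To give a flavor of what those operations look like, here's a toy sketch in Python. All names are invented; real compliance systems use fuzzy matching against official sanctions lists (e.g. OFAC's SDN list) and file reports with regulators, though the $10,000 figure does match the US currency-transaction reporting threshold:

    # Toy sketch only: real screening is far more involved than
    # exact string lookups, and reports go to regulators, not stdout.
    SANCTIONED_ENTITIES = {"Example Front Co", "Known Bad Actor"}
    REPORTING_THRESHOLD = 10_000  # USD; large currency transactions trigger reports

    def file_report(payee: str, amount: float) -> None:
        # Stand-in for filing a currency transaction report.
        print(f"reporting transfer of ${amount:,.2f} to {payee}")

    def screen_transfer(payee: str, amount: float) -> bool:
        # Return True if the transfer may proceed.
        if payee in SANCTIONED_ENTITIES:
            return False  # block and escalate to compliance
        if amount >= REPORTING_THRESHOLD:
            file_report(payee, amount)  # proceed, but report it
        return True

The point being: the bank is obligated to run checks like these on every transaction, and skipping them carries personal criminal liability.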
Yes, but my point is that "transferring money to someone" is not a true crime. It doesn't have a victim. And yes, our governments should use military/diplomatic channels to fight terrorism directly - that's what they're for.
> Yes, but my point is that "transferring money to someone" is not a true crime. It doesn't have a victim.
This is also incorrect. Transferring money to a terrorist organization is a crime because countries have declared it illegal. And of course well-funded terrorists have victims; enabling that funding has clear victims too.
And yes, the government is using military and diplomatic channels to fight terrorism directly, by ensuring the resources they have access to are limited.
There are two possible things you could mean by that.
One is that you think a lot of things just shouldn't be enforced, and that we should allow a lot more harm than we do now. Genocide?
The other is that you think we should have a lot more police to take over the harm-reducing regulatory actions now in place: that we should take the tens, maybe hundreds of thousands of social media moderators now working, make them government employees, and give them guns.
I don't see why people working in an office need guns, but yes, enforcement of laws should be done by... law enforcement. This isn't too controversial really, if I make a credible threat to someone online, it's a criminal matter for the police. Just as if I had sent it in the mail. The same should be true for all other types of crime (fraud, money laundering, etc.). Police should (and do) conduct investigations and arrest offenders.
Social media moderators exist to protect the public image of the platform, and enforce community guidelines. They should not be burdened with law enforcement simply because we can't be arsed to do proper police work.
Facebook could have chosen to be a completely neutral platform. They could have followed the ISP model, making Facebook another conduit like email, RSS, or HTTP. They just had to not make editorial decisions: leave the feed sorted by recency, and only remove illegal material. This is what safe harbor provisions assumed a company would do, allowing platforms that simply pass information between parties to avoid liability for that information.
But they wanted to be valued higher, so they explicitly chose to instead step into the editor's role and became responsible for the content on the platform.
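To make the distinction concrete, here's a minimal sketch in Python (hypothetical names, nothing from Facebook's actual code) of the difference between the two models. The moment you rank by anything other than time, you are making an editorial judgment about what gets amplified:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        text: str
        created_at: datetime
        predicted_engagement: float  # output of some hypothetical ranking model

    def neutral_feed(posts: list[Post]) -> list[Post]:
        # The "dumb pipe" model: newest first, no judgment about content.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

    def curated_feed(posts: list[Post]) -> list[Post]:
        # The editorial model: the platform decides what you see first,
        # and thereby takes responsibility for what it amplifies.
        return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

Safe harbor was written with something like neutral_feed in mind; curated_feed is the role Facebook chose instead.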
Here’s a good example where engineers needed a team like this:
- should we name this device we want to put in every house after one of the most common American female names?!
engineers and CEO: I see no issue with that!
Several million people named Alexa now have everyone from toddlers to their friends yelling their name, ordering them to do stupid tasks and shouting "Alexa stop" repeatedly.
The name cratered in popularity for good reason.
Yet Amazon still has not renamed their dumb speaker
I think you're misplacing blame for this; I don't think the engineers are the root cause of the problem. Why don't these devices let people set their own custom codephrases? I suspect that wouldn't fly with management/marketing etc., who want to create a marketable brand out of the codeword. In fact, I'm virtually certain that engineers at Amazon weren't the ones who chose the name "Alexa" in the first place; that decision probably went through a dozen squared meetings and committees of marketers and PR people.
Alexa was only the 32nd most popular name in the US for girls. A little over 6k babies were named Alexa in the US prior to the speaker's launch.
The "Alexa stop" thing, is it a real or invented harm?
My name happens to match the lead character of Karate Kid, and I was constantly asked to do the crane pose when I was 7. It doesn't seem to have traumatized me.
> Alexa was only the 32nd most popular name in the US for girls.
32nd most popular is not exactly obscure. Why did they have to give the computer a human name in the first place? Probably because it helps people form some sort of parasocial relationship to the product, which is gross, but probably good for business.
I used to work there, starting after the launch (and therefore after the name was chosen). One of the reasons given was that "Alexa" was a word distinctly well suited to reliable recognition by the ASR model embedded in the device software.
The popularity of the name cratered after launch. That doesn’t signify anything to you?
That anecdote is nice, but honestly it sounds like you survived a milder, less frequent, more temporary version of something only somewhat similar, and are now declaring that everyone else needs to get over themselves?
For illustrative purposes: "Dan stop! Dan stop! Dan set timer for 50 minutes." "Timer set for 15 minutes." "Dan stop! Dan cancel timer! Dan set timer for Fiifttyy minutes. Dan turn off kitchen light. Dan set thermostat to 68. Dan play music."
Your name is now Kleenexified to mean robotic female servant, no harm!
It doesn’t seem like you googled or looked into how people named Alexa actually feel before pronouncing how they should feel.
This comment chain really shows why these responsibility vetting teams are needed: a lot of corporate workers are not empathetic or considerate beyond their immediate siloed task, and assume everyone should react exactly the same way they did to only tangentially similar experiences.
I'm pretty sure that a Product Manager made the final call on the name of the device. Some DS nerds might have given a list of names that could be used and presented some stats on the accuracy of the device recognizing the name, but the PM probably made the final call.
If I consult with you on how to kill my wife in a responsible way, I hope you'll tell me that there's no way for me to kill my wife in a responsible way.
My experience is that most people in engineering organizations are not sociopaths, but some are.
The problem is trying to get people who think everything you listed is obvious and boring to spend 40 hours a week staying out of the way 99% of the time, while also being politically savvy enough to get a few CxOs fired each year.
Also, since the office is normally doing nothing (unless the office has already completely failed), the people in it need to do all of that, and continuously justify their department’s existence when their quarterly status updates range from “did nothing” to “cut revenue by 5% but potentially avoided congressional hearings 5 years from now” to the career-killing “tattled to the board and tried to get the CEO fired”.
If you know how to hire zero-drama, product- and profit-focused people that can effectively do that sort of thing, consider a job as an executive recruiter.
> [getting] the Facebook dating team’s ... to avoid including a filter that would let users target or exclude potential love interests of a particular race
you see, you have to just ignore those people in the feed, you can't filter them, it's better and not racist that way. and who knows, you might become not racist if you see a pretty girl/boy you like, but actually that's probably just racist fetishizing
responsible innovation is doing the same DEI doublespeak
That's exactly why I'm skeptical of whether a team like this could have been addressing real harms. If I heard about a nutrition team at Frito-Lay, I would assume they're working on nonsense until proven otherwise, because how could you meaningfully improve nutrition under the constraint that your company needs to sell lots of potato chips?
We have verifiable evidence of Facebook as a platform being used to instigate genocide [0], among other issues. Dismissing a concern that a platform could be used for harm against children as fallacious reasoning is itself a fallacy fallacy if you have no additional points to add as to why you feel that is relevant.
I'm no fan of Facebook but I have a hard time understanding why Facebook is singled out for this. If what FB did is illegal, then they can be charged for their crimes.
However, if we're criticizing from a purely moral standpoint, how is this any different from claiming that cell phone carriers should be preventing this type of thing over phone calls or texts?
For the record, I don't find that to be a convincing argument either but it's the inconsistency of perspective that irks me.
The Rwandan genocide was spawned by radio propaganda from RTLM. Classifying social media as especially harmful to children when damage can be done by any sort of mass media is disingenuous.
- should we do this?
- who do we hurt by doing this?
- oh god people are hurting why are we still doing this?