Ultimately this reads as straight censorship by Google. Or if not by Google, by Google employees, with the tacit approval of their higher ups.
They didn't like the viewpoint she expressed, didn't like the criticisms she raised, so they blocked her (well, her and her 7 co-authors') paper. When she said that was unacceptable and stood her ground, they badmouthed her and fired her.
You don't have to agree with the paper's criticisms (and it appears they were just one part of a longer paper) to be concerned by viewpoint censorship. If the paper wasn't worthwhile or based on facts, that would have come out in academic review: in peer review of the paper itself, or in subsequent papers rebutting it or pointing to subsequent changes. That's how academic inquiry works.
But if companies can silence ethics researchers who express concerns, whose job, as AI ethicists, is to express concerns, that fundamentally undermines academic inquiry into the topics at hand.
Man, people really do want to have their cake and eat it too. If you want to publish research freely join an academic research lab, if you value money join an industry lab. You can't have both of those things.
"We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications."
If a megafarm hires you to write papers for the Ethical Milk Production team, any modicum of social awareness will tell you that they don’t actually want you to write a paper about the ethics of animal products.
If you, a large corporation working in AI, hire a prominent and vocal AI ethicist, any modicum of awareness should tell you that they may actually have a sense of ethics.
True - though I'm guessing there are a lot more vocal ethicists who will tone it down for money than there are large AI corps who are actually willing to be honest about AI ethics! Anyway, it sounds like this was a bad fit all round for both parties, and this was the only possible outcome long-term.
But if that megafarm says "We believe that AI should: 1. Be socially beneficial." then we can point to that when their behaviour is not consistent with it, no?
There are a lot of research resources - compute, tools, data - that you can't access from academia. The action in AI is in industry, because AI needs data and industry has it.
This is utter nonsense. If you looked outside her Twitter feed and at the evidence, it's clear she was fired because she was toxic. Her threat to leave was her own fault, and it gave Google the opening to move forward. Look at her interactions with respected people like LeCun. She has issues with disagreeing with people. She may be qualified, she can have opinions, you can even be the best engineer - but if you are an asshole when it comes to disagreeing, then no one will want to work with you. Everyone is replaceable.
She is highly toxic, not just in general, but specifically toward Jeff Dean, who is her manager's manager. Frankly, behaving that way toward anyone is not okay.
Reading between the lines, she was absolutely fired for her toxicity. This event was just the last straw.