“In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts.”
If true, that is damning, and would demonstrate once again that being an “ethics researcher” does not mean that you are any more ethical than the average person. It just means you are more interested in the subject.
As a side note, I wish this field were more interested in meta-ethics than it is in forcing machines to abide by the personal ethics of the humans involved.
Leaking sensitive material is not unethical by definition; some situations might impose a moral imperative to act in this manner.
In the narrow ethical world of top down organizations, be they for profit corporations or the Army, it's of course a mortal sin, as is every other attempt to bring accountability to upper levels and redirect the flow of bullshit from top-down to bottom-up. I'll let you guess who decides what "ethics" stands for in this situation.
> Leaking sensitive material is not unethical by definition; some situations might impose a moral imperative to act in this manner.
Doesn't apply in this case. Snowden acted for the country, not for his dear friend who was fired in a scandal. Her supporters on Twitter raised hell and even made a cancel list of Twitter users (AI researchers) who dared oppose their comments (Anima A.).
Had it been an independent or much less related third party I'd tend to agree. However, with Mitchell's relation to and defense of Gebru, I'm not terribly sure I'd call this ethical, unless one is completely able to suspend their own biases. Getting caught doing this is bad judgement either way.
A large part of them believe that a researcher has to be an activist, i.e. research = activism. Many recent textbooks on social science present the following as credible research:
"Emancipatory research: Research that exposes underlying ideologies in order to liberate those oppressed by them."
(Zina O'Leary textbook, used by many schools, over 2K citations for multiple editions)
So if a corporate behemoth such as Google wants to ethicswash itself by hiring individuals with the above approach, what outcome did it expect?
It was clearly a mistake. They don't act in a constructive way.
Let's take Timnit's paper for example. She found out that Google had been using a biased language model in search. The bias was along the lines of 'male' associating with 'doctor' and 'woman' with 'nurse'. But she didn't show how this harms anyone in a concrete way; it stayed theoretical. And she offered no solution, just criticism of the work of others, using her paper as a soapbox to raise scandal and make herself holier than thou.
While I am not a fan of Gebru's handling of her own firing, this is a mischaracterization of her work.
She routinely offered suggested methods, experiments, and even new datasets [1] that fixed what she saw as wrong. She does have a fair bit of practical ML experience as shown in the computer vision papers in her publication list.
1. Buolamwini, Joy, and Timnit Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification." Conference on fairness, accountability and transparency. 2018.
Of course I was aware of her Gender Shades paper. It's a small bias evaluation benchmark dataset.
> We developed the Pilot Parliaments Benchmark (PPB) to achieve better intersectional representation on the basis of gender and skin type. PPB consists of 1270 individuals from three African countries (Rwanda, Senegal, South Africa) and three European countries (Iceland, Finland, Sweden) selected for gender parity in the national parliaments.
A dataset of 1270 images is hardly a breakthrough; it's the kind of thing I'd expect from a small university project. It doesn't lead to better models because it's nowhere near large enough to train on. What it can do is rate existing models, roughly as in the sketch below. Basically: useful for critique, not for improvement.
A small nitpick: why just two races in a de-biasing dataset? Where are the Asians?
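To make the "rate existing models" use concrete, here is roughly how such a benchmark gets consumed; a minimal sketch with made-up data (the column names are hypothetical, not PPB's actual schema):

    # Hypothetical sketch of benchmark-style auditing: score an existing
    # classifier per subgroup instead of training on the benchmark.
    import pandas as pd

    # Made-up per-image results from some deployed gender classifier.
    results = pd.DataFrame({
        "skin_type": ["darker", "darker", "lighter", "lighter", "darker", "lighter"],
        "gender":    ["female", "male",   "female",  "male",    "female", "male"],
        "correct":   [0,        1,        1,         1,         0,        1],
    })

    # Intersectional accuracy: one number per (skin type, gender) cell.
    print(results.groupby(["skin_type", "gender"])["correct"].mean())

A few hundred labeled images are plenty for this kind of audit, which is consistent with the point above: it can grade a model, but it can't retrain one.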
Under a hard-realist approach, a corporate entity would fund an independent body of research, and then re-use the publications that match its agenda.
Hiring someone (perhaps hoping to buy their loyalty?) entails risks. Instead of pulling funding from the "independent think-tank", Google now finds itself in the midst of a potential discrimination/political scandal involving an employee.
> But she didn’t show how this is harming anyone in a concrete way.
To roll with the example “man” ~ “doctor”, “woman” ~ “nurse”, the harm is having a giant and widely used search engine reinforce baseless gender biases; there is no underlying reason why women should be nurses and men doctors. What is the harm, you may ask? The harm may be subtle, e.g. being surprised when you find out your next doctor is a woman or your next nurse is a man. It could suppress career choices and aspirations, and it could even be financial, e.g. reinforcing systemic pay gaps.
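As an aside, this kind of association is directly measurable on pretrained embeddings. A minimal sketch, assuming gensim and one of its standard downloadable GloVe models (the word choices are just for illustration):

    # Probe gender association in pretrained word vectors.
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")  # ~66 MB download

    for word in ["doctor", "nurse", "engineer", "teacher"]:
        # Positive = closer to "he", negative = closer to "she".
        score = vectors.similarity(word, "he") - vectors.similarity(word, "she")
        print(f"{word:10s} {score:+.3f}")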
That particular "bias" isn't actually a bias, it's an accurate learning about the distribution of genders between jobs in the real world. Very few professions have an exactly 50:50 balance of men and women. Most are tilted towards one gender or the other. The purpose of a correct search engine is not to reduce my "surprise" at arbitrary events but rather to give me the information I'm looking for, which will more often than not be questions about the real world - not the fever dream of some hard-left activist.
Indeed. To intentionally skew the data such that, for example, men are over represented as nurses, is in fact introducing bias to the data based on prejudice.
You’ve essentially created a fictional data set because it’s biased due to the underlying prejudice (preconceived opinion that is not based on reason or actual experience) that men ought to be nursing more, despite that not being reality.
We’re in a strange situation where we have large concerted efforts by activists to inject fiction into our facts (whatever the medium) with the aim of distorting perceptions in such a way as to somehow correct what they perceive to be injustice in the real world.
These kinds of hypothetical effects should be documented in concrete cases by a good ethics researcher instead of just described from imagination. For example, when someone searches for a doctor, let's say a male comes up first; who stops at the first Google result? They would probably need to go deeper and read about the doctor's experience and find patient reviews.
Part of the issue is that the harms of gender bias (and other types of bias) should not need to be made explicit, but part of the research canon. Should a security researcher outline the harms of an attacker obtaining user credentials, or is our imagination sufficient because the harms are well known to us? And if you were looking for more in depth studies, then there is a ton of published research, maybe not all of it on arxiv or in machine learning journals.
At some point imagination has to make contact with reality, otherwise it can become unhinged. Yes, security researchers can enumerate concrete cases where "the harms of an attacker obtaining user credentials" caused damage.
>A large part of them believe that a researcher has to be an activist, i.e. research=activism.
A large part of them believe in other delusions such as "silence is violence", coming to work on time is "Whiteness", and "objective, rational" thinking is normalized racism.
In their deranged minds, the ends justify their means, so they can be the antithesis of their being because their crusade is holy, and just.
That's the complete opposite of being a researcher.
I feel like your first two sentences are conflating the belief that research has to involve activism, and the belief that activist pursuits are worthy research.
Why is the aim of exposing ideology not worthy of being researched in your opinion?
Some concerns for social science involve the lack of replicability, publications thus becoming literature, with "credibility" often established by in-clique circular citations. Emotional and political coloring also come to mind. There is a position in social science that all research is political (Frankfurt school and those that oppose it, among others).
The belief that research has to involve activism has been demonstrated by the individuals under discussion. I cite that this is taught as a valid position in the field.
Political motivation of scientific activity is worthy of research. Both sides involved in the conflict at google may be described as demonstrating political, and apparently opposing, viewpoints.
True, they only support their own political faction under the guise of "identity politics", not all people.
For example, Andrew Ng was deemed "white adjacent" because he was the only non-white person in an article about the history of AI. Asians are not favored by this group of activists.
I really have no problem with people being both subject-matter experts and activists. Chomsky is the prime example: world-class linguist and outspoken leftist intellectual. But the one mistake he never made was to try to establish 'leftist linguistics'. Contemporary 'AI ethics' feels like leftist activism with a very superficial understanding of machine learning technologies. Versions of "ML researchers are white males, so their creations inherit their biases"[1] are objectively wrong. Biases exist, but they come from the human-annotated training data fed into ML systems, not from the gender of the programmer.
I hope they are. You would not want a fire brigade that's only theoretical about putting fires out.
Complaints that social scientists are activists, that they are not supposed to actually develop solutions to what they study, seem to have increased in recent years. It's bad logic. The idea that a sociologist studying the effects of poverty shouldn't be interested in solving poverty is mind-boggling.
More pressingly, I've found that this line of thought, that social scientists shouldn't be activists, is an idea that's been drummed up by the so-called 'intellectual dark web'. This rag-tag team of pundits has propagated a lot of conspiracy theories about 'cultural Marxism', the 'Frankfurt school', etc. It feels like an attempt at policing the content of social research under the cover of conservative/Christian propriety.
> people who bemoan that social scientists are activists
I'd prefer they label themselves as partisan ethicists or activists.
> , that they are not supposed to actually develop solutions to what they study
On the contrary, they should develop solutions, not just scandals. The problem is with activists who just want to criticize without contributing a solution. I suspect they are more interested in making a name for themselves and using ethics as a club.
Dismissing researchers as mere activists is a criticism made from a place of ignorance. Maybe you should learn a bit more about the topic before spouting off?
I read her paper, her Tweets, the press and almost all the conversations on this topic and that is the conclusion I ended up with. She's making a career out of trashing people who have made real contributions to the field, while she has generated mostly critique without any actionable insight, new breakthrough or solution.
Yes, a fire brigade needs to put out the fire. The analog to that in the social realm are non-profits, democratic policies, and grassroots activism. But fire brigades would be really bad at putting out fires if we hadn't studied them scientifically since about the 17th century. We would not know the difference between electrical fires, fires involving oil, and bush fires. Today, woke academics declare some social 'fires' to be bad, others to be necessary, and some to be underrepresented, instead of asking what causes them. I doubt this will lead to a coherent and ultimately actionable understanding of reality.
Fire brigades didn't stand by idly studying from the 17th century until now to act against fires.
Ethics is an old field, and also one that's been applied for a long time. It changed too. For the better even -- since social Darwinism was seen as ethical to some extent at the start of the 20th century.
If you hired a researcher to find out the most effective strategies for putting out fires, you might actually want a purely theoretical researcher. Advocates for air drops might be blind to firebreaks and vice versa.
There is also a reason to be cautious if a sociologist studying the effects of poverty were basing their suggestions on what would fix their own poverty.
Except social sciences aren't 'purely' theoretical. It's why it's sometimes called human science. There is a prominent human, and humane, part to it. It's a study of the human condition to some extent and it can't be separated from it.
As for your second point, that's true, but it's also why research is always open to criticism. Casting social scientists and ethicists in this case as activists feels like a political swipe.
It’s not necessarily unethical to do that if you believe you are uncovering or preventing the cover up of unethical behaviour. Ethics is more than following the rules. See whistleblower protections for where that sort of thing intersects with the law.
Well yeah, but at the same time, exfiltrating internal company data is still grounds for dismissal. It could be ethical and the Right Thing to Do, but you still crossed a boundary.
She's lost her job; if she's lucky that's the end of it. She can probably sue to get her job back or win compensation, depending on whether a judge or jury rules that leaking the data served a higher purpose, but it's debatable.
I just don’t see much reason to believe that she has the kind of evidence that would justify calling her a whistleblower. I could be wrong, and if she releases emails of Sundar being racist or something I’ll be the first to admit I was wrong, but it seems a lot more like she was just angry over the Gebru firing.
I'm not sure how one excuses the other really (especially since the other is a corporation of 100k+ people). This tit-for-tat behaviour really spirals into destructive retaliations which are bad for everyone.
The benefit of what doubt? No matter how much you distrust Google, we can’t really presume they’re guilty of hypothetical accusations that haven’t actually been made.
Then again, the activists have proven to be pretty dishonest themselves. Timnit lied about who fired her, hid the fact that she gave an ultimatum, and has since dedicated time to publicly smearing and attacking everyone at Google, including listing people who should be fired (by name!) on Twitter. Not to mention the abusive behaviour she showed towards the FB head of AI on Twitter, who stopped posting as a result. She never apologised, although she demands apologies from Google coworkers.
The other activist was fired when she deployed political messaging code in production while hiding the whole process from her team and manager.
Do they strike you as people who will honestly present their story and be good to work with? Or as people who happily lie and fudge the truth to drive their agendas?
Because in my experience people who act like this, no matter what skin color they have, are corrosive and abusive to work with.
Google is doing something unethical now, even if it was doing nothing unethical before: it decided to name and blame an employee on a personnel matter before completing the internal investigation that would establish the facts. There's a good reason why, even under direct questioning, with the most serious internal indications but an investigation still ongoing, companies simply decline to comment on personnel matters.
Of course, if they weren’t trying to poison the well about an imminent revelation of some greater unethical behavior on their part, they probably wouldn’t have engaged in the obvious unethical behavior. So...
Isn’t that a one-sided expectation though? Timnit Gebru publicly tweeted about this second activist losing email access. By calling Google out in such a public manner, I feel like Google is forced to respond publicly, to set the record straight and prevent the early viral spread of these activists’ one-sided take/misinformation. Otherwise what happens is journalists like Kara Swisher source entire stories from these activists, plaster them across their platforms, and Google then faces another manufactured outrage PR issue.
Edit: another comment here also claims that Google’s statement was made because Axios reached out to them regarding this story after Timnit Gebru’s tweet. So there you have it.
Yes, Google’s ethical responsibility in a current employer/employee relationship with Mitchell is different than Gebru’s ethical obligation to her former employer with whom she is already in a public, contentious battle.
And even if the ethical obligations were identical, Gebru’s violation toward Google wouldn’t excuse Google’s toward Mitchell.
If we accept Google's own claims, they have an automated indication which leads to suspicion of that, and an ongoing investigation; not even something where they are prepared to claim an actual violation. I.e., exactly the circumstances where every half-competent organization would decline comment (potentially citing “personnel matters”) until they'd actually completed an investigation.
So you say you suspect they have an ongoing investigation, and that would be "circumstances where every half competent organization would decline comment"
What substantiates this conclusion? They can have an investigation ongoing, and share the cause for said investigation. In the statement they explicitly establish that this doesn't imply guilt of the account owner.
There isn’t enough to tell based on the information provided. You are just viewing partial data with your biases. Haven’t we had enough of snap decisions based on personal biases?
okay granted. I think it's likely whistleblowing given what we have heard about the treatment of Gebru so far. However what it is definitely not is 'damning', and at this point I'm out of goodwill for Google to be honest. Less of a bias and more a healthier attitude towards big business.
Not every leak is whistleblowing. Whether this one was depends on the content of the files; that verdict still needs to be made.
Sure, and, if false, it's also damning — in both cases, of Google management.
Either:
(1) They have received an early indication of what may be either unauthorized or legally protected activity, and are publicly naming and blaming a specific employee before completing an investigation, which is merely grossly unprofessional and unethical though probably not actually illegal, or
(2) They are lying and libeling a current employee.
The (1) case is a bit inaccurate/misleading. From what I can gather from the article:
- Gebru tweeted the name of the employee [1]
- Axios then reached out to Google, who then made the following statement:
> Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered. In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today.
Sure; more to the point, Gebru tweeted that Mitchell’s corporate email appeared to be nonfunctional.
> Axios then reached out to Google
That seems likely to be the sequence of events, sure.
Usually and ethically, a company that was in exactly the circumstances Google described would have:
(1) Declined comment, or
(2) Confirmed the email was nonfunctional and declined further comment, or
(3) Explicitly declined comment on personnel matters (especially if the framing of the question from Axios raised the issue of it being a disciplinary action of some kind; raising a personnel issue when it wasn’t part of the framing of the question would itself be somewhat unusual.)
> Context matters.
As an abstract truism, sure; while the narrative you describe is exactly what seems like the most likely scenario to me, I didn’t describe it because that fact was already considered in the description of scenario #1. Google’s behavior is (even assuming that they are being completely honest) grossly unethical in the context described.
The only way I can read your argument is if I were in the spirit of "the big guy is always wrong".
If you're accused of a SECOND action taken against an ethicist, and AGAIN it's not because of anything you did that was bad, then yes your hand is actually being forced to say more than "no comment".
It's absolutely incredible how much Google has not spoken out to defend itself against the lies upon lies upon inconsistencies that Timnit has thrown out. And now another case pops up?
> The only way I can read your argument is if I were in the spirit of "the big guy is always wrong".
How about “the guy burning the person they are currently in a business relationship with without getting all the facts is wrong”. Or “a wrong by party A against party B does not justify a wrong by B against C.”
> It's absolutely incredible how much Google has not spoken out to defend itself against the lies upon lies upon inconsistencies that Timnit has thrown out. And now another case pops up?
Er, nothing in Google’s story, even taken as gospel truth, indicates that either Gebru’s fact claims in this case were lies or that her speculations were unreasonable or inconsistent given the observable facts. So, your characterization seems...misplaced, at best, even if your characterization of her past actions was accurate. (And in the cases where Google has presented contrary stories to Gebru on other points, Google’s own stories have been outright self-contradictory whereas Gebru’s were at least internally consistent, so I can either trust Gebru or trust neither.)
> How about “the guy burning the person they are currently in a business relationship with without getting all the facts is wrong”. Or “a wrong by party A against party B does not justify a wrong by B against C.”
I don't even know what you think are the actions people have been taking, to make you interpret things in a way to make you say that.
> Er, nothing in Google’s story, even taken as gospel truth, indicates that either Gebru’s fact claims in this case were lies or that her speculations were unreasonable or inconsistent given the observable facts.
It's very damning that she chose to call out Jeff Dean as being the person who fired her.
Her manager was not a man. Her manager's manager (who actually delivered the news) is not a man. The CEO is not white. She chose probably the ONLY person in her entire reporting chain who happens to be a white man (and in engineering circles famous), and she points to him and says "He! He did this!".
And that's just a start.
Face it, you don't even have to read Google's official account, much less believe it, without seeing that her story absolutely does not add up.
Even the headline does not match the content in any article about her.
The researcher who tweeted out the name of an employee whose email has been blocked, and threw out theories about crackdowns and firings - that's all good.
Explaining that the email account has been blocked due to mass-leaking documents - that's inexcusable?
Sure. When facts contradict your opinion, you shouldn't hate the facts.
The researcher whose departure sparked a lot of controversy tweeted the name of the employee in question, with some opinion attached. Axios reached out to Google, who then made the statement saying security systems detected mass-forwarding of internal documents. They didn't name the employee, but prefaced the explanation with "in this instance".
> Why do you think Google reached out to the press?
Why do you think I think that? I never said anything about who reached out to the press.
> I’d bet the employee leaking documents did
I bet they didn’t, because none of the articles include anything that the employee would have given them, only that they could not immediately be reached for comment. If that employee was the first source of information, then there would actually be some information from their perspective.
> and Google just responded to a request for comment.
The normal response (for a variety of good reasons, including legal and ethical ones) from a company on a matter like this when asked by the press when they haven’t completed an internal investigation would be to decline comment on personnel matters. Google’s behavior is grossly unethical here irrespective of who reached out to the media first.
> "publicly naming and blaming and specific employee"
They're not doing this if the press reaches out to them for comment with the specific details first. Given the bad faith actions of Gebru already (and apparently she's at least partially the source here from the other reply to your comment) it makes sense for Google to clarify with the response they gave (and in that response Google also did not mention the name of the employee).
This issue is way too heated for real productive discussion on internet forums. It's obviously tribal, flamewar bait, with a massive undercurrent of partisan politics (and perceived partisan politics on the side of the person you're disagreeing with).
I found what Gebru did earlier to be wrong, my prior is that the behavior here by the employee is also likely to be wrong. I've been generally disappointed in the level of discussion from the political fringes (both 'woke' and pretty much the entire GOP at this point). I find a lot of overlap between this Google AI ethicist community and critical race theory woke politics.
I suspect in the end when all the details are out Google will be in the right.
Without waiting there's no point for all the arguing that's going to go back and forth in these comments.
> They’re not doing this if the press reaches out to them for comment with the specific details first.
The press didn’t have the specific details until given them by Google, which is the “naming and blaming”.
The press had a report from another party of an apparently nonfunctional corporate email address. Describing the existence and basis of an internal investigation that had not been completed from that is grossly unprofessional; I’d be mildly surprised if it was done by a small outfit where media inquiries were regularly fielded by someone with no corporate PR, HR, or legal training, advice, or guidance, but for anything at Google’s scale it is unimaginable as anything but a conscious, deliberate breach of norms with the intent of harming the subject employee in public without developing a full picture of the facts, and that’s assuming Google’s statement is completely truthful as far as it goes.
> The normal response (for a variety of good reasons, including legal and ethical ones) from a company on a matter like this when asked by the press when they haven’t completed an internal investigation would be to decline comment on personnel matters. Google’s behavior is grossly unethical here irrespective of who reached out to the media first.
To give a counter-argument to this: if party A makes a claim or accusation, and party B just says "no comment for now", it'll be near-impossible to reduce the (potentially unjustified) fallout of the premature verdict. Public opinion matters, and one-sided stories should not be desired by anyone interested in objective discourse.
I was disappointed to find out that AI ethicists at Google aren't LessWrong-style AI ethicists. I've only read one Google AI ethics paper in its entirety, but it seemed pretty non-insightful and not actionable.
That said, if Google is "investigating" her because she was trying to find evidence of discrimination or bad treatment of Dr. Gebru, that seems borderline criminal behavior. (Assuming her attempts at trying to find evidence did not involve exfiltrating secret information).
> I was disappointed to find out that AI ethicists at Google aren't LessWrong-style AI ethicists. I've only read one Google AI ethics paper in its entirety, but it seemed pretty non-insightful and not actionable.
This seems like a contradictory criticism to make. The Less Wrong folk, in my perception, mostly dabble in thought experiments (such as Roko’s basilisk) that aren’t terribly relevant to real world AI work. Timnit’s work on identifying racial and gendered inaccuracies in facial recognition [1] seems much more actionable.
Gebru's (and co-author's) work on racial error in facial recognition was done while they were academics at Stanford and MIT respectively and not while a Google AI ethicist to my knowledge. Roko's Basilisk is a banned comment from Less Wrong (though interesting) and doesn't really represent the totality of that site or the associated ideas.
I’m not sure which Google AI paper you’re referencing. The recent paper at the center of the controversy did have actionable suggestions regarding model training costs (financial and environmental), appropriate training sets, and generally incorporating domain expertise to avoid misinformation or mistranslations. [1]
That’s fair, I haven’t visited LW in a while. I did thoroughly enjoy HPMOR, but thought that most of Eliezer’s other work fell closer to speculation than practice.
As other commenters have mentioned, these AI algorithms are already being implemented in the real world with real consequences. Their associated ethical concerns therefore seem more urgent, and they can be acted upon.
Apparently not. Energy usage has been discussed at length on HN and Reddit, and is actively being researched in thousands of papers on efficient, scalable, edge-capable AI.
> “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,”
What? She should read previous work on the topic first - quantization, sparsification, distillation, fine-tuning pre-trained models, etc. I couldn't find her actionable ideas because she doesn't have any original ones. It's a hard topic which requires a complete rethinking of both algorithms and hardware. And she doesn't recognize that Google is investing in both - TPUs on the hardware side, and improved algorithms such as the Reformer, a variant of the Transformer that reduces attention complexity from O(n^2) to O(n log n).
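To be concrete about how established that prior work is, post-training quantization is a few lines in any mainstream framework. A minimal sketch using PyTorch's quantize_dynamic API (the toy model here is mine, purely illustrative):

    # Post-training dynamic quantization: Linear weights become int8,
    # shrinking the model roughly 4x and speeding up CPU inference,
    # with no retraining required.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(quantized(x).shape)  # same interface as the original model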
Then regarding dataset bias - I couldn't find their actionable ideas. I mean, except telling people to be careful about how they select training data, but nothing about how to replace the model they criticize with an unbiased model that works just as well in the general case.
It's easy to point out problems while not providing any solutions and making yourself the critic of those who are doing the hard work. See how she treated Yann LeCun on Twitter - she basically told him to go educate himself on her papers and fuck off because she doesn't have time for debating him, that after denouncing his tweets.
Yeah - the basilisk thing is mostly a joke as far as I can tell and has little to do with the AGI goal alignment friendly AI work or the control problem.
MIRI is mostly focused on the AGI control problem and thinks that AGI is closer than others believe. If true, all other problems are pretty irrelevant.
There is real work in ML bias and fairness to be done too, but it seems overrun by toxic personalities and partisan politics.
I think sending it to external accounts probably triggered this, there is no way this would not have raised a red flag in any company. I mean, Alphabet is the same company where a person took self driving documents with him to start a new company.
Hopefully, if it was just normal conversations and after identities had been redacted, their access is restored. Still, analysing this information on their work machine itself and then sending out a summary seems like a better course of action than this.
Somewhat related - Levandowski (the crook that stole the self driving docs) was just pardoned by Trump. The pardon was supported by Thiel, Luckey, and other Founders Fund people. I find this pretty irritating, but I guess when you’re in the company of actual war criminals like Eddie Gallagher then the bar is pretty low.[0]
——
Separately, from the drama I’ve seen around the recent AI ethics stuff, I’d bet money Google is on the right side of this and the activists are not.
True, but it's only Google/Alphabet saying she sent data to external accounts and I don't think they're especially trustworthy. If she sent secret info to outsiders she deserves to be fired, but if Google is investigating her because they think she's looking for evidence of discrimination then Google deserves whatever civil and legal penalties apply.
Also, if you're into the LessWrong mindset and inclined to think that that which can be destroyed by the truth should be, you might find this tidbit interesting:
Another perspective on this drama, to me, is what the official expectation of a team labelled "Ethical AI" is/was. I doubt the only focus was to prevent discrimination by sex or race.
For example, there's also prevention of mass manipulation (AI learns human psychology, $BAD_ACTOR uses that to manipulate the masses) and AI-based profile generation of individuals with the goal of creating leverage for blackmail/extortion/whatever. Or, how one would form a controlling body for unethical usage of AI in such areas. There are quite a few aspects never mentioned in these discussions, and a search via Google Scholar turns up lots of the sex/gender/race papers, but none of the others (at least on the first few pages).
Those things are bad, and are worth thinking about, but right now we have more immediate concerns. AI systems are currently being used for bail eligibility, likelihood of recidivism, mortgage approval, and hiring. These are things that literally change people's lives on an individual level, and the systems being used are currently indecipherable black boxes that return a result with no possibility of meaningful appeal.
There's a sense of starting small - redlining is illegal. No company wants to redline - it cuts into their profits and dings their compliance scores. So they're willing to work with us. Once we get that right, then, maybe we can start dealing with the cases where there aren't millions of dollars of funding assisting with making the tech more ethical -- and then, cases where it may actually be working against the money.
European law foresees those exact cases and IIRC prohibits automated treatment of personal information that leads to decisions that affects someone’s life, if no human takes the final decision and if the specific elements for the decision cannot be explained. In my opinion this forbids black boxes.
Take SCHUFA: they have a proprietary algorithm to calculate a credit score. You almost cannot find an apartment without presenting your SCHUFA review, and there is no way to know how the score is calculated (it has even been shown in the past that they use incorrect or outdated data for their calculations).
I think you’re referring to GDPR article 22, which prohibits decisions based solely on automated processing but imposes no constraints on the transparency of such processing. [1]
> AI systems are currently being used for bail eligibility, likelihood of recidivism, mortgage approval, and hiring.
Is there an analysis comparing the automated to manual approaches? I mean, people are biased, too. The models are more systematically biased, but can also be more systematically evaluated and de-biased. Which is better, the old or the imperfect new?
Yes; automated approaches amplify bias[0]. Moreover, automated approaches launder bias; in the general population, people view computers and algorithms as "unbiased", so imagine biased results to be more accurate than humans, because a computer did them.
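Not from the cited paper, but a toy illustration of the amplification mechanism: if a system is repeatedly retrained on its own sharpened outputs, a modest skew in the data drifts toward certainty.

    # Toy feedback loop. A "model" that slightly sharpens the majority
    # class is retrained on its own outputs; a 60/40 skew becomes ~100/0.
    p = 0.60  # initial fraction of "doctor" results shown as male
    for step in range(6):
        p = p**2 / (p**2 + (1 - p)**2)  # sharpened retraining step
        print(f"step {step}: {p:.3f}")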
Maybe the ethics research department should be paid by the state, and empowered with full access to the corporate databases. Google and co shouldn't be left to their own devices.
> In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts.
I mean, come on. There’s no way someone in a leadership position would legitimately think that’s okay. I suppose to some degree we should reserve judgment until hearing her side of the story - maybe this was just unambiguous evidence destined for her lawyer and the EEOC - but it’s hard to avoid some pretty negative conclusions about the prevailing culture in this group.
Or instead of answering my question people could just downvote me. I guess.
Legitimately curious here. "Download all my mail from the mail server" is something I do multiple times every day. I guess my workflow would be verboten in corporate environments? Is it really such an odd behavior that they have an automated check for this, and it isn't throwing false positives every hour? Or is this one of those things they tell you at the "onboarding" like "hey, you might download all your personal gmail via IMAP regularly, but don't do that with your corporate email account or you'll get a phone call from security".
Maybe I'm just out of touch with how corporate email works. It's been a decade since I was an employee of a publicly traded company.
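For concreteness, this is the sort of routine bulk download I mean; a minimal sketch using Python's standard imaplib, with placeholder host and credentials (many corporate setups block exactly this):

    # Download every message in the inbox over IMAP, stdlib only.
    import imaplib

    conn = imaplib.IMAP4_SSL("imap.example.com")    # placeholder host
    conn.login("user@example.com", "app-password")  # placeholder creds
    conn.select("INBOX", readonly=True)

    _, data = conn.search(None, "ALL")
    for num in data[0].split():
        _, msg = conn.fetch(num, "(RFC822)")
        raw = msg[0][1]  # raw RFC822 bytes; write to disk as needed
    conn.logout()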
According to a source, Mitchell had been using automated scripts to look through her messages to find examples showing discriminatory treatment of Gebru before her account was locked.
Google's own statement, quoted by Axios, says:
Yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts.
You're taking those as if they say the researcher was just using her mail client the way everybody does, but that's not what they say at all. The first one ("using automated scripts to look through her messages") says nothing about downloading at all; the second, official one, says that "thousands of files" were "shared with multiple external accounts." The allegation is clearly that the researcher wrote scripts to select (an apparently large number of) specific messages and forwarded those messages to people outside the company.
Nobody is saying "OMG SHE USED IMAP!". They're saying "She sent internal corporate emails to people outside the corporation."
Typically this is disallowed from non company devices. If you're allowed to sync your corporate email to your personal devices, that's probably an oversight.
I worked at Google and we used gmail, not IMAP, to view email. Corporate laptops allowed logging in to your corporate gmail, but I never saw anyone use IMAP and I have a feeling it wouldn’t be allowed. Once I left Google I had no access to my old corporate emails and I’m sure that’s the way they want it. I did thought experiments about data exfiltration and it would certainly be difficult. They control the OS on your system, they control the network, and they authenticate everything so no anonymous access is allowed. That’s why people get caught - they can see basically everything you’re doing on corporate machines or corporate networks and they know who you are.