
I was disappointed to find out that AI ethicists at Google aren't LessWrong-style AI ethicists. I've only read one Google AI ethics paper in its entirety, but it seemed pretty uninsightful and not actionable.

That said, if Google is "investigating" her because she was trying to find evidence of discrimination or bad treatment of Dr. Gebru, that seems like borderline criminal behavior (assuming her attempts to find evidence did not involve exfiltrating secret information).



> I was disappointed to find out that AI ethicists at Google aren't LessWrong-style AI ethicists. I've only read one Google AI ethics paper in its entirety, but it seemed pretty uninsightful and not actionable.

This seems like a contradictory criticism to make. The Less Wrong folk, in my perception, mostly dabble in thought experiments (such as Roko's basilisk) that aren't terribly relevant to real-world AI work. Timnit's work on identifying racial and gendered inaccuracies in facial recognition [1] seems much more actionable.

[1] https://www.technologyreview.com/2020/06/12/1003482/amazon-s...


Gebru's (and her co-author's) work on racial error in facial recognition was done while they were academics at Stanford and MIT respectively, not while she was an AI ethicist at Google, to my knowledge. Roko's Basilisk comes from a comment that was banned from Less Wrong (interesting though it is) and doesn't really represent the totality of that site or the ideas associated with it.

https://news.mit.edu/2018/study-finds-gender-skin-type-bias-...


I’m not sure which Google AI paper you’re referencing. The recent paper at the center of the controversy did have actionable suggestions regarding model training costs (financial and environmental), appropriate training sets, and generally incorporating domain expertise to avoid misinformation or mistranslations. [1]

That’s fair; I haven’t visited LW in a while. I did thoroughly enjoy HPMOR, but thought that most of Eliezer’s other work fell closer to speculation than practice.

As other commenters have mentioned, these AI algorithms are already being implemented in the real world with real consequences. Their associated ethical concerns therefore seem more urgent, and they can be acted upon.

[1] https://www.technologyreview.com/2020/12/04/1013294/google-a...


Apparently not. Take energy usage, which has been discussed at length on HN and Reddit, and is actively being researched in thousands of papers on efficient, scalable, edge-capable AI:

> “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,”

What? She should read the previous work on the topic first: quantization, sparsification, distillation, fine-tuning pre-trained models, etc. I couldn't find her actionable ideas because she doesn't have any original ones. It's a hard topic that requires a complete rethinking of both algorithms and hardware. And she doesn't acknowledge that Google is investing in both: TPUs on the hardware side, and improved algorithms such as the Reformer, a variant of the Transformer that reduces attention complexity from O(n^2) to O(n log n).
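To make one of those techniques concrete, here's a minimal sketch of post-training dynamic quantization in PyTorch. The toy model is a placeholder of my own, not anything from the papers in question:

    # Minimal sketch of post-training dynamic quantization in PyTorch.
    # The toy model below is a stand-in, not a model from any paper.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, 10),
    )

    # Convert Linear layers to int8 weights; activations are quantized
    # on the fly at inference time. This cuts model size and energy per
    # inference, usually at a small accuracy cost.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(quantized(x).shape)  # torch.Size([1, 10])

The point is that this is a one-liner on an already-trained model; efficiency work like this has been standard practice for years.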

Then, regarding dataset bias: I couldn't find any actionable ideas there either. I mean, apart from telling people to be careful about how they select training data, there's nothing about how to replace the model they criticize with an unbiased one that works just as well in the general case.

It's easy to point out problems without providing any solutions, making yourself the critic of those who are doing the hard work. See how she treated Yann LeCun on Twitter: she basically told him to go educate himself on her papers and fuck off because she didn't have time to debate him, and that was after denouncing his tweets.


Yeah, the basilisk thing is mostly a joke as far as I can tell, and it has little to do with the AGI goal-alignment / friendly-AI work or the control problem.

MIRI is mostly focused on the AGI control problem and thinks that AGI is closer than most people believe. If that's true, all other problems are pretty much irrelevant.

There is real work in ML bias and fairness to be done too, but it seems overrun by toxic personalities and partisan politics.


I think sending it to external accounts is probably what triggered this; there is no way that wouldn't have raised a red flag at any company. I mean, Alphabet is the same company where a person took self-driving documents with him to start a new company.

Hopefully, if it was just normal conversations with identities redacted, their access will be restored. Still, analysing this information on their work machine and sending out only a summary would have been a better course of action than this.


Somewhat related: Levandowski (the crook who stole the self-driving docs) was just pardoned by Trump. The pardon was supported by Thiel, Luckey, and other Founders Fund people. I find this pretty irritating, but I guess when you're in the company of actual war criminals like Eddie Gallagher, the bar is pretty low. [0]

——

Separately, from the drama I’ve seen around the recent AI ethics stuff, I’d bet money Google is on the right side of this and the activists are not.

[0]: https://news.ycombinator.com/item?id=25843040


> Levandowski (the crook who stole the self-driving docs)

Why is your username 'foss user' when you're coming out in favor of intellectual property rights by calling this person a crook?


True, but it's only Google/Alphabet saying she sent data to external accounts, and I don't think they're especially trustworthy. If she sent secret info to outsiders, she deserves to be fired; but if Google is investigating her because they think she's looking for evidence of discrimination, then Google deserves whatever civil and legal penalties apply.


https://www.technologyreview.com/2020/12/04/1013294/google-a... is a summary of the paper that started it, full of actionable points – and co-authored with Emily Bender, no less!

Also, if you're into the LessWrong mindset and inclined to think that that which can be destroyed by the truth should be, you might find this tidbit interesting:

> Buried in the recent trillion parameter language model paper is how the dataset to train it was created. Any page that contained one of these words was excluded: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and... Two sample banned words: "twink" and "sex" https://twitter.com/willie_agnew/status/1350551463718621184
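For anyone wondering what that exclusion looks like in practice, here's a minimal sketch of that kind of blocklist filter. The matching rule (case-insensitive whole-word match) is my assumption; the paper's actual pipeline may differ, and the two sample entries stand in for the full LDNOOBW list linked above:

    # Minimal sketch of a blocklist-based page filter, as described in
    # the quote above. The matching rule is an assumption; the real
    # pipeline would load the full LDNOOBW word list.
    import re

    BANNED_WORDS = {"twink", "sex"}

    def keep_page(text: str) -> bool:
        """Return False if the page contains any banned word."""
        words = set(re.findall(r"[a-z]+", text.lower()))
        return not (words & BANNED_WORDS)

    pages = [
        "A medical page on sex differences in drug response.",
        "A page about gardening.",
    ]
    print([keep_page(p) for p in pages])  # [False, True]

Note that the first page is dropped even though it's plainly medical content, which is presumably the point of the criticism: word-level filters throw out legitimate pages wholesale.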


I read your links, and I have to say I don't understand how the list of excluded words relates to the rest, or what the issue is here.


Could you elaborate on this a little more? I'm not sure I'm connecting all the dots.



