Hacker News

Not sure I would call this "help". These workers were subjected to the most vile and graphic depictions of sexual abuse content imaginable for next to nothing.

This whole thing makes OpenAI seem evil if you ask me. Just another company exploiting people who are already being exploited. It's depressing.



It's kind of insulting to tell someone that you know better than they do what's good for them.

A Kenyan has told you that this job pays substantially better than average and that they wish more companies would make such jobs available, and your response is "no, you're wrong, this job pays too little". On what basis do you make that claim? What makes you better qualified than OP to comment on conditions in Kenya?


Just because a group of people is accustomed to poor working conditions doesn't make it any more right or ethical.

You could apply your logic to sweatshop scenarios where the people in those countries are just happy to have work, even if the work pays unfair wages, requires unreasonable hours, uses child labor, and provides no benefits to the workers. But hey... the disenfranchised are just happy to have a paycheck right?


It’s a necessary stepping stone on the path to better working conditions and wages. I think people forget what the early days of the Industrial Revolution looked like in our countries.

Can you get there without that? Likely not.

What you’re suggesting is to actually keep them poor for their own good. It’s a nonsensical and counterproductive argument that you’re making.


Not exactly. I'm suggesting that work like this be paid at a fair rate and that mental health precautions be considered and taken seriously.

You can chop logic on this all day long if you like, but the point is, this work is terrible and damaging, and that's why we farm it out to countries like Kenya where the people there don't have a choice.


People in Kenya do have a choice, and they pick the best choice for themselves and for their families.

The set of choices for people earning $2/day is different than $20 or $200, but not smaller.

Your aggression is better targeted towards manufacturing jobs in parts of Asia, or Walmart workers in the rural US, than to this context.


The work is better than the options people have there, otherwise they wouldn’t do it. Don’t try to spin it as a negative thing for them. They don’t see it that way.


Kenyans are able to win this business because they can do this job at a competitive price. If Kenyans required more, they would not get this (relatively good compared to their alternatives) job; it would go someplace else and Kenyans would lose out.


What would constitute a fair wage? The same as what we would have been paid in the US? Maybe India?

If we insist upon that, then why hire in Kenya, which is an unstable and unpredictable environment that has a lot of inherent risks for the company?

Or are you arguing these jobs shouldn't exist at all?


I'm not a labor wage specialist so your guess is as good as mine. Do you think $1.32/hr is fair? Are you of the impression, after reading the article, that the workers' wellbeing was taken seriously and the pay was set at a fair amount considering the kind of work they were doing? I wasn't.


If you're not a wage specialist and my guess is as good as yours, why were you so quick to dismiss someone who actually lives in the country you're opining on?


It's clear you're one who enjoys a circular argument so I'll just leave it at this for you. I don't need to know the exact right amount of money these people should be paid to know that $1.32 per hour for looking at child porn and violent imagery is too little, especially without the requisite mental health resources available. If you are so sure this is a fair situation, maybe for your next job you'll accept minimum wage pay doing similar work.


It isn't pictures, it is text only. I think there is a huge difference. I had to police content for Twicsy (a search engine with 10 billion Twitter images indexed) and I have seen some very bad stuff. There is a huge difference.


> It isn't pictures, it is text only

For ChatGPT related, yes/maybe. But for other content filtering work, it's primarily video & pictures: https://www.wired.com/story/social-enterprise-technology-afr...


This is just false. If you read the article, Sama also collected explicit, illegal images on behalf of OpenAI -- this was the reason the contract was cancelled.


If for the local market $1.32-$2/hour take-home is good compared to alternatives (of which I have no idea, but local claims seem to support that, listing comparable rates but for gross not take-home), then yes, it's fair, and it would harm the workers if we'd prohibit that, because they would have to take a worse local job.


> Just because a group of people is accustomed to poor working conditions doesn't make it any more right or ethical

I think this kind of moral puritanism is an enemy of real social progress. Maybe we shouldn't or can't expect some supreme, pure state of "ethical", maybe all we can or should expect is "better".


Fair point. But this is also the exact problem with allowing a small number of individuals to accrue massive wealth by arbitraging labor costs like this. It doesn't matter how much philanthropy Bill Gates engages in, he doesn't understand the needs of the poor better than they do.

Instead of having this elaborate, inefficient system of funnelling money to first world billionaires and then having them (maybe) send some of it back to the developing world, wouldn't it be better if these workers were just paid better in the first place?


In theory, yes, but at some wage level it wouldn't make sense to choose Kenya over India. At another threshold it wouldn't make sense to choose India over the US.

There is overhead to sending jobs to poor, unstable countries in timezones far from headquarters. If we insist that everyone worldwide should be paid the same wages we are in the US, what incentive do companies have for not just hiring locally and avoiding those costs? If we step it back and say "okay, you should at least pay what you would in India", then why not just hire in India, which is a far more predictable environment than Kenya?


The job pays too little based on the fact that it can leave the "employees" traumatized and scarred for life.

But it does not take away from the fact that it can negatively affect the employees in the long term, and takes advantage of poor people to give them an objectively bad deal based on an information asymmetry: the person in question might not know that they will have to read graphic descriptions of bestiality and pedophilia.


This is an argument that the job shouldn't exist at all. That's an okay argument to make (I personally lean that way), but has no relevance to whether $2/hour would be worth it to a Kenyan.


You don't see any logic in the difficulty of a job influencing the wage of that job?


So you think that it's ok to traumatize and scar workers for life, as long as you pay them enough?


I think people can make that choice as long as it's informed. The company should also be providing mental health support as part of the job. Someone has to do it. There aren't automated systems that can do a remotely decent job at moderation yet. I know you don't think it's okay for trained systems or social networks to distribute traumatizing material.


> traumatized and scarred for life.

Scarred for life by reading/labeling text? There is obviously a big difference between labelling pictures or video and text. I would be open to seeing an actual study, but my prior is that text is mostly harmless. I would certainly be open to reading any text, especially if the context is that the text is for training purposes.


Sounds like you haven't read the article.


Correct. But then the headline is confusing.


Other content filtering work is primarily videos & pictures. Check out this article: https://www.wired.com/story/social-enterprise-technology-afr...

Sama/source in trouble yet again!


Some of it included images; they also did other work for the hiring company.


what's insulting is that you assume one kenyan speaks for all of them. even worse, that the random kenyan not being paid $2/hr by openai is qualified to speak on their behalf. on what basis do you accept the first statement? if another kenyan claims the opposite, would you then become some vacillating organism between two (or more) positions?


I only assume that one Kenyan is more representative than one westerner. I'd happily invite other Kenyans to comment their own perspective!


Do you think that person speaks for all Kenyans? Clearly $2 an hour is too little but people are exploitable when they need money.


You're confusing being poor with being financially stressed. The two are not the same.


I already replied to this question: https://news.ycombinator.com/item?id=34427780

> I only assume that one Kenyan is more representative than one westerner. I'd happily invite other Kenyans to comment their own perspective!


You also have to realise a lot of people here are not sheltered. We've experienced post-election violence first hand in 2008, tribes killing other tribes, and we also have a culture of burning petty thieves alive. We experience violence first hand. I don't think violent text is going to affect anyone the way you think it is.


A lot of people aren't sheltered, but it doesn't mean they should be required to do traumatic work for little pay. Also, it was more than looking at violent text. Just because the people there already face hardships that may be greater than looking at graphic text and imagery doesn't mean the world should just pile on because their daily lives are already bad. That just makes a bad situation even worse.


They are not required to do traumatic work.


How is reading text traumatic work?


Sounds like you haven't read the article either. It was more than just reading text but the text was traumatic too:

"One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned."


Did he have to read the entire thing? Just skim it and mark it as obscene. I don't understand what the big deal is.


why not just | grep "sex"
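The grep one-liner above is a joke, but it's worth spelling out why bare keyword matching can't replace human labeling. A toy Python sketch (the keyword list and example strings are invented here for illustration):

```python
# Toy sketch: a grep-style substring filter, and the two ways it fails.
KEYWORDS = {"sex"}

def naive_flag(text: str) -> bool:
    """Flag text if any keyword appears as a substring."""
    lower = text.lower()
    return any(kw in lower for kw in KEYWORDS)

# False positive: benign text that happens to contain the keyword.
print(naive_flag("The census recorded sex and age for each resident."))  # True

# False negative: harmful text that avoids the keyword entirely.
print(naive_flag("Graphic violence with no trigger word at all."))  # False
```

Context, euphemism, and intent are exactly what the keyword match misses, which is why the labeling work needs human judgment in the first place.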


> We experience violence first hand. I don't think violent text is going to affect anyone the way you think it is.

I'll be glad when this kind of "suck it up buttercup" bullshit is gone from our world.

Yeah, violence hardens people. Most often to the point where they're one light tap away from shattering. PTSD is a real thing, and just because folks in Africa aren't being diagnosed with it doesn't mean witnesses to this violence don't have it.

This kind of attitude - that since we've seen some shit we're immune to it all - is just a badly misplaced sense of pride.

And that misplaced pride just hurts people.


I'm not taking any pride in that statement. I'm just trying to highlight how because of the way our culture is, these sorts of texts are relatively tame. I'd actually like for us to get to a place where violence is not ingrained in our society.


> for next to nothing.

The point of the OP comment is that it's not "next to nothing". The cost of living differs from place to place and a small amount of USD can be worth a _lot_ in many developing nations.


> These workers were subjected to the most vile and graphic depictions of sexual abuse content imaginable for next to nothing.

Let’s imagine an alternative timeline where OpenAI pays $200/hr for this service.

At that price point is it still exploitive?

Is it ever appropriate or ethical to ask humans to voluntarily subject themselves to traumatic experiences in exchange for compensation?

I’m still processing my own thoughts on this topic so I’m curious to learn about other viewpoints.


A couple of years ago I seriously considered joining the federal police department in charge of dealing with internet crime here in Germany, so I've thought quite a bit about this topic.

Basically, it is a job that needs to be done in society, but one that is torturous and can leave you with long-term or permanent problems. In essence, though, it is not fundamentally different from the way a coal miner jeopardizes their physical health, just with mental health instead. This risk and possible damage should be rewarded with a higher wage, and adequate measures should be put in place to minimize it, e.g. in the case of content moderation with access to therapy and only exposing employees to short intervals of traumatic content.

This is of course how things should be; in reality, coal miners' working environments only reached a decent level through unions and a long fight for better rights. Content moderators are not paid well anywhere either.


As a war veteran I can confidently say most veterans would probably not be traumatized by work like this. We've all seen much worse, and the things that became problematic memories for me had little to do with reading/hearing/seeing the worst humanity has to offer in terms of violence and fucked up shit people do to each other. The stuff that really sticks and eats at you is usually stuff that happened to or close to the individual, or something that happened as the result of actions taken by the individual or their immediate group.

Probably the most important measure a company could take to prevent lasting harm with this kind of work would be to spread it around a whole lot more than just 36 people. The real risk of long-term impact here would probably be with persistent exposure to it all day long. Most people can handle reading or seeing some graphic stuff with proper mental preparation for it, but to see nothing but that day in and day out would quickly wear you down.

>Is it ever appropriate or ethical to ask humans to voluntarily subject themselves to traumatic experiences in exchange for compensation?

Probably not. Yet people still voluntarily sign up for military service around the world by the millions, and they do so for a bunch of personal, family, idealistic, cultural, and societal reasons that are hard to reduce to a few easy to argue points like a lot of people online try to do with stuff like this.

Personally, I think it's admirable to hold the ideal that we should like to never offer jobs like this; we should also like to never offer jobs that involve going to war, cleaning up hazardous materials, dealing with explosives, working around heavy, fast-moving machines, cleaning sewers, or a myriad of other terrible experiences; but we're probably not in a position to make those better choices just yet. Until then, people are willing to do these things for a dozen different reasons per person, only two of which are the pay and the support they get from the employer.


> At that price point is it still exploitive?

My belief: When you have a legitimate choice of your place of employment, and all of the opportunities will let you live your definition of a comfortable life, that's when it's no longer exploitative.

So many times - especially for poor folks - there is no meaningful choice. "I do this or I don't have a place to live." "I hold two jobs so I can feed my child."


> My belief: When you have a legitimate choice of your place of employment, and all of the opportunities will let you live your definition of a comfortable life, that's when it's no longer exploitative.

By that argument, paying $200 an hour would be much more exploitative. This would be like landing a job paying $6 million a year in the US; it would be insane to quit such an opportunity, especially since every other opportunity is basically poverty in comparison. Following this logic, it's 'graceful' to only pay $2/hr, since that makes the job equal to the other opportunities and therefore not exploitative (while still paying reasonably well).

Effectively, it seems like you're calling OpenAI exploitative based on factors they can't change.


Huh?

> paying 200$ an hour would be much more exploitative

I didn't say that - I only said that it depends on choice. Does the employee have a choice if they have a $60k (average individual income in the US) option and a $6M option? Yes. Are they de facto forced into taking the $6M one? No.

I know many people who didn't take higher paying jobs, or left such jobs, because they knew the high paying job was going to be miserable.

There was even an article about one such individual just the other day here on HN: Quitting the Rat Race

"I’m currently working at a top tier investment bank as a software engineer. I’m an insignificant cog in a machine that skims the cream from the milk. I’m earning the most money I’ve ever made and yet I’m the least fulfilled I’ve ever been."


Maybe I should have steelmanned your position a bit more. That being said, the grand³parent said:

> This whole thing makes OpenAI seem evil if you ask me. Just another company exploiting people who are already being exploited.

In that context, there is no way for OpenAI not to be evil, since they are (by definition) only one option in the market. In fact, taking your argument to the extreme, there is no way to offer jobs in Kenya as the first company to offer jobs would either be exploitative by paying minimum wage or exploitative by being the only real option. Going from that, paying a higher wage just worsens the situation, as it makes the alternatives even less feasible.

That being said, I do get where you are coming from. But it is not a good point to accuse OpenAI on, as they are making the situation better by offering options at a rate that seems reasonable by Kenyan standards, and they really don't have any other option[0].

[0] Except maybe paying Americans a lot of money for the job, but I find it morally hard to argue that they should pay US citizens a lot of money instead of paying Kenyans (comparatively) good money, even leaving aside economic feasibility.


If a huge portion of Kenya can survive off of less than $2/hr (either from local companies or OpenAI) how do they not have a legitimate choice?


I suspect the most vile and graphic content was confidently classified by the AI.

The difficult part, where you'd get the most out of human labeling of data, is the grey area where the models diverge or are uncertain.
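The routing described above can be sketched as a simple confidence threshold: the model keeps what it is sure about, and only the grey area goes to human labelers. The classifier, threshold, and labels below are hypothetical stand-ins, not anything from the article:

```python
# Sketch of confidence-based routing: auto-label high-confidence items,
# queue low-confidence ("grey area") items for human review.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tuned per task in practice

def route(items, classify: Callable[[str], tuple[str, float]]):
    """Split items into auto-labeled pairs and items needing human review."""
    auto, needs_human = [], []
    for item in items:
        label, confidence = classify(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto.append((item, label))
        else:
            needs_human.append(item)
    return auto, needs_human

# Hypothetical classifier output, just for demonstration.
def fake_classify(item):
    scores = {"a": ("unsafe", 0.99), "b": ("safe", 0.55)}
    return scores[item]

auto, needs_human = route(["a", "b"], fake_classify)
print(auto)         # [('a', 'unsafe')]
print(needs_human)  # ['b']
```

Under this scheme the human queue is, by construction, dominated by the ambiguous cases; whether the clearest-cut vile content was actually filtered out before reaching workers is exactly what the article disputes.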


Sorry but did you read the article? There were some clear examples in which the workers were subjected to this content.

"That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI"

"Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery,) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document."


It’s a job that is safer and pays better than the alternatives. Don’t go imposing your view of the world on others and thinking you know what’s best for other adults. You won’t like it if I come into your life and do that, even if I were right.

In this case it's text. There are no graphic depictions of anything. There could be foul, abusive, or racist language, but that's much less difficult to deal with.


You sound like a few of the other commenters here who haven't read the article. Go and look at some of my other comments where I've quoted excerpts for others who couldn't be bothered to read before commenting.


I was with you on the first part, but text can and does contain graphic depictions.


Graphic descriptions, yes; depictions, no.


More than anything else, these companies call themselves high tech. They act as if they are doing revolutionary things. But under the surface it's just armies of cheap labour cleaning up ever-growing mountain ranges of shit.


Compare this to being a developer in Canada and working for a US company. Coworkers in the US make more as a base salary and pay less in taxes than Canadians do, even though the actual cost of living is not that different. The argument there is that it's not about the value of the work delivered; it's the cost of competing in a market. Say your average dev for that position makes 140k USD in the US, and in Canada the average dev for that position makes 100k CAD (74.5k USD). Most will pay that dev a modest amount over 100k CAD to attract the talent and compete in the local market, say 120k CAD, which is just below 90k USD. Is it abuse to pay them less than others who do the same work in the US? Most who do the work in Canada would probably agree that it would be nice to make the same amount, because who doesn't want more money, but in reality it's still a good salary in comparison to most other jobs locally.
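The arithmetic in that comparison can be checked directly, using the exchange rate implied by the comment's own 100k CAD = 74.5k USD figure (all numbers are the comment's illustrative values, not market data):

```python
# Checking the salary comparison using the comment's own numbers.
CAD_TO_USD = 74.5 / 100.0  # ~0.745, implied by 100k CAD = 74.5k USD

us_base = 140_000    # average US base salary for the role, USD
ca_average = 100_000  # average Canadian salary for the role, CAD
ca_offer = 120_000    # competitive local offer, CAD

print(round(ca_average * CAD_TO_USD))  # 74500
print(round(ca_offer * CAD_TO_USD))    # 89400, i.e. "just below 90k USD"
```

The gap between 140k USD and roughly 89k USD is the "cost of competing in a market" point: the offer clears the local market comfortably without matching the US one.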

It's not exploitation if you're paying people more than their local cost of living and more than other local jobs pay. If you're just appalled that people are being paid an amount of money that, to YOU, based on your cost of living, isn't fair because you'd want to make more, then be appalled at capitalism as a whole and at how much work is hired outside of companies' originating countries. The production of almost everything globally is outsourced to locales where the cost to compete is lower than in the originating country.


Textbook example of socio-economic appropriation. That Karen should be identified ASAP and outed on Twitter.



