> What's surprising is how often I encounter fellow Americans mistakenly asserting the first amendment extends beyond government and into the private sector.
It's a confusion between the First Amendment and the principle of free expression, which is kinda forgivable given how closely they are related.
Not at all; they're directly at odds in this case. The President using presidential authority to restrict what Twitter can say on its own platform directly violates the First Amendment by restricting Twitter's free expression.
You can argue that twitter is also restricting others' free expression, but that is their right, while the government preventing twitter from doing so is a 1A violation.
It would be as simple as you describe if social media companies took responsibility as publishers for what they publish, but they don't. For a long time they have been picking and choosing which laws they want to abide by.
The fatal mistake, which will cost the FAANGs billions, is becoming editors rather than facilitators by appending links to Twitter users' posts. This doesn't scale (will all posts not edited, appended to, or rubber-stamped by Twitter be considered 'true'?) and also stretches the already strained meaning of Section 230. This opens the door to much greater regulation of Big Tech internationally.
'Section 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of "interactive computer service providers," including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish'.
Well... if you really want to get into it: Twitter can’t “fact check” the original statement, because it was a prediction.
It hasn’t happened yet. It’s unlikely to be mass scale fraud, of course, but it’s a future event.
It’s his opinion / prediction. As wrong as it is, that isn’t something they should “fact check”.
Twitter picked a really bad tweet to make a stand on. Like it or not, they chose to editorialize someone else’s content; that’s what a publisher does.
They didn’t have to “break a law” to now be liable for other content they “publish”. That’s all the EO says: that Section 230 doesn’t apply to publishers.
> It’s his opinion / prediction. As wrong as it is, that isn’t something they should “fact check”.
"I believe that if we allow people to leave their homes, everyone will die of Covid-19". Narrator: "There is no evidence that everyone will die of Covid-19, in fact there is plenty of evidence that everyone will not die of Covid-19".
At which point you would claim that the original statement, because it included opinion, could not be fact checked, even though presenting sources that counter the priors the claim rests on is a completely reasonable thing to do.
I’m gonna nitpick, but they didn’t editorialize it. They posted their view in the form of a reply. Editorialization would mean changing what he wrote. Literally the root word of “editorialization” is edit.
Allowed to? Depends how you view publisher vs platform.
They are a private company and can do as they like... but if they’re going to “take ownership” of information on their service they are breaking the spirit of neutral carriers and Section 230.
This seems like an odd hill to die on. Surely they could have fact checked 100 different Trump tweets with actual misinformation and not just his concerns/prediction.
I am of the opinion they wanted to do this for a while, planned it out poorly, and pulled the trigger on the wrong tweet.
> breaking the spirit of neutral carriers and Section 230
Section 230 has nothing to do with neutral carriers. Section 230 does not mention, imply, or otherwise suggest that there is such thing as a "carrier", much less that one need be "neutral".
Here's what it says:
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider
> No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
There's a bunch more, but it's all fluff or irrelevant. There are no relevant obligations (the law imposes only one obligation, relating to kid-friendly modes on websites).
Note that there is no categorization as "publisher" or "platform". You are protected from certain kinds of content. The New York Times, who clearly publishes their own content, still has Section 230 protections for comments made in the comment box on their articles, because that content is not made by the NYT, but by another individual.
Under section 230, you can only lose protection on a specific piece of content if you are deemed to be the publisher of that specific piece of content. So the worst thing that can happen to twitter here is that they are determined to be the publisher of Trump's tweet for the purposes of things like libel and copyright lawsuits.
Given that, can you explain what spirit of the law is violated, and how the spirit of "neutral carriers" (do you mean common carriers?) is related to this?
Ok, that’s fair. Editorialize isn’t exactly right - except - it’s not just a reply. It’s altering the format to include a new entry that no one else could make. I suppose in my mind that’s how I would view an editor redlining something.
What exactly would be the public benefit of social media companies taking responsibility for user content? As far as I can see this framework does a fair job of making the same remedies available, just against the authors instead of the social media companies.
They will be able to silence those views they don't like and promote the ones they do. It will become readily apparent which side of the fence a social media company is on.
> It would be as simple as you write if social media companies would take responsibility for the publishing rights for what they publish
What do social media companies "publish"? In this case, the only thing twitter published was a link to information about mail-in voting[0]. That's it. They did so in the context of a tweet. So at worst, twitter would be liable for any illegal content in either President Trump's original tweet, or in the content I linked at [0]. That is what current US law says.
To change that would require an act of Congress or a Supreme Court ruling. The Court is unlikely to rule in favor of Trump[1], as the conservative justices favor businesses' rights. So that leaves a new law or an amendment to the existing one, which would need to pass the House, and that seems unlikely as well.
Twitter publishes everything any person tweets. Those two comments are nothing compared to the power of selectively blocking or publishing users' tweets (which communication companies are not allowed to do) while at the same time not being responsible for copyright violations (which other types of media companies are).
Are you familiar with section 230? Which says that under current US law, twitter is not a publisher of the tweets it hosts. It would require an act of congress to change that.
Not to pile on, but really do read it. I see how you get to the White House's position, but it's a stretch. There don't seem to be any conditions tacked on. They don't publish other people's tweets. I think they could maybe be held liable for illegally removing tweets. The protection afforded for filtering does seem dependent on the motivation, but I'm not aware of any laws restricting what content they're allowed to remove, and again, in this case they didn't remove anything.
I was working at Google in 2007, and the type of filtering we had was very different from what Google does right now.
We had automated filtering based on word lists that took down sites that were hate-speech or porn related, to protect children.
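The kind of word-list filtering described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the blocklist contents, the function name, and the threshold are all hypothetical, not details of Google's actual system.

```python
# Hypothetical sketch of simple word-list filtering of the kind described
# above. The blocklist and threshold are illustrative placeholders only.
BLOCKLIST = {"badword1", "badword2"}  # stand-ins for real blocked terms

def should_take_down(page_text: str, threshold: int = 1) -> bool:
    """Flag a page if it contains at least `threshold` blocklisted words."""
    words = page_text.lower().split()
    hits = sum(1 for w in words if w in BLOCKLIST)
    return hits >= threshold

# A page containing a blocklisted term gets flagged; a clean one does not.
print(should_take_down("some badword1 content"))   # True
print(should_take_down("perfectly clean text"))    # False
```

The point of the sketch is how blunt this approach is compared to what the comment goes on to describe: it matches exact tokens with no notion of context, which is why such systems produced both false positives and easy evasions.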
Right now I'm paying monthly for YouTube Premium, but I see that the people I watch have to be extremely careful not to say a swear word by chance, or even say the name of the COVID-19 virus, because they are scared of losing their revenue stream. I don't see this as fair, because YouTube got so popular precisely _because_ it was allowed to publish anything without being responsible for copyright violations. It would be fine for them to do fact checking as long as they are politically consistent.
In the EU at least we have the GDPR, which limits how companies can use our data; in the US, at this point, they need some kind of counterbalance.
Hum, I'm also a YouTube Premium subscriber. I think I see the demonetization issue as somewhat separate. It's the result of negative news driving advertisers to fear their ads will be placed adjacent to content they disagree with, leading to negative publicity, right? YouTube's options seemed to be either to create tools for ad buyers to better manage the political palatability of the content their ads were placed next to, or to have the big spenders abandon them.
Regardless, I wasn't commenting on the morality of what big tech companies were doing, only the legality. Nothing in https://www.law.cornell.edu/uscode/text/47/230 suggests to me that the protections are in any way contingent on not removing certain content, and certainly nothing suggests they're contingent on not publishing content yourself in different contexts.
You can't separate demonetization from the question of publishing. Traditional media uses ad revenue to compensate content creators, but at the same time has a responsibility to hold the copyright for everything it publishes. Also, just by deciding to demonetize, Alphabet moves further from being a mere communications medium earning money by helping the spread of information, and becomes a decider of what users can communicate with each other (Joe Rogan is a great recent example).
When a new law is created, it is often created _because_ something legal but immoral is being done by a person or company.
Also, the law you refer to is a US law, but Alphabet earns more than 50% of its revenue (and most views) outside the US. It has done illegal business in the EU multiple times on a grand scale and has been fined for it.
If I own Google and refuse to index and show in search results anything from the Huffington Post, CNN, or the Washington Post, is that okay? It is not! Yet technically, Google is a private entity and can do whatever it wants.
Once a company becomes too big like Twitter, Google, Facebook, they have a moral obligation to stay neutral.
Interesting how opinions expressing a viewpoint that differs from the hive mind gets down-voted.
I would argue that Google has an obligation to be transparent, they do not have an obligation to be neutral. As you state, they are a private entity and can do whatever they want.
If news providers and other knowledge providers are allowed to curate what data they present then I don't think it's reasonable to demand that Google be held to a higher standard. Further, literally nothing is stopping you from creating your own knowledge aggregator if you feel that Google is doing a bad job of displaying pertinent data.
I'm a big proponent of free speech, and have read a bit on the arguments against Big Tech censorship. One of the arguments against Google being able to selectively censor political content, despite being a private company, is that they could be classified as an essential service. I'm obviously getting information from sources opposed to Google's censorship, so I don't know if the wider legal community agrees with that view, but it's worth considering.
Another argument is that they have legal protections as content providers. However, the same protections don't apply to content publishers. If their censorship places them in the publisher category, they could open themselves to lawsuits. YouTube is an example that usually comes up. If a user uploads an illegal video, YouTube has protections against lawsuits. As a publisher, they would have more liability for the content they host.
Violating SEO rules is bad, and it is understandable that such sites should be removed.
Is it good for democracy if Google removes all results from news sites it doesn't agree with? It should remain as neutral as possible and not tamper with its search results.
> They are just using Trump's tweet to promote their own point of view.
Correction: they're using their own service to promote their own point of view.
I honestly think there are real issues with the public means of communication being privately owned by a smallish number of entities, but it does free expression no good to make the fight about letting lies, disinformation, and other forms of untruth to flow unimpeded.