You have a fundamental misunderstanding of what Section 230 of the CDA entails[1]:
> Section 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"
What Section 230 says is that providers of interactive computer services are not liable for content created by their users. That's it. It doesn't matter what the interactive computer service does, or what it publishes or doesn't publish. If you're a provider of an interactive computer service, you're shielded from liability for user-uploaded content.
Thanks for the details. I'm curious though, couldn't traditional publications just say their articles are "user content" and absolve themselves of all legal liability, like defamation?
Assuming the content was paid for by the company, whether produced by contractors or employees, I think a court could see through calling it 'user content' pretty easily.
If it's just uncompensated randos...well then it wouldn't be a traditional publication anymore.
Since the work was generated in the course of that employee's duties for the company, I'm not sure why you think the company would be absolved of liability for content it commissioned itself. In this capacity, that employee is the company.
Yeah, that makes sense. I'm wondering how it works with something like TikTok's Creator Fund [1] or Twitter Blue content creators (who might be compensated in the future), or even Medium.
I'm sure that if TikTok paid a creator for content that was disparaging or illegal, the resulting chain of suits would involve both the creator and TikTok, barring complex indemnification agreements in whatever contracts were signed. S230 doesn't magically make companies "immune from everything" like a lot of people seem to think. It just correctly clarifies that person A posting on site B generated the content, and is who you should sue first if it's disparaging.
That's "how it works" under the laws in place today; there doesn't seem to be a deficiency.
This is likely one reason why YT and others demonetize controversial videos: for their own self-preservation, they don't want to fund the creation of content that might invite a lawsuit.
I don't think anything's broken here. Definitely nothing that would be fixed by this Florida bill.
> S230 doesn't magically make companies "immune from everything" like a lot of people seem to think
However, some companies do argue that Section 230 shields them from all liability arising from their interactive computer services, and lower courts have tended to agree with them[1].
That case[1] has since made its way to the Supreme Court[2], though.
If the First Amendment didn't exist, the government could force a bakery to bake Nazi cakes. Even as things stand, the government can still require you to bake cakes for members of certain protected classes. So I guess if political affiliation became a protected class, then maybe the government could force a bakery to bake a Nazi cake? I dunno.
> So if all the banks decided to stop doing business with all Democrats, you'd argue that was protected by the First Amendment?
Correct, it is. It would be foolish business, but they could do it. At least, that's my understanding.
What would more likely happen is this:
> Business announces no-Democrat policy
> Policy goes viral
> Business cancels policy due to public pressure and other businesses cutting ties due to the bad PR
> Republicans yell about 'cancel culture'
---
> So if you own a bakery and a Nazi insists you bake them a Nazi cake, you have to bake it I guess?
The First Amendment works the opposite way: that's exactly the kind of thing a business can refuse (though in some cases there are narrow carve-outs for certain protected characteristics).
You're being downvoted because your post contains misinformation. Facebook isn't a publisher when it comes to user-made posts, so calling it one is inaccurate.
> This isn't about Section 230. We need new legislation to rein in the unchecked ability of Silicon Valley to curb speech they disagree with.
Unchecked? What happened to the free market that the GOP constantly talked about, the sacred thing that the government shouldn't interfere with?
Not to mention there's no actual evidence of this:
> systematically denying their political enemies a platform.
What's happened so far is conservatives breaking platform rules far more often than progressives, and then getting very upset when the consequences kick in (though, to be sure, I've seen leftists get mad about this too).
There is no "admission you're a publisher"; it's a meaningless concept with respect to the laws that are on the books. The law is: hosts of user-generated content can remove that content at their discretion, and still not be considered the "speaker" of all the other content they do not remove (obviously, because policing everything at scale would be impossible, and because Twitter plainly did not write the post).
If you kick someone out of your restaurant for being loud and boisterous in the dining area, you are not suddenly responsible that someone was disparaging a public figure the next seat over and you chose not to kick them out.
> I'm not an expert on this by any means, but isn't this an admission that these companies act like publishers? Yet they have Section 230 protection?
Having some standards doesn't automatically make you a publisher. A site with user-uploaded videos that removed, say, all porn would not thereby become a publisher.
Now, obviously there's the question of, how many restrictions until you're a publisher? I'm not sure the law is clear on that.
> Now, obviously there's the question of, how many restrictions until you're a publisher? I'm not sure the law is clear on that.
As a provider of an interactive computer service, you can impose as many restrictions as you like on the content you choose to serve. Section 230 of the CDA applies to all interactive computer services and their providers, no matter what.
From the EFF[1]:
> Section 230 says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230). In other words, online intermediaries that host or republish speech are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do. The protected intermediaries include not only regular Internet Service Providers (ISPs), but also a range of "interactive computer service providers," including basically any online service that publishes third-party content. Though there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.
Thanks for this. Though I'm not going to trust it 100%, even though I like what it says, since, y'know, the EFF is obviously inclined toward that take.
I'm not an expert on this by any means, but isn't this an admission that these companies act like publishers? Yet they have Section 230 protection?
How does that work? If they're removing legal content, you should be able to sue Twitter for content they leave up, right?