The immunity must be limited to sites that are neutral, in the same way that non-political religious organizations are tax-exempt. Why? Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view. When the site loses its neutrality it ceases to be a conduit of content and instead becomes content. Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.
You're essentially restating the concept of Common Carrier status[1], which is the inspiration for what Sec230 erected on the internet for content providers, so YouTube couldn't be charged with Material Support just because someone uploaded ISIS videos.
I'm not sure I understand what you're referring to there.
Edit: I mean that being prohibited from editorial interference seems like a pretty long distance from being essentially completely protected when engaging in editorial interference.
To be sure, it's not "completely protected," as we've seen with The Pirate Bay, etc., but Common Carrier says the phone company indeed can't refuse to rent you a phone number just because you want to run a sex line on it. That YouTube has protection against people posting videos of themselves speeding or doing drugs (it's always "vices," natch) was a necessary consideration in Sec230.
I guess there are arguments to be made about how far apart these might be conceptually, but what would the country/world look like without Safe Harbor? We may be about to find out.
I'm a huge supporter of the safe harbor and I hope different kinds of communications intermediaries will continue to be free of legal responsibility to know or control what people communicate through their services and platforms... I was just questioning the conceptual part in your observation that the common carrier regime was the inspiration for §230, which I don't see as straightforward because of this difference about content discrimination.
So in the existing §230 structure, intermediaries are immune from liability related to the speech of their users. However, the intermediaries remain immune even if they, for example, remove things they disagree with. That does mean that openly editorially biased sites can and do benefit from the immunity.
This still seems to be point-of-view neutrality on the government's part: an anti-fooist site that removes fooist comments gets the same immunity as a fooist site that removes anti-fooist comments (or a neutral site that removes neither). Is your view that it's wrong for the government to, in a sense, help the fooist site in the first place even though it's equally willing to help the anti-fooist site in the same way? Does that mean that there shouldn't ever be a subsidy for "newspapers" (open to any newspaper regardless of its editorial line or policies), but only for "neutral newspapers" (that don't editorialize)?
I don't think this is the same as a subsidy. It's about immunity from legal consequence.
If a website can curate what is said so that it is visibly filled with defamatory material about a specific target, the organization curating the material maintains complete immunity from what would otherwise destroy a newspaper or any other organization that is actually liable for what is printed in it.
I don't think this should be possible for any organization, regardless of who they are. This immunity should require extreme impartiality on the part of the website. It's meant to protect organizations from speech they don't control -- when they assert any control over that speech, it becomes their speech.
This is complete nonsense; it's entirely impossible to run a large-scale conversation without stepping in editorially to some degree. The impartiality criterion you are specifying is probably impossible to define sufficiently in practice, and the fact that people may say nasty things on the internet is an acceptable cost of having a free global communication system.
A site like Twitter or Facebook that has millions of users contributing content can edit the feed to convey a particular message by only showing posts that fit. Just like an author who chooses particular quotes that fit an article, but on a larger scale, and the "article" is now made only of quotes.
IOW, the message can be crafted by the platform even if the words are provided by the users.
There is no article. Trying to reduce a many-to-many conversation to a simpler construct in order to justify regulating it is poor analysis.
Platforms may or may not be slanted; people can pick a different platform if they don't like their current one's slant, or make their own. People ought to have no right to demand that a platform be neutral in what communication it facilitates between its users, and any attempt to enforce such neutrality is inevitably going to devolve into favoring those with the juice to hire lawyers to suppress speech.
Whether you call it an article or something else, the feed as a whole, if chosen in a biased way, represents the point of view of the selector.
Just as an article made of quotes would.
And if the message in the feed/article is libelous, it's shameful (though perhaps legal) to hide behind the argument "I was only quoting other people."
> Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view.
I have a lot of sympathy for the idea that a neutral service should not automatically be responsible for the content it unknowingly transfers on behalf of a small minority of its users. This principle would support common carrier status for services like post delivery or phone networks.
But I also think we have to be careful not to go too far. The potential damage caused by an online distribution network in terms of sharing material that should not be shared is many orders of magnitude greater in terms of potential reach and rate of distribution than the analogous damage caused by a postal or phone service. We shouldn't assume that a reasonable balance between responsibility and safe harbour in one context is necessarily still a good balance in a different context.
Today's big IT businesses make staggering amounts of money from their online services. Some of those services, like YouTube, host significant amounts of illegal content, and sometimes that content was part of the main attraction that got the service established in the first place. I don't think it's unreasonable to suggest that the operators of such services shouldn't get a complete pass on facilitating substantial amounts of illegal activity just because policing their services more effectively, or at least providing enough resources to respond quickly when they are actively informed of a problem, would be inconvenient or cost them money.
The same goes for disruptive businesses like Airbnb and Uber. I have nothing against the disruption itself, where someone is using innovative business models that take advantage of modern technology to compete with established big players. However, I do have something against innovative business models that offer a certain advantage over incumbents only because they don't follow the same rules as everyone else. If some of those rules are no longer fit for purpose then they should be changed or removed appropriately, but then they should be changed or removed for everyone, too.
> The immunity must be limited to sites that are neutral, in the same way that non-political religious organizations are tax-exempt. Why? Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view. When the site loses its neutrality it ceases to be a conduit of content and instead becomes content. Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.
1) Yeah that works great until someone uploads content about children being sexually abused and you can't take it down because taking it down is a non-neutral action (censorship).
2) Same, but non-consensual pornography of someone's ex.
3) Same, but advocating a clear and immediate desire to commit violence.
4) Same, but "fighting words" (a well-known exemption to free speech protections when said to someone's face).
5) Same, but obscenity.
There are some very, very massive flaws of that nature with your position and I'd continue but I think you are getting the idea. Such things don't only impact the speaker and therefore create a situation where the provider should (in theory) censor them.
Similarly, such things have been ruled to be outside of the bounds of "free speech" in the US by the judicial system.
Liability is not the same thing as a court order. If you sue someone for libel and win, and the libel is hosted on YouTube, YouTube can't say "nope, Section 230" and keep hosting it. The court can order them to take it down. YouTube just doesn't owe you any damages, you have to take that up with the user.
> Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.
That would be the case if the government was deciding who could be indemnified based on content, but that isn't what's going on.
Consider what you would be doing to search engines. Their entire purpose is to sort content by relevance. There is no opinion-free way to do that, otherwise every search engine would have exactly the same results. You want to impose liability on Google and Bing because they index the whole internet and the internet has bad stuff on it?
> You want to impose liability on Google and Bing because they index the whole internet and the internet has bad stuff on it?
Yes. If you found some, I don't know, child pornography, say, lying in the street, then went around showing it to people saying, "hey, look what I found," that would be considered illegal/repugnant/stupid, right? Why should that same action, analogized to the digital domain, be any less illegal/repugnant/stupid? I don't think it is.
Analogies are often useful tools for explicating things by relating them to already-understood concepts, but they are utterly useless for making proofs or arguments, because an analogy oversimplifies to the point of uselessness and misses the ways the things differ.
These arguments via analogy are so worthless, so without substance, that it is usually a massive waste of time to explain to the originator the ways in which the analogy differs from reality, so I will simply ask you to come back with an argument based on reality instead.
We aren't discussing one-time delivery but an ongoing availability as it is present on the website and would be delivered for weeks/months before a court order to take it down was received. This is more akin to a broadcast where someone picks the channel (url) than the example you provided.
That isn't the same thing as a sealed point-to-point non-public delivery of a message and to imply it is equal and equivalent is disingenuous.
There was at one point a tiny number of very expensive-to-run networks, which could reasonably be expected to bear the small cost of putting their money where their mouth was every time they showed someone on TV.
Your concept would in fact either go laughably unenforced or destroy many-to-many communication on the internet as we know it, since there would be no large channels for distributing information of any sort; showing anything that couldn't be shown on Nickelodeon would be an unacceptable risk in a world where most communication has few viewers and earns little or no money.
Once again I am at a loss to understand what is so bad about the way things are that this seems like a good solution.
I'm pretty sure you completely misunderstood what I said given I was replying to a chain of comments like this one:
> > The immunity must be limited to sites that are neutral, in the same way that non-political religious organizations are tax-exempt. Why? Because to claim that a site isn't responsible for a user's content is fine, until the site starts editing, censoring or weighting certain points of view. When the site loses its neutrality it ceases to be a conduit of content and instead becomes content. Indemnification from liability for a specific point-of-view is, I feel, an abridgment of free speech and I believe it is unconstitutional.
Not sure how an internet service provider could be responsible for "fighting words"--that's a very, very narrow doctrine essentially confined to yelling something in someone's face that immediately gets you punched.
They lose their tax-exempt status if they become political actors. I can't recall the exact language now because I've been out of that world for a long time.