
The most interesting sequence of the decision to me:

>The Court reasoned that only once Google acquired knowledge of the paragraphs by reason of Dr Duffy’s notifications and failed to remove them within a reasonable time thereafter would the necessary mental element be present for Google to be a “secondary publisher”.

> ...

>The Court then turned to look at the notice given by Dr Duffy, to ascertain whether it was sufficient to fix Google with the relevant mental element. The Court concluded that Dr Duffy's communications with Google comprised adequate notification to them of the allegedly defamatory material; this was despite the fact that some of the URLs were incomplete in these communications. The Court also implied that a reasonable time for removal of content would be one month, which had not been met by Google.

>...

>The Court rejected Google’s defenses of innocent dissemination, qualified privilege, and justification (truth). In dismissing the innocent dissemination defense, which required that the publisher be a subordinate distributor who did not know or ought not to have known that the matter was defamatory, the Court stated that the defamatory nature of the content was self-evident from an examination of it.

Emphasis on the last line.

It's not like she had to go to Google and say, "here is a court judgment showing this content is defaming me" to have it removed. She simply had to notify Google that this "self-evident" defamatory content was being served by them.

That seems like a significant precedent to set. I don't really think it is fair to expect Google to determine which content is "self-evidently" defamatory (edit to add: amongst all the takedown requests they will receive).



I think it's a fair precedent. If Google feels the content is not defamatory, they can take that position in court, with consequences if they are wrong. The downside is that this ends up like the DMCA, where companies are likely to take down any content that is complained about because of the risk/reward tradeoff; but since we already do that for alleged copyright violations, doing it for alleged defamation seems much more reasonable to me. They also don't have to remove all self-evidently defamatory content, just the self-evidently defamatory content they are notified they are serving.


The court case seemed to hinge largely on the fact that she TOLD them the info was there, that it was defamatory, and they chose to keep it up anyway. Had they taken it down, they would've been fine.

So this doesn't seem like much of a precedent at all: If you are told you have defamatory content, you need to remove it. They don't really have to change much (anything?) about their search engine to make this happen. Maybe a link for "need content removed?" or whatever.


This will continue the trend for more censorship in search.


Crying "censorship" is not and cannot be a blanket defense for companies to get away with not moderating their content, and for Google this information is content.

Google in particular left behind the excuse of "we just serve what's already available" when they started weighting results for political and commercial gain and labelling things as factual. They clearly do not act as a dumb pipe for information gathering.

IMO, as soon as they started exercising editorial privilege over the data they serve, they became responsible for this kind of thing. Uncensored search results can only come from an engine that isn't already censoring for its own benefit anyway.


How will this add to "censorship"?


Google and others will likely prefer to take a hardline stance on what is “self-evidently” defamatory, and when you combine heavy-handedness with automation, you will almost surely be censoring legal content.


To borrow the court's own phrasing, it is self-evident: if you can be punished for stuff that you index (as a search engine), you will start to self-censor to avoid punishment.



