>"In dismissing the innocent dissemination defense, which required that the publisher be a subordinate distributor who did not know or ought not to have known that the matter was defamatory, the Court stated that the defamatory nature of the content was self-evident from an examination of it."
(extracted from https://globalfreedomofexpression.columbia.edu/cases/duffy-v...)
I am no fan of Google, but this seems bizarre to me, since it seems to imply that some human has to read every web page linked by Google. Even the requirement that Google respond to every complaint about content on third-party websites just cannot possibly scale.
On the other hand, there is a party that is responsible for the defamatory content, namely whatever web app had collected and posted the comments. The claimant should have pursued them, not Google.
Just because it can’t scale, doesn’t mean it isn’t wrong.
There is no right to a scalable business model. If you can’t figure out how to do your business without doing it right, you shouldn’t be doing it.
Our local coal fired power plant has been yelling for decades that they can’t possibly make a profit and meet all the environmental obligations. Well… guess what.
> Just because it can’t scale, doesn’t mean it isn’t wrong.
That. I'm so tired of this excuse being used, which basically amounts to complaining that you wouldn't have been able to get so big and fat if you hadn't broken the rules.
That's true in principle, but in this case I think that the existence of search engines is a net social good and I think it would be a big loss for all of us if they disappeared. Not that the days of web rings and hand-curated website lists didn't have their own charm but I think that the net effect would be a huge increase in the power of walled gardens.
It's not just a fun quip: it's a moral principle that I really think more people in our industry should come around to.
What we do has consequences. Often, profound consequences for vast numbers of lives. We have a responsibility, as individual contributors, managers, leaders, and "founders", for the outcomes of our work.
If Google is as smart as they'd like us all to believe, they can find a way to make their business work. Sure, the margins might not be quite as fantastic, but society doesn't owe them maximum return to its own detriment.
It's not my job to solve Google's scalability problems, that's their job. It's my job to hold them just as accountable as my local coal power plant for the choices they make. If Google wants me to love their brand, and support their work, then they should stop being a social and intellectual polluter. It's a lot easier to see and sense the danger of toxic fumes from a power plant than to see and sense the toxic danger of massive social media and tech companies, but they are no less real and no less lethal.
"It's not just a fun quip: it's a moral principle that I really think more people in our industry should come around to."
Which I actually buy into, and have lived by for many years, but it also just seems a bit silly applied to this case.
The usual answer is "it's not my job", which you use here.
That's great - throwing rocks from the sidelines is real easy, but it's not clear exactly what you want to happen, so let's instead actually be clear and concrete about that.
So again, concretely: Is your suggestion that someone should review every single web page crawled by a search engine, Google or anyone else?
If not, can we move past the silly quips and try to get to a better place constructively?
If I put something defamatory on my website I can get sued.
Why should it be different for Google?
"We cannot have someone response to every complaint!" – okay, I understand. Then maybe don't do whatever you're doing at all then if you can't handle the responsibility?
Let's not change the subject - this isn't about responding to complaints - the suggestion here was basically that someone should have to review every single web page that gets indexed.
Either that's what we want or it isn't.
Let's not change the subject because this is a line that might have to be drawn.
Google's attempt at a defence implied that they should have checked the pages. There is plenty of internet related law to protect internet companies from responsibility for user submitted content. You can make tons of applications for tons of businesses without breaking the law, and that includes building a search engine.
In this case, Google was notified about the defamatory content on their platform. From that moment they knew, or reasonably could have known if a human actually dealt with their legal notices, that the content was breaking the law. They did not remove the material, and the case was brought to court, where Google claimed they were a mere subordinate distributor left in the dark.
If they can't operate their product without dealing with legal complaints, then yes, they should hire more people or reduce the ways their search engine can break the law.
If there is no right for individuals to have a productive life with plenty of free time per day, then a huge number of HN comments on topics such as politics, housing, social welfare systems, healthcare, transportation, etc. are obviated.
Which doesn't seem like an appealing proposition to accept.
There are specific laws indirectly governing work-life balance, such as maximum consecutive hours worked, certain contract stipulations being null and void under general law, regulations regarding sick pay, and protections against discrimination, but I don't think there are many countries that explicitly state a right to a productive life with free time.
Rights to housing, welfare, and healthcare are generally handled independently. I'm not sure if there is a "right to transportation". Most rights originate from a basic "right to a happy life" ideal, but are split up and spelt out in particular sub-rights that are easier to enforce in court.
Article 8 of the ECHR and similar human rights conventions seem to come close, but that's mostly used against governments and laws.
I don't think it's a bad idea to introduce such a right, but it needs to be carefully worded or it will cause a lot of trouble.
There are plenty of things you can do in life that are productive and provide you with leisure time that don’t also mean you can destroy the lives of others.
You might as well argue for slave labor. Because that’s certainly one way to be productive and have a lot of leisure time. But it’s wrong. None of this exists in a vacuum. Why should other people suffer so you can have free time?
The court can already do this if the ruling holds, which is why we are discussing it in the first place. You're phrasing it as if it's up to individual members of society to decide to impose on each other. If you think this is relevant, can you describe how?
Yes, everyone wants a better life for themselves, and they don't want to consider all the trade-offs and consequences of that. In particular, they often want to force their own ideals on someone else.
In the US, corporations have the same rights as people because they are collectively made up from people.
In the EU, corporations have fewer rights than the individuals that collectively make them up.
In other countries, different rules apply de jure or de facto.
People’s main complaint about corporations is that no one personally gets hit with a civil charge and civil charges never seem to change the behavior of corporations.
> because it was on notice that the material was defamatory and refused to remove the information, it could not be found to have innocently circulated the information
This is the second case she launched against Google. The first one determined that Google was publishing defamatory information, they settled, but continued to publish the information.
So this is probably a situation where they should have had a human looking over it.
"Looking at every page" doesn't scale, but "looking at every page we lost a court case over" should be doable, you would think.
> and, because it was on notice that the material was defamatory and refused to remove the information, it could not be found to have innocently circulated the information.
So at the very least this is not just about linking to something, but about not removing defamatory content after being put on notice. The obligation to respond to requests and remove certain content from a search engine sounds a lot less unreasonable than merely being found guilty of linking to something.
Irrespective of this specific ruling, the laws don't need to make Google's business scale or even be viable. Google should only exist if its business can comply with the rules we created for our society.
Often laws are drafted without considering business models that have yet to be thought of. When those businesses start operating it's common for laws to be changed to ensure they are properly regulated.
That's quite literally Uber's business model for large parts of the world.
It's rather sad that these kinds of "businesses" aren't just banned and prosecuted as criminal conspiracies. I really think that's the appropriate classification for an organisation that goes into a country, sets up a business it knows is illegal, stokes up violence, and rakes in profit (well, "profit", because it still doesn't actually make a profit).
Amid taxi strikes and riots in Paris, Kalanick ordered French executives to retaliate by encouraging Uber drivers to stage a counter-protest with mass civil disobedience.
Warned that doing so risked putting Uber drivers at risk of attacks from “extreme right thugs” who had infiltrated the taxi protests and were “spoiling for a fight”, Kalanick appeared to urge his team to press ahead regardless. “I think it’s worth it,” he said. “Violence guarantee[s] success. And these guys must be resisted, no? Agreed that right place and time must be thought out.”
The decision to send Uber drivers into potentially volatile protests, despite the risks, was consistent with what one senior former executive told the Guardian was a strategy of “weaponising” drivers, and exploiting violence against them to “keep the controversy burning”.
It was a playbook that, leaked emails suggest, was repeated in Italy, Belgium, Spain, Switzerland and the Netherlands.
This is not "pff, violence won't happen". That's a subjective assessment. It's "okay, violence could happen, that would be fantastic for us! Let's send our employees so that we can use that as an argueing point!" (always good to make the other guy look like a violent thug).
All in the context of Uber intentionally breaking the law (which is not my assessment, it's their own, and that of the French authorities).
What about this bit, though? Feels like they made their own bed to me.
"Google continued to publish the defamatory content in Australia for two years after it was found to be defamatory. In 2022, again self-represented, I endured another trial. Further details are on this page."
The number of human workers needed to process the requests must necessarily scale with the number of requests.
What may be desirable is the case where cost does not scale with revenue. But that should be no guarantee of long-term gains, because it leaves room for a competitor to make a cheaper offer at the same internal cost.
Just the use of "scaling" seemed a bit too narrow.
Produce x pieces and earn y. Produce 1000 times x pieces and earn 1000 times y. That's scaling in the traditional sense. "Everything" (+/-) scales.
Produce once at a fixed cost and earn without limit - that's something beyond just scaling. Maybe leverage? (See the sketch below.) If it provided the same quality at lower prices it would be good for society. If prices stay high and there is an indirect cost in degraded quality for the sake of huge gains, something might not be right.
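To make that distinction concrete, here is a minimal sketch with purely hypothetical numbers and function names (not taken from anything above): under traditional scaling the margin stays proportional to output, while under the fixed-cost "leverage" model it grows without bound once the fixed cost is recouped.

    # Illustrative sketch only; the numbers are made up.

    def linear_scaling(units, cost_per_unit, revenue_per_unit):
        """Traditional scaling: cost and revenue both grow with output."""
        return units * revenue_per_unit - units * cost_per_unit

    def leverage(units, fixed_cost, revenue_per_unit):
        """'Produce once' model: cost is fixed, revenue grows with output."""
        return units * revenue_per_unit - fixed_cost

    # Profit grows in proportion to output under linear scaling...
    print(linear_scaling(1_000, cost_per_unit=8, revenue_per_unit=10))      # 2_000
    print(linear_scaling(1_000_000, cost_per_unit=8, revenue_per_unit=10))  # 2_000_000
    # ...but under leverage, once the fixed cost is covered,
    # nearly all additional revenue is profit.
    print(leverage(1_000, fixed_cost=8_000, revenue_per_unit=10))           # 2_000
    print(leverage(1_000_000, fixed_cost=8_000, revenue_per_unit=10))       # 9_992_000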
Google has a takedown process and it failed. At that point, Safe Harbor no longer applies and we're in this category of "What standard would a newspaper be held accountable to?"
The "view cached page" link on Google is a copy of a website stored on Google's servers. IANAL, but using automated software to blindly ingest everything under the sun and then redistribute it does not seem like an argument any court would accept for why they shouldn't be liable for what they're publishing. Also, it seems like the issue in the article is that she specifically went through their take-down process and they still didn't remove it.