Google’s Supreme Court case could disrupt the whole internet
Big Tech may now be legally responsible for recommendation algorithms
This week, the Supreme Court of the United States heard arguments in Gonzalez v. Google LLC, a case that could have far wider repercussions for speech on the internet than the ongoing release of the “Twitter Files”.
The facts of the case are tragic. In November 2015, Nohemi Gonzalez, an American exchange student, was eating at a bistro in Paris when she was shot and killed during a terrorist attack — allegedly carried out at the behest of ISIS — that also claimed the lives of 19 other people. Shortly afterwards, the Israeli legal nonprofit Shurat HaDin (“letter of the law” in Hebrew) approached Nohemi’s parents and asked if they wanted to sue Google for violating the US Anti-Terrorism Act because its YouTube site not only hosts ISIS content but, via its recommendation algorithm, automatically suggests ISIS and ISIS-related videos to users who have evinced an interest in that content.
Lately Shurat HaDin, which has won billions of dollars in judgments for victims of terrorist acts, has been pursuing even richer prizes, unsuccessfully attempting to sue tech companies for hosting terrorist content. Now, despite earlier losses at the lower-court level, the case has finally reached the Supreme Court, where it stands a chance of prevailing in a potentially paradigm-shifting decision.
The question of law in the Gonzalez case is whether Section 230(c)(1) of the Communications Decency Act immunises interactive computer service providers when they make targeted recommendations of information provided by another content provider (in this case, ISIS videos hosted on YouTube). Google contends that it does, since that section was a specific grant of immunity for internet service providers in an act otherwise intended to regulate minors’ access to pornography. This has freed actual providers of internet access, as well as content-hosting and social media platforms like Facebook and YouTube, from the threat of frivolous litigation, allowing them to grow into major industries over the past three decades.
Eric Schnapper, the lawyer representing the Gonzalez family, argued that this limitation of liability should only apply when a service provider is engaging in traditional editorial functions, such as explicitly deciding whether to display or withdraw content, in a manner consistent with its capabilities at the time of the act’s passage.
This case, like all others involving emergent technologies, poses a major challenge for the Supreme Court. Although the court no longer has any octogenarian members, only two of the nine sitting justices — recent appointees Amy Coney Barrett and Ketanji Brown Jackson — were born after 1970. When speaking during oral argument about the Supreme Court’s possible involvement in rolling back the immunity afforded by Section 230, Justice Elena Kagan, who is 62, remarked that “we really don’t know about these things”, joking that “these are not like the nine greatest experts on the internet” while echoing comments she made nine years ago about the justices’ inability to use e-mail.
Fortunately, the justices seemed sceptical of Schnapper’s attempt to articulate a repeatable, bright-line test for when companies like Google could be held liable for sorting and recommending third-party content. This is good, because a muddled or botched decision in favour of Gonzalez could greatly chill potentially useful content production. YouTube’s ability to introduce content via its algorithm can indeed seem insidious, but the alternative is far worse — an ever-growing litigation bill that may eventually force these providers to decide against hosting even slightly controversial material at all.
Congress has considered revising Section 230 in the past, but has yet to take any tangible steps towards doing so. If immunity for providers is withdrawn or modified at some point by legislators, one model for a potential content-hosting platform that complies with this changed state of affairs would involve hosting all content without censorship whilst complying with all government requests to remove illegal content and track down the offender. All government communications that lead to content removal would be posted in full in a searchable archive. Such a platform would be extremely cheap to run — it could be essentially hands-off once there’s a single script to deploy — and also pass all legal liability on to content creators themselves.
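The core of such a platform could be remarkably simple. A minimal sketch of the takedown-and-transparency mechanism described above — all names (`TakedownRequest`, `TransparencyArchive`, `comply`) are hypothetical, invented here for illustration, not any existing system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class TakedownRequest:
    """A hypothetical record of one government request to remove content."""
    agency: str
    content_id: str
    legal_basis: str
    received: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class TransparencyArchive:
    """Append-only, publicly searchable log of every takedown request.

    Content is removed on request, but the request itself is never
    deleted — mirroring the article's proposal that all government
    communications leading to removal be posted in full.
    """

    def __init__(self) -> None:
        self._log: list[TakedownRequest] = []
        self._removed: set[str] = set()

    def comply(self, request: TakedownRequest) -> None:
        # Remove the content and record the request in full.
        self._removed.add(request.content_id)
        self._log.append(request)

    def is_hosted(self, content_id: str) -> bool:
        return content_id not in self._removed

    def search(self, term: str) -> list[TakedownRequest]:
        # Case-insensitive search over agency names and cited legal bases.
        term = term.lower()
        return [
            r for r in self._log
            if term in r.agency.lower() or term in r.legal_basis.lower()
        ]
```

The design choice doing the work here is append-only logging: because nothing is ever struck from the archive, any observer can audit which agencies demanded which removals and under what claimed legal authority.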
In any case, the hosting of content on these platforms constitutes a quasi-public service of considerable social value, and burdening providers with additional costs will benefit plaintiffs’ lawyers at the expense of everyone else.
“…the hosting of content on these platforms constitutes a quasi-public service of considerable social value…”
A very debatable view. Do we actually think that the influence of the internet has been “of great social value”? Personally I think it may turn out to have been the single most damaging thing ever for both society and mankind.
shut up, eat the bugs
“The hosting of content on these platforms constitutes a quasi-public service of considerable social value.” The author overlooks the fact that the service is provided with the intention of economic gain for the service provider. The algorithm is provided to maximise viewing with the aim of increasing advertising placement or eliciting a subscription. The service provider in the current market model is a publisher of freelance-generated content in the same way as any newspaper or broadcaster, and the responsibilities and liabilities should be the same. As for the concern that this may lead to censorship of content and limit expression, it is more likely to lead to a broader marketplace of service providers who understand and curate for a more focused audience, breaking the quasi-monopoly of Google.
If you follow Dr Malone – he says the entire internet as we know it – as formed by Google and Facebook et al – were CIA programs designed exactly to harvest data, and to control the agenda message. They have been, and are, more successful than anyone ever imagined.
DARPA, the Defense Advanced Research Projects Agency – the US military- and CIA-owned agency – super scary… I always think of:
”Lavrentiy Beria, the most ruthless and longest-serving secret police chief in Joseph Stalin’s reign of terror in Russia and Eastern Europe, bragged that he could prove criminal conduct on anyone, even the innocent. “Show me the man and I’ll show you the crime” was Beria’s infamous boast. He served as deputy premier from 1941 until Stalin’s death in 1953, supervising the expansion of the gulags and other secret detention facilities for political prisoners. He became part of a post-Stalin, short-lived ruling troika until he was executed for treason after Nikita Khrushchev’s coup d’etat in 1953.”
This is where the spooks manipulate and monitor you – think of them as the sheep dogs, and you the sheep.
The Plandemic and the virus created and the vax and response was DARPA… listen to Malone on Rumble, as the deep-state tool YouTube has banned him.
DARPA: “Communities: DARPA’s success depends on the vibrant ecosystem of innovation within which the Agency operates, and is fueled by partners in multiple sectors: universities, industry, small business, government, public, media.”
Need to avoid throwing the baby out with the bathwater – if ISIS content is harmful, why is it hosted on YouTube in the first place?
Presumably to make money for someone, somewhere. The first comment in this thread answers your question. But I’m intrigued by the ‘if’ in your comment, which implies that you think YouTube actually takes down harmful material, or that ISIS actually publishes non-harmful material.
I’m pretty sure YouTube takes down harmful content – at least within the definition of its T&Cs. Given it hasn’t been taken down, there must be a case for it not being harmful.
There is an amusing irony to Youtube spending years bending over backwards to pander to interest groups like the ADL and then getting potentially sued into oblivion by the Israeli Law Centre, that claims to be “protecting the state of Israel and safeguarding Jewish rights worldwide”.
“one model for a potential content-hosting platform that complies with this changed state of affairs would involve hosting all content without censorship whilst complying with all government requests to remove illegal content and track down the offender”
Surely that would only work if content hosts insisted on verifying the identity of content publishers?
That doesn’t seem a bad idea, provided there would still be the existing protections for legitimate journalists to protect their sources.
This will only work if you assume a benign government that only requests the removal of truly objectionable illegal content. I don’t think we have any of those, though some governments are vastly more tolerant of criticism than others. Can you keep your government from requesting that anything they do not like be removed via some sort of partnership between fascists in big tech and their ideological comrades in government? Fascists _love_ this sort of private/public cooperation, it is pretty much the definition of the breed. If you cannot restrain these authoritarians, then you are precisely in the situation where you want those critical of the government to be protected.
We don’t want government discretion in what should be censored. We want “everything is free to discuss except for this very short list” – a list which currently might comprise only child pornography and ISIS recruitment videos. Adding more things requires a new law.