Supreme Court Hears Google Case Brought By Family Of American Student Killed By ISIS In Paris

The Supreme Court heard oral arguments Tuesday in Gonzalez v. Google, which addresses the question of whether an “interactive computer service” like Google can be held liable for content recommended by its algorithms under Section 230 of the Communications Decency Act of 1996.

Section 230 protects internet companies from being held liable as the “publisher or speaker” of information provided by a third-party, simultaneously protecting their ability to restrict material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

The case, brought by the family of Nohemi Gonzalez, a 23-year-old American student killed in a 2015 ISIS terrorist attack at a Paris bistro, alleges that YouTube “aided and abetted” the attack by allowing targeted recommendations of ISIS videos designed to recruit and radicalize members.

The lawyer representing Gonzalez, Eric Schnapper, drew a distinction between recommending content and publishing it. Section 230, he argued, grants immunity only when companies are sued for failing to remove objectionable third-party content. By recommending content, a company takes affirmative action to promote it, which the petitioners contend falls outside the bounds of Section 230.

Justices pressed Schnapper on the scope of this interpretation, questioning how it would work when algorithms are so central to billions of searches made every day.

“You can’t present content without making choices,” Justice Kagan said. Chief Justice Roberts and Justices Alito and Kavanaugh raised similar concerns about the potential for endless lawsuits over algorithms that prioritize certain content, whether defamatory or otherwise objectionable.

This concern was at the heart of Google’s defense. The company’s lawyer, Lisa S. Blatt, said that “all publishing implies organization.”

Blatt noted that the internet would have never flourished in its early stages if websites had been faced with the threat of constant lawsuits based on how they arrange content.

Previously, the Ninth Circuit applied what lower courts have called the “neutral tools” test to Google, which grants immunity when algorithms serve content according to neutral rules based on user input.

Justices sought to clarify this test, probing for the point at which a company would lose immunity for the content it recommends.

Schnapper argued that even a “neutral” algorithm could result in liability. He said it matters what the defendant does with the algorithm—in this instance, allowing it to recommend ISIS videos—not how the algorithm works.

Malcolm L. Stewart, Deputy Solicitor General at the Department of Justice, who represented the position of the United States, said that to be held liable, a company must recommend content that “violates applicable law” in the state where the lawsuit is brought. An algorithm that is inherently discriminatory and uses “illicit criteria” would also be unprotected, as in the hypothetical scenario of a job website such as Indeed showing higher-paying jobs to white applicants.

“When a platform prioritizes content, it is their own conduct [and] subject to liability,” Stewart said.

Blatt, on the other hand, said an algorithm that promotes objectionable content is still protected, because the speech belongs to the original poster, not the website.

Justice Barrett asked whether an algorithm that promotes exclusively pro-ISIS content would be protected, and Blatt affirmed that it would.

The test, Blatt said, is to see where the “harm” is generated. As long as it is not the website’s own conduct creating harm—e.g., creating a dating algorithm that refuses to match white people with black people—the website is protected.

The court appeared hesitant to make any sweeping determination on Section 230, instead angling its questions toward whether Congress would be better positioned to address gaps in the law that have emerged as the internet has developed.

“We’re not the nine greatest experts on the internet,” Justice Kagan said, yielding a laugh from observers.

Justice Kavanaugh similarly raised concerns about the implications a court ruling could have on the economy and on the functioning of the internet, citing amicus briefs filed in favor of Google. “We’re not equipped to account for this,” he said.

Congress has not yet explored certain new issues related to Section 230, like artificial intelligence-generated content, Justice Gorsuch noted.

Justice Barrett questioned whether it was even necessary to address the Section 230 question if the petitioners lose on the charge of “aiding and abetting” terrorism in the next day’s case, Twitter v. Taamneh, which weighs whether tech companies can be held responsible under the Antiterrorism Act for hosting ISIS content on their platforms.

The Twitter v. Taamneh case was filed by the family of Jordanian citizen Nawras Alassaf, who was killed in a January 2017 ISIS attack in Istanbul, against Twitter, Facebook, and Google.
