In November 2015, ISIS terrorists carried out coordinated attacks across Paris, killing 130 people and injuring 400. Among the dead was Nohemi Gonzalez, a 23-year-old American studying abroad who was the first person in her large family to graduate from college. This week, lawyers for her family and others are in the Supreme Court challenging a law enacted more than a quarter century ago, a law that shields social media companies from liability for what the families see as the companies' role in aiding and abetting terrorist attacks.

How the court rules could be a game changer for American law, for society, and for social media platforms that are among the most valuable businesses in the world.

What the law says

At the center of two cases to be argued over two days is Section 230 of the 1996 Communications Decency Act, passed by Congress when internet platforms were just beginning. In just 26 words, Section 230 draws a distinction between interactive computer service providers and other purveyors of information. Whereas newspapers and broadcasters can be sued for defamation and other wrongful conduct, Section 230 says that websites are not publishers or speakers and cannot be sued for material that appears on those sites. Essentially, the law treats web platforms the same way that it treats the telephone. And just like phone companies, websites that are host to speakers cannot be sued for what the speakers say or do.

At least that is the way the lower courts have uniformly interpreted Section 230. They have said that under the law, social media companies are immune from being sued for civil damages over most material that appears on their platforms. That is so even though the law also has an apparently contrary objective: It encourages social media companies to remove material that is obscene, lewd, excessively violent, harassing or otherwise objectionable.

The attack at the heart of the arguments

This week's cases attempt to thread that needle. The Gonzalez family and the families of other terrorism victims are suing Google, Twitter, Facebook and other social media companies under the federal Anti-Terrorism Act, which specifically allows civil damage claims for aiding and abetting terrorism. The families allege that the companies did more than simply provide platforms for communication. Rather, they contend that by recommending ISIS videos to those who might be interested, the companies were seeking to get more viewers and increase their ad revenue.

Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.

"Now most of the money is made by advertisements, and social media companies make more money the longer you are online," he says, adding that one way to do that is by algorithms that recommend other related material to keep users online longer.

What's more, he argues, modern social media company executives knew the dangers of what they were doing. In 2016, he says, they met with high government officials who told them of the dangers posed by ISIS videos, and how they were used for recruitment, propaganda, fundraising, and planning.

"The attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff . . . those government officials . . . told them exactly that," he says.

Google general counsel Halimah DeLaine Prado vehemently denies any such wrongdoing.

"We believe that there's no place for extremist content on any of our products or platforms," she says, noting that Google has "heavily invested in human review" and "smart detection technology," to "make sure that happens."

Prado acknowledges that social media companies today are nothing like the social media companies of 1996, when the interactive internet was an infant industry. But, she says, if there is to be a change in the law, that is something that should be done by Congress, not the courts.

The choice before the court

Daniel Weitzner, the founding director of the MIT Internet Policy Research Initiative, helped draft Section 230 and get it passed in 1996.

"Congress had a really clear choice in its mind," he says. "Was the internet going to be like the broadcast media that were pretty highly regulated?" Or, was it going to be like "the town square or the printing press?" Congress, he says, "chose the town square and the printing press." But, he adds, that approach is now at risk: "The Supreme court now really is in a moment where it could dramatically limit the diversity of speech that the internet enables."

There are many "strange bedfellows" among the tech company allies in this week's cases. Groups ranging from the conservative Chamber of Commerce to the libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.

But the Biden administration has a narrower position. Columbia law professor Timothy Wu summarizes the administration's position this way: "It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230."

In short, hyperlinks, grouping certain content together, and sorting through billions of pieces of data for search engines are one thing, but actually recommending content that shows or urges illegal conduct is another.

If the Supreme Court were to adopt that position, it would be very threatening to the economic model of social media companies today. The tech industry says there is no easy way to distinguish between aggregating and recommending.

And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things. What's more, the Supreme Court has made it much more difficult to jump that hurdle. The second case the court hears this week, on Wednesday, deals with just that problem.

What makes this week's cases so remarkable is that the Supreme Court has never dealt with Section 230. The fact that the justices have agreed to hear the cases shows that they have concerns. Justice Clarence Thomas has been outspoken about his view that the law should be narrowly interpreted, meaning little protection for social media companies. Justice Samuel Alito has indicated he might agree with that. But the views of the other justices are something of a black box.

The cases are Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh.

Jordan Jackson contributed to this story.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Transcript

STEVE INSKEEP, HOST:

The Supreme Court hears arguments in two cases this week that challenge the laws for internet companies. Way back in the 1990s, Congress gave internet firms some immunity from being sued for things that people did on their platforms. This law came before the big explosion of social media in more recent years, but it has come to protect many companies from the kind of responsibility that a traditional publisher might face. This week's cases ask the court to eliminate some or all of those protections. Why? Here's NPR legal affairs correspondent Nina Totenberg.

(SOUNDBITE OF TV SHOW, "NBC NIGHTLY NEWS WITH LESTER HOLT")

LESTER HOLT: Major breaking news tonight - the city of Paris under attack. Over a hundred...

(SOUNDBITE OF TV SHOW, "THE SITUATION ROOM WITH WOLF BLITZER")

WOLF BLITZER: It's a very disturbing situation. And as heartbreaking as the numbers are...

NINA TOTENBERG, BYLINE: In November 2015, ISIS terrorists carried out coordinated attacks across Paris, killing 130 people and injuring 400. Among the dead was Nohemi Gonzalez, a 23-year-old American studying abroad. The first person in her large family to graduate from college, she was gunned down by terrorists who fired into a cafe full of diners. Since then, her family has pursued a case against YouTube, which is owned by Google. The suit alleges that the company, by recommending ISIS videos posted online, aided and abetted in Nohemi's death.

(SOUNDBITE OF ARCHIVED RECORDING)

BEATRIZ GONZALEZ: Nothing is going to give me back my daughter.

TOTENBERG: Nohemi's mother, Beatriz.

(SOUNDBITE OF ARCHIVED RECORDING)

GONZALEZ: Because if there is something that can be changed, a good thing can come out of this tragedy.

TOTENBERG: The change she and others want would be something of an earthquake in the law as it now stands. At the center of the case is Section 230 of the Communications Decency Act, passed by Congress in 1996, when internet platforms were just beginning. In just 26 words, Section 230 drew a distinction between interactive computer service providers and other purveyors of information, like mainstream media companies. Whereas newspapers and broadcasters can be sued for defamation and other wrongful conduct, Section 230 says that websites are not publishers or speakers and cannot be sued for material that appears on those sites. Essentially, the law treats web platforms the way it treats the telephone, and just like phone companies, websites that are host to speakers cannot be sued for what the speakers say or do.

Bottom line - for more than 25 years, the lower courts have uniformly interpreted Section 230 to mean that social media companies are immune from being sued for civil damages over most material that appears on their platforms. At the same time, the law has an apparently contrary objective. It encourages social media companies to remove material that's obscene, lewd, excessively violent, harassing or otherwise objectionable. This week's cases attempt to thread that needle. The Gonzalez family contends that Google, Twitter, Facebook and other social media companies aided and abetted violations of the Anti-Terrorism Act. They allege that the companies did more than simply provide platforms for communications. Rather, they contend that by recommending ISIS videos to those who might be interested, they were seeking to get more viewers and increase their ad revenue.

Lawyer Eric Schnapper, who represents the families, notes that when Section 230 was enacted, the economic model for social media companies was to get subscribers. But today, the economic model is very different.

ERIC SCHNAPPER: Now most of the money is made by advertisements, and social media companies make more money the longer you are online.

TOTENBERG: One way to do that, he says, is by algorithms that recommend other related material to keep users online longer. What's more, he contends that social media executives knew the dangers of what they were doing. In 2016, he says they met with high government officials who told them of those dangers posed by ISIS videos and how they were used to recruit people for terrorist attacks.

SCHNAPPER: The attorney general, the director of the FBI, the director of national intelligence and the White House chief of staff - those government officials told them exactly that.

TOTENBERG: Google general counsel Halimah DeLaine Prado vehemently denies any such wrongdoing.

HALIMAH DELAINE PRADO: We believe there's no place for extremist content on any of our products or platforms, and we've actually heavily invested in human review to actually make sure that that happened as well as using smart detection technology. Now, over the years, that has improved to allow us to quickly detect, review, remove content from known terrorists or designated terrorist organizations.

TOTENBERG: Prado acknowledges that social media companies today are nothing like the social media companies of 1996, when the interactive internet was an infant industry. But, she says, any change in the law is something that should be done by Congress, not the courts. Daniel Weitzner, the director of MIT's Internet Policy Research Initiative, helped draft Section 230 and get it passed in 1996.

DANIEL WEITZNER: Congress had a really clear choice in its mind. Was the internet going to be like the broadcast media that were pretty highly regulated, or was the internet going to be like the town square or the printing press? And Congress chose the town square and the printing press. The Supreme Court now really is in a moment where it could dramatically limit the diversity of speech that the internet enables.

TOTENBERG: Columbia law professor Timothy Wu summarizes the Biden administration's position this way.

TIMOTHY WU: It is one thing to be more passively presenting, even organizing, information. But when you cross the line into actively recommending content, you leave behind the protections of Section 230.

TOTENBERG: In short, hyperlinks, grouping content together, sorting through billions of pieces of data for search engines, that sort of thing is OK, but actually recommending content that shows or urges illegal conduct is another. If the Supreme Court were to adopt that position, it would be very threatening to the economic model of social media companies today. And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two very different things. And the Supreme Court has in recent years made it much more difficult to jump that hurdle. The second case that the court hears this week deals with just that problem.

Nina Totenberg, NPR News, Washington. Transcript provided by NPR, Copyright NPR.

