Moderated Content

The Supreme Court Takes up Section 230

Episode Summary

Earlier this month, the Supreme Court granted cert in two cases concerning the scope of platforms' liability for content on their services: Gonzalez v. Google, about whether platforms lose Section 230 immunity when they recommend content to users, and Twitter v. Taamneh, about whether platforms can be found to have aided and abetted terrorism if they are found to have been insufficiently aggressive in removing terrorist content from their sites. The cert grants were a surprise, and the cases are complicated. Evelyn sat down with Daphne Keller, the podcast’s Supreme Court Correspondent, to dig into the details.

Episode Transcription

Daphne Keller:

Well, the way that the plaintiffs have articulated their claim, I see no way that, if they're right, that search engines continue to exist.

Evelyn Douek:

Welcome to Moderated Content from Stanford Law School, podcast content about content moderation, moderated by me, Evelyn Douek. As a reminder, the community standards of this podcast prohibit anything except the wonkiest conversations about the regulation, both public and private, of what you see, hear, and do online.

I'd like to start today by thanking the Supreme Court for making sure that over the next few years we're going to have lots of our podcast programming taken care of. On October 3, the Court granted cert and agreed to hear two cases on platform liability, Gonzalez v. Google and Twitter v. Taamneh. On top of the challenges to the Texas and Florida social media laws that are headed to the Court very soon as well, the Court could be poised to change a lot of the law in this area and, as a result, what your internet looks like.

To talk through these two cases, I roped in Daphne Keller yet again. Daphne is the director of the Program on Platform Regulation at Stanford Cyber Policy Center and was formerly Associate General Counsel for Google.

Daphne, I'm basically going to have to call you Moderated Content Supreme Court Correspondent soon, I think, given how much I'm guessing we're going to be talking about the Supreme Court in the coming months.

Daphne Keller:

I'll do my best.

Evelyn Douek:

Excellent. Official title.

We're going to be talking about both of these new cases in detail, but the bottom line is that the first case is about whether a platform loses its Section 230 immunity for content that it recommends to users, and the second case is about whether hosting and recommending content can qualify as aiding and abetting terrorism.

Before we dig in, I want to take a step back really briefly. For the listeners who have been vaguely aware of Section 230 before now but have justifiably had better things to do with their time, and are maybe more interested in it now that it's headed to the Supreme Court, can you just give us a broad overview of what Section 230 is and how it works in a nutshell?

Daphne Keller:

Sure. Section 230 is a big, important immunity that protects platforms in cases seeking to hold them liable for speech posted by their users. It doesn't immunize them from everything. For example, it doesn't immunize them from federal criminal claims, which is relevant for this case because there are federal criminal claims under the Anti-Terrorism Act. It doesn't immunize them from intellectual property claims, or a handful of other things. But when it does apply, it is an extremely effective immunity for them. Effectively, the plaintiff is probably going to lose if it is a case about harm caused by speech that a user posted to the platform.

Evelyn Douek:

Okay. In a world currently where we have a lot of dissatisfaction about Section 230 on all sides, can you just explain a little bit about why we might want a rule like Section 230?

Daphne Keller:

Yeah. Despite the widespread dissatisfaction with Section 230, it is still doing a lot of important work. I would argue that it, in particular, is doing important work for the 99.9% of platforms we've never heard of who are benefiting from 230 and able to run preschool blogs or knitting forums or all kinds of other oddball little things on the internet that allow users to post content. They can do that because they don't have to worry that they'll be dragged into court because of something that a user said.

That possibility of having internet platforms or internet-hosted communications at all is really important. If we want to be able to go and use the soapbox that the internet has granted us to speak online, and to have that opportunity to speak be granted to more than just the small handful of elites who dominated traditional communications, then we need, if not exactly 230, then something like 230, to ensure that platforms are willing to go into the business of providing that service.

Evelyn Douek:

Section 230 itself is pretty short. Jeff Kosseff famously called it The Twenty-Six Words That Created the Internet. A lot of the law around Section 230 is judicial interpretation after the fact. It would be useful, I think, before we dig into these cases about how to interpret it now, or how the Supreme Court is going to interpret it, to discuss how it has been interpreted so far, to talk about how broad or narrow it is, and when, currently, a platform will lose protection from liability notwithstanding 230.

Daphne Keller:

That sounds good. I feel like I should also clarify that right now we're talking about the part of 230 that immunizes platforms for illegal content posted by users. There's another part, at issue in the other set of cases likely headed to the Supreme Court, the NetChoice cases, which is the immunity for undertaking to moderate and remove content.

Evelyn Douek:

Great. Yeah. We will come back at the end to talk about how maybe these cases will interact with those NetChoice cases that, as I mentioned, are also headed to the Supreme Court, which-

Daphne Keller:

It's head-spinning.

Evelyn Douek:

Good luck to us when we get to that part of the conversation.

Well, let's stick with the narrower part for now. How has Section 230 been interpreted so far?

Daphne Keller:

In recent years, courts have started narrowing it at the edges, mostly in ways that I think are pretty reasonable. For example, in the Lemmon v. Snap case, Snap had built a speed filter that used your phone to detect how fast you were going and let you post that speed to Snapchat, and tragically, some teenagers using Snap and trying to show off their high speeds in the speed filter had a terrible wreck and died. The parents sued Snap, and Snap said, "No, we have a 230 immunity." The court said, "This isn't a case where any user's speech caused the problem. This is a case where a feature that you built, the speed detector that posts the speed someone is going, that's what caused the problem."

I think that's not unreasonable. Finding areas where it really is something that the platform did, as distinct from something that the user did, that caused the problem makes perfect sense and is supported by the seminal case in this area, a Ninth Circuit case called Roommates, which laid the foundation for saying that if it's something the platform did, then it doesn't have 230 immunity. In the Roommates case, what the platform did was affirmatively ask users to state preferences about gender, sexual orientation, and various other things, in violation of federal and state housing law, things that are illegal for regular realtors to ask about and illegal as a basis for listing and renting housing generally.

Evelyn Douek:

Okay, so that's teeing up perfectly to talk then about Gonzalez, because you've just drawn this distinction between whether the harm comes from what a user has posted or from what the platform has done. Finding that line is the central issue here.

Important to clarify that this is not a constitutional challenge to Section 230. It's about the interpretation of the statute. Section 230 is going to survive. It's just a question of, in what form? Let's start really generally. What is the question teed up for the Court in Gonzalez?

Daphne Keller:

That's actually a little bit hard to answer because I think the question that got teed up is massively complicated. But the way that the plaintiffs frame the question is it's asking whether platforms lose 230 immunity when the claim is based on the platform's own ranking and recommendation of particular user content, in this case, content posted by ISIS members or ISIS supporters.

Evelyn Douek:

I want to ground our discussion and talk specifically about the facts of these cases so that they're a little bit less abstract. I have to say, I was a little bit surprised that these were the vehicles that the Supreme Court chose for deciding these issues because both of the cases, Gonzalez and Taamneh, are about terrorist content, but the causal links here are extremely attenuated.

In Gonzalez, the plaintiffs are the relatives of a 23-year-old U.S. citizen who was murdered in the 2015 Paris terrorist attacks, but the complaint doesn't pinpoint any particular piece of content or feature of YouTube that led to the attack. It's also a case about a 2015 attack, as we said. While we can argue about whether YouTube has done enough since then, YouTube is a very different platform from what it was in 2015 and has become a lot more aggressive in its moderation. So there's that.

Then in Taamneh, it's the family of a victim of a 2017 terrorist attack in Istanbul which claims that Twitter, Google, and Facebook aided and abetted terrorism by allowing the Islamic State on their platforms, but there's no proof that the attacker had accounts on Twitter, Facebook, or YouTube. Again, they didn't identify any particular piece of content that the defendants knew about and failed to take down. It was about other ISIS adherents using those services and whether the platforms had taken sufficiently meaningful or aggressive action.

I'm curious for your thoughts on these fact sets, given that. Was it surprising to you that these were the vehicles that the Court chose?

Daphne Keller:

Very surprising. I don't even have a good theory about why they would choose such exceedingly convoluted cases. Maybe it's just that Justice Thomas had been champing at the bit for so long they finally felt they had to take something, and they didn't realize what a mess of a case they were taking. Maybe there's some interplay with the cert petition in the NetChoice cases, where some justice thought teeing up these issues at the same time would somehow lead to a better outcome. That's a little too galaxy brain for me to even game out exactly who would've thought that and why, but it's a weird choice.

It's a weird choice, I think, in part for the reason you mentioned, that it's very attenuated causation and that it's about facts that are very much part of history at this point in terms of what the platforms do to counter terrorism. There are also some problems or oddities in that the Anti-Terrorism Act here is very unusual, in being a law that is about supporting or conveying speech from particular people or particular groups.

In one of the previous cases that raised a very similar question, the Force v. Facebook case in the Second Circuit, there was a powerful dissent from the late Judge Katz, who did think that Facebook should have been liable in a claim based on ranking and amplifying extremist content. Part of his opinion seemed to be saying the problem isn't about amplifying the content; it's about amplifying the groups. It's about recommending individual users to befriend or individual groups to join, which makes sense, since a lot of the structure of the terrorism laws involves things like the State Department designating foreign terrorist organizations, and once that designation happens, you're on notice about what groups not to support, or in this context, arguably, what groups not to host or amplify on a platform.

That duty to suppress particular content based on who the speaker is, is very unusual. In the normal kinds of claims we would see under CDA 230, like a claim of defamation, the illegality is based on the content of what is being said rather than based on who the speaker is. It almost seems like we could get precedent under this case that doesn't really apply to most of the other 230 issues for that reason.

There are also unique issues about the speech rights of foreign speakers versus domestic speakers. Whatever the precedent is for international terrorism in this case, what does that tell us about domestic terrorism, where there's a whole different set of laws to look at? It seems like an odd case to prioritize.

Evelyn Douek:

Yeah, I just want to note it was Judge Katzmann in Force v. Facebook, but-

Daphne Keller:

Apologies.

Evelyn Douek:

No, of course.

I do want to come back to that question of the possible breadth of a finding of aiding and abetting here, and also the possible ramifications, because I do think there are interesting questions and open questions, and it would be good to get some more clarity on the level of liability that platforms have for terrorist content. I think, as you say, for all the reasons you say, this is a strange vehicle to choose for that, but I do want to come to that.

I want to stick with the amplification part here first, because I think if you ask the average intelligent and well-informed observer of this space, many of them might find a narrower reading of Section 230, one under which it doesn't protect platforms for content that they recommend or otherwise amplify, intuitively attractive, for the reasons we were talking about before: the distinction between what a user does and what a platform does. Now, you've written about this at length, about amplification and why it sounds really good but may be a little bit more complicated in practice. Can you give us a synopsis of your thinking around this?

Daphne Keller:

Yeah. My article on this, Amplification and Its Discontents, is about First Amendment issues, but I think a lot of the reasoning there carries over to the Section 230 policy issues. Basically, if people are familiar with the normal problems with platform content moderation and liability from a normal 230 case that's just about hosting, part of the issue is that platforms are bad at identifying unlawful content. We have scads and scads of research showing that when they are held responsible under law for removing unlawful content, they tend to overdo it and take down a whole bunch of lawful speech.

We have an increasing number of studies suggesting that the way they overdo it has disparate impact, that if they are deploying, for example, a hate speech filter, there's one study of an open-source hate speech filter finding that it disproportionately silenced speakers of African American English, saying that they were engaging in hate speech, as opposed to everybody else. There are concerns about speech suppression and disparate impact that are really concerns about user rights that come up with any content moderation law.

When we shift the discussion to ranking and recommendations, it just moves that set of problems from being a problem for the whole platform to being a problem for the most important and coveted real estate on the platform, namely, the newsfeed on Twitter or on Facebook or the recommendations feature on YouTube, which is an incredible driver of traffic to the people whose videos get recommended. It's a question of whether we want to accept that these problems of overenforcement by platforms become okay as long as people are only being excluded from the most important part of the platform, but still maybe can post in some corner of the platform where nobody goes and looks.

Evelyn Douek:

Right. I think a good example or a relevant example of that kind of overenforcement in this context, perhaps, is the tens of thousands of videos that YouTube took down of human rights abuses in Syria when wanting to moderate violent or extremist content. That one seems like maybe a good harbinger of what might happen in a world where there's extra liability for platforms for this kind of content.

Daphne Keller:

Absolutely. I think in the violent extremist context, particularly if the focus is on groups like ISIS or Hamas, I think we should be very concerned about overenforcement and disparate impact hitting users who are speaking Arabic or talking about Islam or talking about Syrian immigration policy. There's just a set of proxies for ethnic or religious identity that seem very likely to lead to the burden and the over-removal hitting one set of people a whole lot harder than it hits other people.

Evelyn Douek:

Right. Facebook released a human rights impact assessment, I think just a couple of weeks ago, prompted by the Oversight Board, of its content moderation in Israel and Palestine during the conflict there. It found exactly that, that there was a disparate impact on Palestinian and Arabic content. The assessors couldn't speculate as to why that might be. It might be resources, but also, something at play there might be concerns about Hamas and Hamas content, which is something that comes up again.

Daphne Keller:

They couldn't speculate about why that might be? Really?

Evelyn Douek:

Well, I think they have... There were limits to their speculation, I guess.

Daphne Keller:

Seriously, the Israeli government had been pressuring Facebook and other platforms tremendously to take down more speech that counts as terrorist under Israeli law. So had European governments. Theresa May, six or seven years ago, was vociferous in saying platforms need to do more. There's just been this massive amount of government pressure, and government pressure motivated by really serious fears of people dying, terrible security threats. But I think that pressure is absolutely the reason that we see this sudden uptick in platform enforcement efforts and the corresponding sudden uptick in problems like the removal of the Syrian Archive videos and the over-removal of Palestinian content documented in that report.

Evelyn Douek:

That said, there are concerns about recommendation algorithms. We've had lots of stories, and the empirical evidence is still unclear as to the extent to which the radicalization thesis holds up: the idea that if someone is watching borderline content, the recommendation algorithms will feed them more and more of that kind of content. Zeynep Tufekci gives the example that if you're looking for vegetarian content, you'll end up watching a whole bunch of vegan proselytizing videos.

If we are thinking about a ruling where we might want to contain the damage that a court might do to platforms, the Court might think platforms should have more liability than they currently do for those intuitive reasons. There's the distinction between hosting content and actually shoving it in front of users. But we also don't want to implicate basically everything that the internet is, because almost everything, as you said, the newsfeed or the up next feed, or whatever it is, your Twitter feed... TikTok is basically a recommendation algorithm.

If we want to try and think of some limiting principles around how we might distinguish between recommendations, that question of what the user does and what the platforms do, is there any way to think about how we might draw that line? As you're watching this bubble up to the Court and you think maybe the Court does want to narrow 230, there's been some speculation that that's the case, is there anything that you think that they could do that wouldn't overdo it?

Daphne Keller:

I tend to think about this the same way I think about ordinary content liability for hosting, where the framework is basically, the more we think the platform is capable of identifying illegal content, the more reasonable it becomes to put some obligation on them. Then you add a factor of the more terrible and dangerous the content is, maybe the more we're willing to tolerate the risk of over-removal in order to get rid of particularly dangerous content. That's not the kind of thing that American judges are allowed to say out loud, but judges in the rest of the world can articulate balancing considerations like that, that sacrifice a little bit of speech at the margins.

I think part of the claim, if you unpack the claim about platform responsibility for ranked and amplified content, I think the most legally persuasive version of that claim is the ranking exercise shows that the platforms do have a greater ability to identify the unlawful content. The thing they are doing here shows that they could do a better job of taking content down. I think that's actually just not true.

Evelyn, you've done a lot of writing on GIFCT, the Global Internet Forum to Counter Terrorism, which I understand the kids today are calling GIFCT-

Evelyn Douek:

No.

Daphne Keller:

Yeah.

Evelyn Douek:

Ouch.

Daphne Keller:

We will resist, but-

Evelyn Douek:

I'm showing my age. I still call it GIFCT. There you go.

Daphne Keller:

This is the mechanism that platforms adopted to proactively search for and take down terrorist content, and it does not seem like it works all that well. Mostly, the problem is that nobody knows how well it works. It uses a database of hashes, or fingerprints, for images or videos.

There's also a list of URLs that have been identified by some platform that's part of the consortium as violating that platform's terms of service, and then all the other platforms can choose to use these shared lists to catch duplicates of that content and, in theory, to assess it against their own terms of service, or perhaps in practice, if they are busy and can't hire 30,000 moderators to do things like this, maybe they just go ahead and take that stuff down.

Part of the problem there is that at no point does somebody do the legal analysis to decide what's illegal content. It's nothing but platform-discretionary rules at every stage in the process, including the stage where things get contributed to the filtering database. If that's the best they can do when they're actively trying, I don't think there's something that comes out of the technology used for ranking that gives them any better information.
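To make the hash-sharing mechanism Daphne describes a bit more concrete, here is a minimal editorial sketch of how a shared hash database can catch re-uploads of previously flagged content. It is an illustration, not a description of GIFCT's actual implementation: real systems use perceptual hashes designed to match re-encoded or slightly altered copies, while this sketch uses exact SHA-256 hashes for simplicity, and all function and variable names are hypothetical.

```python
# Minimal sketch of hash-database matching, loosely modeled on the mechanism
# described above. Assumptions: exact SHA-256 hashes stand in for the
# perceptual hashes real systems use, and the shared database is just an
# in-memory set. All names are hypothetical.
import hashlib

shared_hash_db: set[str] = set()   # hashes contributed by consortium members
shared_url_list: set[str] = set()  # URLs flagged by some member platform

def fingerprint(content: bytes) -> str:
    """Hash the raw bytes of an image or video file."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A member platform adds a hash after deciding the content violates
    its own terms of service -- note that no legal determination happens here."""
    shared_hash_db.add(fingerprint(content))

def matches_database(content: bytes) -> bool:
    """Another platform checks an upload against the shared database."""
    return fingerprint(content) in shared_hash_db

# Usage: platform A contributes a flagged video; platform B catches a re-upload.
flagged_video = b"...raw video bytes..."
contribute(flagged_video)
if matches_database(flagged_video):
    # In practice the receiving platform might re-review against its own
    # rules, or simply remove the duplicate.
    print("duplicate of previously flagged content")
```

Note that with exact hashes, even a one-pixel change produces a different fingerprint and the re-upload would be missed; real perceptual hashes narrow that gap, but nothing in this pipeline involves a legal judgment about the content, which is the structural problem Daphne points to.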

Actually, the last time I looked into this, and I think this is probably still true, so much of what goes into ranking isn't about understanding what the content of the video is. It's about personalization, understanding what pattern of users have liked things like this in the past and how much those other users resemble you and your history of selecting things on the platform. I think the recommendation is fueled much more by the pervasive behavioral tracking of users than by understanding something about what is in the video.
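As a rough editorial illustration of that last point, that ranking can run almost entirely on behavioral signals, here is a minimal collaborative-filtering-style sketch: it recommends videos based on which other users have similar engagement histories, and at no point does it look at what is actually in any video. The data and names are hypothetical, and production recommender systems are far more elaborate, but the structural point carries over.

```python
# Minimal sketch of behavior-driven recommendation: ranking is computed
# entirely from which users engaged with which items, with no understanding
# of the videos themselves. All names and data are hypothetical.
from collections import defaultdict

# Engagement log: user -> set of video IDs they watched or liked.
engagement = {
    "alice": {"v1", "v2", "v3"},
    "bob":   {"v2", "v3", "v4"},
    "carol": {"v3", "v5"},
}

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two users' engagement histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str, top_n: int = 3) -> list[str]:
    """Score unseen videos by the similarity of the users who engaged with them."""
    seen = engagement[user]
    scores: dict[str, float] = defaultdict(float)
    for other, items in engagement.items():
        if other == user:
            continue
        sim = similarity(seen, items)
        for video in items - seen:
            scores[video] += sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))  # e.g. ['v4', 'v5'] -- no video content was inspected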

Evelyn Douek:

Yeah. It's worth saying, as well, that if the rule is tied to technical capacity or the type of content, you said judges are not well placed to be doing that kind of thing, for all of the expertise issues. It's also that this is bubbling up to the Supreme Court now after decades, and it's something that moves really fast; the speed of the justice system isn't really able to keep pace with the changes in technology. Given your experience at Google, I'm wondering if you could just say a few words about what liability for recommendations might mean for a search engine.

Daphne Keller:

Well, the way that the plaintiffs have articulated their claim, I see no way, if they're right, that search engines continue to exist. There's not a good-

Evelyn Douek:

Good thing we don't use them for anything.

Daphne Keller:

Right. A search engine is nothing but a mechanism for discriminating between content, prioritizing and recommending which content is most relevant.

Now, there are distinctions you can make if you want to save search engines while damning Facebook, or whatever division you want to make. One of those distinctions is between a push versus a pull. As a user, when I search for penguins on Google, I'm requesting some ranked recommendations. I'm pulling the content to myself. By contrast, you could say that things like YouTube recommendations or things like the ranking in a Twitter or a Facebook newsfeed are a push, the platform suggesting things to you that you never even asked for. I think it's possible to make that distinction.

Realistically, there's a continuum between those two things. What Facebook and Twitter are really trying to do at any moment is act like you just asked for something that will make you stay and keep looking and be happy. It's as if you'd submitted a query that said, "Show me something I'm interested in," and that's what they're responding to in the ranking.

Evelyn Douek:

Right. Then you have a situation where federal court judges are weighing in on questions like: if someone subscribes to a particular YouTube channel and YouTube responds by giving them those videos, is that pushing or pulling? Or if a user uploads or downloads a bunch of content, is that pushing or pulling? Which, again, doesn't seem like the best way to handle this whole situation.

Thinking of unintended consequences, other than shutting down search engines, you've talked a bit about how people maybe don't realize what they're asking for when they ask for these kinds of changes in liability. In particular, this is a bugbear of conservatives and the Republican Party at the moment, to get rid of Section 230 or significantly narrow it, in part because they're angry at platforms for doing too much content moderation. Just to ask, is this the most effective way of getting what you want? If you want platforms to do less content moderation, is significantly narrowing Section 230 a good idea?

Daphne Keller:

Yeah, so this gets back to who wanted to take this case and why, because if you are Justice Thomas and you are worried that platforms are suppressing conservative speech, maybe you're more interested in taking the NetChoice cases coming out of Texas and Florida that are an opportunity to speak to that issue.

A change in 230 immunity for illegal speech posted by users, the generic prediction of what that leads to is either platforms going way to one extreme and being extremely careful and taking down way too much content, which is what I described potentially happening in a newsfeed if the loss of immunity is just for the ranking in a newsfeed; or at the other extreme, to avoid liability, a platform might just throw up its hands and stop moderating altogether so that it can't be charged with editorial responsibility or knowledge, or whatever, about what's on the platform.

I don't think anyone actually likes either of those two extreme outcomes. I think most people listening to this podcast are probably internet users who like the fact that this podcast could go out without some lawyers, other than the two of us, reviewing it and like the fact-

Evelyn Douek:

Yeah, we absolutely do not accept any liability for anything that gets said on this podcast.

Daphne Keller:

Yeah, most people like being able to post content on the internet, and I think most people don't want to live in the sewer of platforms that don't moderate at all and allow all of the hate speech and barely-legal porn and beheading videos, just horrific stuff, to flow freely. I don't think we want a world where 230 goes away and those are the two options for newsfeeds.

Evelyn Douek:

Okay, so more speculating on who wanted this case and why, and why now. Let's talk a little bit about a procedural quirk here and speculate about why the Court took this case now. Very often, the Court will only grant cert when there's something called a circuit split, which is where different federal circuit courts have come to different conclusions. Technically, that's not what has happened here. Maybe you could just talk a little bit about what's going on in the courts below that made the Supreme Court think this issue was ripe and that they should take the case now, even though there's no technical circuit split.

Daphne Keller:

Yeah, it's very strange. First of all, a whole bunch of circuits have said that search engines are immunized by CDA 230. To the extent that ranking things might take you out of CDA 230, it would seem that all those cases go against that proposition. Then even for the circuits that have really closely looked at this question, which now are the Second Circuit and the Ninth Circuit, they both came to the same conclusion, which is that the platforms were immunized against claims about ranking of the sort brought in these cases.

The plaintiffs' argument, which apparently worked and persuaded the Court, is essentially, "Well, there would have been a circuit split if the cases had arrived in a different order." That's because in the Ninth Circuit there was a previous case called Dyroff that rejected a claim like this, a claim that platforms' amplification or ranking mechanisms took away 230 immunity. Two of the judges in the current case, in Gonzalez, said if they weren't bound by the precedent of Dyroff, then they would have accepted the plaintiffs' claim and would have found that CDA 230 immunity does not cover ranking and amplification.

What the plaintiffs say is, "Well, it's just a coincidence that the first Ninth Circuit panel to look at this ruled the other way. If these cases had come in a different order, then our panel would've ruled in our favor, and that's effectively a circuit split," which is weird. That's not most people's definition of a circuit split, especially since the Ninth Circuit was asked to review this ruling en banc and declined to do so, but that is the way that they are trying to tee this up.

Evelyn Douek:

Okay, so in part, that is because the Court has been champing at the bit, or at least some justices have. You mentioned Justice Thomas has been eager to get something like this coming up. With the caveat of I don't want you to get into too... I don't want to put you on the spot with too much tea leaf reading. Can you talk about what we know or don't know about the justices and where they might be sitting on this? You've already said you have no idea. But what has Justice Thomas said, and do we know anything about anyone else?

Daphne Keller:

Well, the most interesting thing I think that we know about someone else is that Kagan was the solicitor general on the government side in a case called Holder v. Humanitarian Law Project, which was the last big case about the Anti-Terrorism Act and the First Amendment. There's this funny piece about a liberal justice potentially having a particular interest in shutting down or getting platforms to shut down speech that is dangerous in this particular way, in the way that relates to terrorism. Maybe that played into somebody's thinking about taking this case.

Beyond that, I think we know that Clarence Thomas doesn't like anything about 230 that includes immunity for platforms for illegal content, but what he tends to talk about more is wanting platforms to be compelled to carry conservative voices more and prevented from "censoring" users. I think Thomas is a safe vote against platforms generally.

Then beyond that, I don't know what to predict. Yeah, I don't know what to predict. I'm not going to take the bait here.

Evelyn Douek:

Fair enough. Actually, Humanitarian Law Project is a really good transition to talking about the other case here, which is Taamneh. Humanitarian Law Project is infamous for being quite a surprising and broad ruling, finding that speech by an NGO that was training a designated foreign terrorist organization on peaceful activities could be found to be material support of terrorism. That's one of the things I do have a concern about in this context. Holder v. Humanitarian Law Project suggests that the Court is not as solicitous of protecting free expression in this context, the context of terrorist speech, but that doesn't necessarily mean that the ruling won't have broader applications.

I do want to spend some time talking about Taamneh because I think it could be a really important case with quite broad ramifications, but it really is flying under the radar, I think, a little bit with all of the focus on Gonzalez. I think somehow the words Section 230 have become clickbait, which means that everyone is focusing on that. We've talked about it a little bit, but just to remind listeners as we dig in, what's the question in Taamneh, and how does it relate to the claim in Gonzalez? There is this interesting interplay between the two cases.

Daphne Keller:

Yeah, so the two cases started out quite similarly. They're both claims based on horrific acts of terrorism and murder, one in Paris and one in Istanbul, and both of them involved the amplification or ranking of content. In Taamneh, the issue that wound up procedurally going up to the Ninth Circuit was not the 230 issue, but instead the question of liability under the ATA, the Anti-Terrorism Act, regardless of 230, whereas in Gonzalez, the 230 question wound up being teed up. If, in Gonzalez, the holding is that the 230 immunity applies, then it doesn't matter what the answer to the Taamneh question would be about liability under the underlying law. If the 230 immunity goes away or is limited, then the question about liability on the merits of an Anti-Terrorism Act claim comes up.

I am worried about that for a number of reasons. One thing is that the Ninth Circuit below indicated that offering a generally accessible platform which then gets used by some ISIS members, although not the ISIS members who carried out the attack in that case, somehow creates a sufficient causal nexus to sustain liability. That's an idea that has been rejected in a number of other cases.

Then the other part that I find even more worrying, and this gets back to our discussion about knowledge earlier, is that the court said that Twitter could be charged with knowledge for Anti-Terrorism Act purposes even though it's unclear what there is to know here. They certainly didn't know about any specific content that led in any way directly to the harm at issue in this case. They took down any ISIS content they became aware of during the relevant period. The claim is that because they had been told, "Hey, there's other ISIS content on the platform," but had not adequately found it and taken it down, that put them in a state of having enough knowledge to support liability in this case.

That's the part that really worries my wonky soul, because it makes the case, potentially, a referendum on how much content moderation or how much affirmative searching for illegal content we should expect of platforms. Those are really important questions. They're questions that EU policymakers have been wrangling with for years, taking all kinds of evidence and reaching all kinds of conclusions, and there has been litigation saying that, under EU law, mandating affirmative content moderation efforts too broadly is the equivalent of a First Amendment problem. To have that whole mess of really important questions come here under the guise of being a question about knowledge under an unusual statute, and go to the Supreme Court with almost no factual record on any of these questions, seems like a terrible way to resolve any of these issues.

Evelyn Douek:

I want to underline a couple of things you said because I think they're really important. The necessary mental element is really important here. To be clear, it's not just saying that if you have terrorist content on your platform, you are necessarily aiding and abetting. There's this knowledge requirement, that you have to be knowingly aiding and abetting. But the risk here is that the knowledge requirement will be read really, really broadly. I think in Gonzalez, they're citing things like the fact that there were congressional hearings and lots of media stories about terrorist content on platforms that should have made platforms generally aware that this was a problem and should have made them do more. But "doing more" or "doing better" is not a standard that gives platforms any clarity about what they should actually do.

You mentioned the factual record, and so I just want to pick up on that, because I think this is another thing that's interesting: how many steps we are away from actually finding that platforms were liable in these particular cases. It's my understanding that we're talking about these interpretive issues here, but depending on how the Court decides, it's not like they're going to come down and say Twitter is already liable in this case for aiding and abetting; the cases are going to be sent back down to the lower courts for factual findings. Is that your understanding as well?

Daphne Keller:

Yes, that is my assumption. I think that the justices will be cognizant of what a common word 'knowledge' is in the law, how many statutes there are out there that predicate liability on knowledge for things ranging from copyright infringement to the worst of the worst, like child sexual abuse material on the internet, to things that have nothing to do with the internet, like whether a landlord knows that his tenants are dealing drugs out of his house. A ruling here that arrives at a weird concept of knowledge, where it instead means should have known, could have known, could have done more, could have done a specific thing more, this whole range of mushy things, if we say that's what knowledge means in this statute, that has ramifications for an unknown number of other statutes.

Evelyn Douek:

Right. It's the same, I think, with aiding and abetting, because aiding and abetting is a phrase that appears in many other contexts. We have states passing a bunch of laws criminalizing aiding and abetting abortion, for example, so that is another area. Those are just two things that jump to mind as potentially very broad ramifications of finding aiding and abetting in this context.

That could also be the case if Section 230 remains broad, because, as you mentioned at the top, there are federal criminal laws that platforms aren't immunized against, even with the current state of Section 230. Maybe in some sense we should be hoping for a terrorism speech exception, which in some sense is bad, but that seems to be what Humanitarian Law Project stands for, this idea that there's a carveout for foreign terrorist organizations. I don't know. I don't know how you think about the potential broadness of what might happen in this case.

Daphne Keller:

Yeah, with a terrorism speech exception, you might limit the damage somewhat by not changing the meaning of 500 other statutes that nobody's noticed. It's worrying to me because the organizations designated under U.S. foreign terrorism law are overwhelmingly Islamist organizations, so the specific kind of speech and the specific people who get silenced through overzealous enforcement skew accordingly. I think it does create this specific risk of disparate impact, as I talked about before.

Also, the speech at the margins that gets taken down might be speech about legitimate grievances or illegitimate grievances, or arguments about what grievances are and are not legitimate. It just touches so closely on really important political questions. The speech potentially being silenced there is particularly, and again, this isn't a U.S. concept, particularly high value, particularly important to democratic goals.

Also, I think, and I wrote about this in a piece called Internet Platforms a few years ago, I'm not sure that I buy that we are all safer in a world where platforms are being incentivized to go out and do this overzealous enforcement, because so much of the research on radicalization indicates that the people at the greatest risk of becoming violent extremists feel marginalized, excluded, disrespected by the society they live in or by the society that they target. Having the most powerful platforms in the world silence you improperly because you're a teenager in Brussels speaking in Arabic about immigration policy, that is not a way to stop people from feeling marginalized or excluded or mistreated. I'm not sure that the downside of overzealous enforcement and what it does to the affected people is something that we've thought about carefully enough.

Evelyn Douek:

Yeah. When I said we should be hoping for a terrorist speech exception, I meant, obviously, in a world where... In a bad world, maybe that would contain the damage of this, but I completely-

Daphne Keller:

Right.

Evelyn Douek:

... agree with everything you just said. As a foreigner myself, I have lots of issues with the way that the Court has generally completely undervalued foreign speech in a lot of cases.

Do you have a prediction on this one then? I tend to think that a finding here of liability or potential aiding and abetting, it would be so extraordinary, for all of the reasons that we've discussed, that this might be an opportunity for the Court to play good cop, bad cop, say, "We're going to narrow Section 230. It's been too broad. When a platform recommends and pushes content to users, they could potentially face liability in certain circumstances, but the knowledge requirement needs to be more explicit. There needs to be a stronger causal link. This doesn't qualify as aiding and abetting."

To me, it seems crazy because of all of the potential ramifications of this. We talked about all the different statutes and all the different contexts where this might apply. That would be quite an extraordinary ruling. I'm wondering if you agree or if you're more willing to read the tea leaves on this one.

Daphne Keller:

I have a fantasy where they realize how complicated this is, and snarly, and how little it resolves the questions they really wanted to speak to, and they decide that cert was improvidently granted and they don't review these cases at all. That's not a-

Evelyn Douek:

A fantasy or a prediction? Yeah, exactly.

Daphne Keller:

It's more of a fantasy, but it's more likely in this case than most, I would say, because I think whoever made the decision to accept this probably didn't understand the sprawling ramifications as much as they would in a case that's more familiar territory for the Court.

Supposing they keep it, I certainly think some kind of baby-splitting solution sounds plausible. I'm going to jinx myself and be proven wrong, but I can't imagine the Court buying into the idea that a knowledge requirement is met in a case like this. I don't see how they get to ATA liability, but they might do some real damage over in CDA 230 land. I guess that's what I think is most plausible.

Evelyn Douek:

Speaking of snarly issues that maybe people aren't appreciating the complexity of, let's go back to the NetChoice cases and try and put this puzzle all together. I know you like puzzles, Daphne, so this one is perfect for you.

We have, on the one hand, a possible outcome in these cert cases where platforms are suddenly liable for a lot more content on their services, including content that they recommend, whether that's, in this particular case, terrorist content, or just more generally, that recommending content means you are now potentially liable for that content. Then on the other hand, in this wonderful podcast a couple of weeks ago, we talked with Genevieve Lakier about the NetChoice cases, where states are trying to force platforms to host more content, and the Supreme Court may be interested in upholding those laws. Now we have a situation where platforms could be liable for taking content down.

Is there any way to reconcile these two sets of orders for the platforms if these both come... This is a pretty extraordinary world where a lot of things would have to happen. The platforms would have to lose in every single one of these cases for this world to exist, but maybe that's the world that we live in. If, hypothetically, and I'm not sure you've had any experience of trying to comply with crazy laws, if you were in the situation of having to work out how to comply, what could platforms do? How would they thread this needle?

Daphne Keller:

This is one of those three-dimensional wooden puzzles where you try and try to jam the pieces together into a coherent shape and they just won't jam together. The Texas law does have... It requires platforms to be viewpoint neutral in a lot of their content moderation, but it does have an exception where they can take down content that is illegal. Weirdly, it has a notification process for users to tell platforms when they think something is illegal, although other than responding to the notifier, the platform doesn't have any particular next step it is supposed to take.

A platform that either removes content based on a loosey-goosey belief that maybe it's illegal or takes it out of ranked newsfeeds or recommendations based on that same loosey-goosey belief that it's illegal, maybe would fare okay in Texas court saying, "Oh, I thought I was just going to do this. I was just taking down illegal content." Maybe particularly, I'm thinking out loud here, which is always risky, maybe particularly if the error is mostly falling on people outside of Texas and, indeed, outside of the United States, so there isn't somebody being harmed by over-removal who is in a position to bring a claim in Texas.

Evelyn Douek:

Excellent. That's-

Daphne Keller:

Or not.

Evelyn Douek:

Yeah. Who knows what's going to happen in this mess? I feel like there's no better place to leave this than the image of Justice Thomas trying to jam pieces of a wooden 3D puzzle together in his chambers, maybe getting his clerks to help. Thanks very much, Daphne, for working through all of this with us, and I'm sure that there will be plenty of opportunity to keep trying in the coming months.

Daphne Keller:

Thanks for having me.

Evelyn Douek:

This show's catchphrase that I'm definitely trying to make happen is that everything is content moderation, so that means that the next episode could be about anything. We are available wherever good podcasts are hiding, including Apple Podcasts and Spotify, and transcripts are available at moderated-content.simplecast.com. If you could take 15 seconds to rate the show or 45 valuable seconds to review it, that would be amazing and I'd be very appreciative. If you have ideas for guests or topics that you want to hear on the show, let me know.

This episode of Moderated Content was produced by the brilliant Brian Pelletier. Special thanks to Alyssa Ashdown, Justin Fu, and Rob Huffman.