Moderated Content

Tech Law SCOTUS Superbowl Second Half: Taamneh

Episode Summary

Evelyn speaks with Moderated Content's Supreme Court correspondent Daphne Keller again to discuss the oral arguments in Twitter v. Taamneh, and the big elephant that was missing from the courtroom.

Episode Transcription

Speaker 1:

They warn you when you're coming in that if you are disruptive, you will spend the night in jail.

Speaker 2:

No way. Really?

Speaker 1:

Really.

Speaker 2:

Well, free speech. There you go. "If you say anything, we will lock you up." That is the Supreme Court for you.

Hello, and welcome to Moderated Content, podcast content about content moderation moderated by me, Evelyn Douek. We are here to discuss the second half of the tech law Supreme Court Superbowl this week, which was the Supreme Court hearing in Twitter v. Taamneh on Wednesday, which had to do with whether a platform can be held liable for aiding and abetting terrorism under the Anti-Terrorism Act for failing to remove all ISIS content from its platform. We have, of course, our Supreme Court correspondent live from the scene, Daphne Keller, joining us again.

Daphne, I feel like this has all of the twists and turns and highs and lows of a good Super Bowl in some sense. We last spoke after Gonzalez, and I think someone said to me that we sounded pretty optimistic after that case, because we had gone in with these huge fears about the Supreme Court breaking the internet, and then it sort of seemed like they were also scared about breaking the internet and so might be less receptive to the plaintiff's argument. And so then I think I made some prediction that, oh, well, clearly they weren't going to find liability in Taamneh and so it's okay, maybe this will all just go away, and then Taamneh happened and I was not feeling so optimistic after those hearings. I'm curious how your emotional journey has been and how you found the Taamneh hearings.

Speaker 1:

Yeah, so first everyone should know I am in fact standing in front of the Supreme Court with a microphone right now, because that's what Supreme Court correspondents do. I'm a natural-born pessimist, so I didn't leave Gonzalez with too much optimism, but there were definitely any number of people coming out of there saying, well, the justices sure did seem skeptical of the merits claim, the Anti-Terrorism Act, ATA, claim in the other case, Taamneh. And so we expected plaintiff to really be given a hard time the next day in that hearing, and maybe the ruling will just be that the platforms win on Taamneh, and then the court doesn't even need to answer the 230 questions in Gonzalez because they will be moot and the case can be dismissed as improvidently granted.

So we went into Taamneh. Going in, it was scheduled for 60 minutes. We were doing some over under bets in the line about how long it would run. I said an hour and a half, which was way more than anybody else was guessing. And then it actually went two and a half hours. And it was not because they were giving plaintiff a hard time, or at least not entirely because they were giving the plaintiff a hard time. They had a lot of hard questions for the platform's lawyer, Seth Waxman, and he didn't necessarily have great answers for those questions.

So I think by the end, I felt like enough justices were expressing skepticism about plaintiff's claim that I still think it's quite possible that plaintiff just loses the merits claim under the ATA in Taamneh. But it is harder to say now, and I'm less optimistic than the mid-level pessimism I was experiencing on Tuesday.

Speaker 2:

Okay. I think we're both pessimistic about what a bad ruling in this case would mean for online free expression. And I think it's important to be clear about the facts here, because this does come up later as well. In this particular case, there is no proof that the attacker in the Reina nightclub attack that was at issue here, where the victim lost their life, ever had an account on Twitter, Facebook, or YouTube. There was no particular piece of content that was identified that Twitter had failed to take down, no claim that they knew about certain accounts or anything like that. It was more that they failed to do enough. And so there's this really broad understanding of aiding and abetting if that's found to be enough.

But even if you don't sort of worry too much about the terrorist speech, there are other sort of speech interests here at stake. And Daphne, this is something that you've written and thought a lot about and I think it would be useful to set out those stakes at the top of the conversation so that people understand going in. So what are the free expression interests here at stake that weren't in the courtroom when this was discussed on Wednesday?

Speaker 1:

Sure, they're pretty straightforward. Basically, the more platforms face legal risk for user speech, the more they are incentivized to protect themselves by taking down both illegal speech and other speech that only might be illegal, and the more user speech rights are compromised. And we know from mid-century Supreme Court rulings about bookstores that if the government goes too far in incentivizing an intermediary, in that case a bookstore, to silence the speech it's carrying, in that case the books for readers to read, that can actually violate the First Amendment. And it's not that it's violating the First Amendment rights of the intermediaries, like the rights of the platforms in this case or the rights of the booksellers in that mid-century case, which is called Smith v. California; it's about the rights of the users, of the people who write the books and the people who read the books and the people who post speech on the internet and the people who want to read speech on the internet.

All of those ordinary people suffer, and suffer a First Amendment harm, if the law leads platforms to go too far in silencing them. And that's just an extremely basic underlying concern in intermediary liability law. And it's also one that, almost always, everywhere in the rest of the world and in the US for copyright purposes, gets litigated through cases that are about the same things this case was about, namely the platform's degree of knowledge and their degree of contribution to the harm in the case. So it's wild to me that speech interests were barely brought up, particularly because this is a case about so-called terrorist speech, and certainly there are posts by ISIS or recruiting efforts by ISIS that may well be sources of liability under US law and might be what we could call illegal speech.

But if platforms are trying to take down the terrorist speech and they are risk-averse, so they're taking down everything sort of adjacent to it that might create liability, that's a bunch of core political speech that gets hit. You start seeing the kind of overremoval situation the Syrian Archive had, where these carefully preserved videos that were intended for future prosecution of human rights abuses got deleted by YouTube, presumably because they thought, "Oh, this looks like terrorist speech, we're in danger, we need to take it down." You get situations where platforms deploy technical filters that can't tell the difference between an ISIS video posted by ISIS for recruitment and that same video used in journalism or in scholarship or in counter speech.

You get situations where, if platforms don't have the resources on hand to have a nuanced understanding of Arabic, for example, or Chechen, their incentive is to just go overboard in silencing anyone who's speaking those languages or anyone who's talking about Islam. So there's a big, conspicuous speech issue if the outcome of this case is to incentivize platforms to go too far in trying to protect themselves by removing user speech. And it was just not present in the courtroom, in the discussion, at all.

Speaker 2:

It was quite remarkable. There were these long conversations about the appropriate analogy: what if a terrorist is using a bank account? Is this like a bank, and should a bank know if that terrorist is using that money to plot an attack, or is this like a restaurant where a restaurant is serving a terrorist a meal? And no one at any point said, yes, but there's one big distinction in this case, which is that the assistance is about speech. Do you have any theories about why? How did this happen?

Speaker 1:

Well, I think it had a little bit to do with briefing deadlines, because for the Gonzalez case, the briefing deadline was January 18th and it got 48 briefs, I think that's the right number, on the side of the platforms, and many of them were about speech issues as those speech issues arise in the 230 claim or in the possibility of eliminating 230 protection for ranked content. Taamneh had a briefing deadline that was originally, I think, two days after Thanksgiving, and then it got moved a little bit later. But very few organizations that are concerned about speech rights scrambled fast enough to get something in on time. Also, not that many American organizations are really cognizant of the speech problems here, or used to dealing with claims, like tort claims, that are about knowledge and material contribution. And so even the entities that could very easily speak to that, European organizations or international human rights groups that have gone through fights about these exact same questions in other countries, wound up filing later.

So there's a brief in Gonzalez from Article 19, the international human rights organization, and from David Kaye, the former UN free expression Rapporteur, raising a lot of these problems and pointing out specifically the negative speech consequences that have come up when platforms do try to go out and eliminate terrorist speech or purge their platforms of anything that might be terrorist speech. But that's a brief in Gonzalez. Other than, I think, the Center for Democracy and Technology, God bless them, who did get in a brief in Taamneh, the court didn't hear this outpouring of concern in that case the way that it did in Gonzalez.

Speaker 2:

Yeah, it was remarkable. Okay, so let's talk about what the justices did talk about then, and you've sort of adverted to a few of these already, but one of the big things that a lot of time was spent talking about is the level of knowledge that's required. And there was all of this playing around with the different facts about what level of knowledge, or what level of generality, a platform has to have knowledge of in order to be found liable for aiding and abetting.

So Justice Thomas right out of the gate had this hypothetical of, "If I have a friend that's a burglar, he's a murderer, but otherwise a very nice guy, and he asks me, I've got some business to do, can I have a gun? Is that sufficient? But you don't know what he's going to use the gun for. Is that sufficient knowledge?" And then Justice Sonia Sotomayor was playing with this idea of willful blindness and concerns about, if a platform turns a blind eye to how its platform is being used, is that going to be sufficient for knowledge? So what did you hear from the justices in terms of how they're thinking about this knowledge standard that might be applied here?

Speaker 1:

I heard a whole lot of frustration, and I think part of that frustration is with the test that Congress said it wanted courts to use: it said to apply this multifactor test under a case called Halberstam. And Justice Gorsuch in particular was literally urging counsel to give him a way to reject Halberstam and have a simpler test. But also, the key elements that the parties needed to argue about were, number one, knowledge and, number two, substantial assistance, and then possibly a number three, assistance to whom? Like, who is the entity you're assisting? And they kept getting answers, particularly from the platform's lawyer, Seth Waxman, that conflated those things. And Amy Coney Barrett in particular kept saying, "I'm not talking about knowledge. Give me an answer that's just about substantial assistance." And then she would get an answer that brought in knowledge. And that's somewhat understandable. There are practical reasons those things become intertwined, but they really are different and they matter in somewhat different ways.

And the sort of frustrating thing, I think, for anyone in the courtroom who is a criminal defense lawyer or anyone in the courtroom who's a copyright lawyer in the US is, people in those fields argue about these topics all the time. And in US copyright law, they get argued about in the context of platforms and with clear acknowledgement that the results will affect user speech. And we talk about the range of things that knowledge might mean. In some countries that have platform liability based on knowing about unlawful content, Argentina and India, the Supreme Courts looked at this question. They had a knowledge standard for platform liability, and they said if any allegation can cause the platform to have culpable knowledge, then platforms will just go out and take down too much content and cause all these problems I described before.

And so we think, in the context of platforms, for some or all claims, the platform only has knowledge that something is illegal once a court has ruled that it's illegal, with the defendant getting due process and so forth. So knowledge could mean you don't have knowledge sufficient to have to take things down until a court order issues. Now I think most people don't want that to be the standard for everything. Obviously, I think you wouldn't want that to be the standard for child sexual abuse material, for example. And we might all have different ideas about which things should and should not, as a matter of policy, require court orders, but that's a thing knowledge could mean. Or knowledge could mean, really, anytime you get an allegation, even if it's implausible, then you have knowledge, then you've lost your immunity.

And Justice Gorsuch, and well, all of them, but Justice Gorsuch in particular, kind of kept coming back to this and saying, at some point the claim of knowledge is too attenuated and it just can't be right to claim that the platform has culpable knowledge, and please help me understand when that point has been reached. And Seth Waxman gave him an okay answer. I can't remember how he formulated it. It was something like, if enough information has been presented that reasonably the platform should know, or something like that, though it didn't have the word reasonable in it. And that's not too far off from how the DSA, the Digital Services Act in the EU, for example, has defined this.

I think theirs is, I can't remember the language, but it's substantiated allegations or something like that. It is not a new exercise in platform law, and it's not a new exercise in tort law, to try to figure out what knowledge means in a particular circumstance. But none of the precedent or understanding or connection to user speech rights that should be relevant for that seemed to be on display at the court this week.

Speaker 2:

And it was in this context that one of the more remarkable exchanges, or concessions, from Seth Waxman occurred, when he gave an example he just volunteered entirely himself, surprisingly I think: if the Turkish police, the Istanbul police, came and said, here are 10 Twitter accounts that appear to be involved in planning some sort of terrorist attack, and then Twitter said something like, oh, people do all sorts of things, and refused to take them down, that would be a culpable standard of knowledge. And this again is, like, there is a vast literature on why we should be extremely concerned about platforms just taking things down if a government actor comes and says, hey, take these 10 accounts down, and in fact, right as this is happening, in Turkey the Turkish government is prosecuting journalists for terrorism.

And just last month, Republicans held six hours of hearings on the Hill about fears of Twitter's too-close relationship with the FBI over the Hunter Biden laptop story. It really surprised me that Waxman gave this concession that a law enforcement officer coming and saying, you should take these accounts down, would be sufficient knowledge for aiding and abetting liability. What did you make of that exchange?

Speaker 1:

Yeah, it was wild. I think it was wild even if you just think about US law, because we have cases like Bantam Books and cases like Dart v. Backpage about how, when law enforcement or entities that aren't courts go around trying to tell intermediaries what speech is illegal, that's a First Amendment violation. We have this understanding even in US law. But then to have it be about Turkey in particular: as somebody who was a platform lawyer until 2015, I get a lot of calls from people at smaller platforms saying, "Hey, do you know any good lawyers in country X?" And for a long time the top values of country X have been Russia or Turkey, because both of those countries have an awful lot of government officials coming along wanting platforms to take down content because it's embarrassing to politicians or because it's critical journalism.

And so it is just particularly wild that Turkey was the example. There are a lot of people who until very recently worked at Twitter who know perfectly well what kinds of notices come from Turkey. And I don't want to say they're all invalid. Certainly, there are claims about things that actually are illegal under Turkish law, and those should be fine to take down. But the idea that every single thing that Turkish police say can create liability in the US under the Anti-Terrorism Act, well, it's too bad that Elon Musk fired all those lawyers. It's too bad that, apparently, that concern didn't percolate through to the prep for the platform's position here. It also seems like the current Twitter management's strongly expressed preference not to take things down just because government actors want them taken down maybe didn't percolate through to this oral argument either. That part of it was remarkable and frankly quite upsetting to me, just to be looking up there and seeing no one speaking for the user speech rights in two and a half hours of argument before the court.

Speaker 2:

Completely agree. It was through the looking glass in that moment. Speaking of Musk's changes at Twitter, there was this moment where Justice Kagan asked a complete hypothetical of Twitter, to say, look, let's take the facts as you have them. You have a platform that has ISIS content on it, and people generally know that ISIS uses social media platforms, but I'm going to change one fact, which is that this platform doesn't have a policy of taking down this kind of speech, or it doesn't have a big program for taking down terrorist content.

Of course, the Twitter of today, I mean it still does as far as I know, has policies against terrorist content and is moderating that content, but the Twitter of today is not the Twitter of 2017, which is when the facts of this case took place, and it is completely differently staffed and resourced. And of course, there are a bunch of other platforms that don't have the same kind of content moderation programs. And I thought it was very interesting that Kagan was just raising this complete hypothetical about what if you have a platform that doesn't have an extensive content moderation program. I was curious what you made of that line of questioning and whether that should make any difference under the ATA.

Speaker 1:

Well, it's funny, I mean, over in Gonzalez, there's this idea that platforms need to be neutral or have neutral algorithms at least to avoid losing their immunity. So do the plaintiffs want a world in which platforms step away from neutrality and make choices like taking down terrorist content, or do they want a world where platforms have to maintain some kind of neutrality in order to be immunized under CDA 230? It's difficult to be potentially asking for both of those things.

On the question of whether platforms should have to have policies against terrorist content in order to avoid ATA liability, I don't know. I think Seth Waxman's answer was no, that such a platform still doesn't violate the ATA, and I think that's probably right. As a matter of ATA and tort law, they don't have a duty to adopt particular policies. On the other hand, you could argue that regardless of what their policies are, if they are given concrete knowledge of specific instances of terrorist content, they still need to act to take that down. I don't know. It's a hard question to thread through. And I think it really depends on what we think the basic tort law principles of the ATA are.

Speaker 2:

And I guess there was a question in the air, although there wasn't as much discussion of this as I expected, which is that the ATA is an exceptional statute in many ways, and it was intended to enact this extremely broad concept of secondary liability, really to intentionally allow people to sue parties that aided and abetted terrorism on a broad basis. And so there was this question, I guess, of how much is this ATA-specific liability for aiding and abetting, and how much of this is just general aiding and abetting liability not to do with foreign terrorist organizations? Is this an FTO kind of exception to, potentially, the free speech interests that we should be thinking about, or is it a broader basis?

And sometimes it seemed that the justices were prepared to sort of engage in this broader question. Like, for example, there was talk of gangs and gangsters and other kinds of criminal activity, which surprised me, but am I again being too pessimistic? How much do you think this was a context-specific and terrorism-specific conversation, and how much of it was broader?

Speaker 1:

Well, I mean, this is a very textualist court. They want to be guided by the language of the statute, and the language of the statute says aiding and abetting an attack, and attaches the word knowingly to that. I'm sorry, I'm looking through my notes trying to see who said this. I think it was Gorsuch who said, "If this were a criminal case for aiding and abetting, you would have a high mens rea such as intent." And so clearly the platforms would not be ... Well, I think clearly, certainly in this case, there's no indication the platforms would be liable under that standard. And part of what's weird is that civil aiding and abetting standards are just unusual. So there aren't that many sources of law to look to. But even in the sources of law that do exist to look to, which are this Halberstam case that Congress referenced, and then some older cases about securities law, which are, I think, bad law for other reasons, but the aiding and abetting analysis remains intact, it's not crazy broad.

I don't think there's something about the terrorism context that makes this aiding and abetting language reach farther than other aiding and abetting language would in other cases. And it doesn't seem like that language generally would reach platforms in this situation. The thing that I think kept getting confused and Elena Kagan kept having to clarify it, is this was not a claim under the material support statute. The material support statute was the anti-terrorism law, the ATA provision that was at issue in a First Amendment case called Holder v. Humanitarian Law Project, where Kagan was the solicitor general. So she was on the prosecution side.

In a material support case, the contribution prong, the actus reus, is met even if all you did was fund a hospital or give any kind of support to a designated bad actor, even if that support didn't contribute in any way to an act of terrorism other than by displacing resources to make them available for the terrorist goals of the organization.

So in a material support claim, presumably just providing a general-purpose platform that ISIS uses, maybe that could meet the actus reus part, and then the fight would be about whether the platforms had sufficient knowledge or sufficient mens rea for liability. But this isn't that claim, as Kagan had to point out twice. This is a claim where the actus reus is substantially assisting a particular act of terrorism. And so there is a line of defense and an important question to fight about: how attenuated is the connection between what Twitter did, which is to provide a general-purpose service, be aware ISIS is out there using it somewhat, and take down ISIS content or posts when they knew about it, and the harm in this case? Is that causally related enough that you should say the substantial assistance prong is met or not? One of the justices brought up this sort of butterfly effect concern that, of course, everything is connected to everything, and we still need to draw a line and figure out where the liability lies.

Speaker 2:

Of course, Humanitarian Law Project is a sort of famous, or infamous, case for its lack of First Amendment protection for the speech at issue there, which was sort of legal advice to an FTO that was totally unrelated to particular terrorist activities. As you say, Kagan was the SG, and I think the lack of solicitude to First Amendment concerns in that case, which I think is widely understood as being somewhat terrorism-exceptional, you just wouldn't have that same exception to First Amendment protection in a domestic case, suggests there's something sort of strange happening there that was maybe feeding through to explain the lack of First Amendment concern in this case, because it is a similar kind of context.

Like I said, the First Amendment only came up once, and it was ironically brought up by the plaintiff's lawyer as a reason why the court didn't need to be too concerned about too broad a conception of aiding and abetting in this case, because the First Amendment would solve the problems later. So Kavanaugh gave this example of a CNN interview of Bin Laden sometime in the early 2000s that apparently was used for recruiting purposes. And he was asking the lawyer, would that expose CNN to aiding and abetting liability for hosting this speech that was then later used for recruitment purposes? And the lawyer said in response, "Oh, no, the First Amendment would take care of that." And Kavanaugh didn't really press the point, but it was remarkable to me that that was the only context in which the First Amendment came up.

Speaker 1:

It was remarkable. And there's just this key difference, which is that in that case, if somebody sued CNN, that's CNN's own speech and own activity on the line, and of course they're going to defend it, and of course they're going to litigate the First Amendment issues. And so if this were not a case about speech intermediaries, about platforms, then saying, oh, the First Amendment will solve it, would still be a terrible answer, because we need to figure out how this statute actually is going to avoid that problem. But at least it's plausible that in the future, if this issue comes up, someone's going to litigate over it.

But here, the speech that's on the line is the speech of platform users, but the entity that would have to litigate to defend it is the platform. The user is never even going to know if they got silenced because of the court's ATA interpretation. They're never going to have a chance to litigate. They're never going to have a chance to raise these First Amendment arguments. And so, particularly in the platform context, the idea that you can just take hard speech questions and kick them down the road to be litigated later is wrong.

Speaker 2:

Because even if they knew, there'd be all sorts of standing issues and things like that. And so the idea that, oh, we can just, exactly as you say, kick this down the road may sound attractive in theory, but in practice that's not how it would pan out. Any other big sort of takeaways, surprising moments, anything else from being in the courtroom that you think's worth mentioning?

Speaker 1:

I don't think I have more from the courtroom. I do have one longstanding issue in the case to point out, which also didn't come up in the courtroom, which is plaintiff's theory that, to avoid liability, it's not enough that Twitter took down ISIS posts and accounts that it knew about, specific ones that it knew about. Rather, because Twitter was aware that various actors, news media and government included, had said, hey, there's a bunch more ISIS content out there, this general knowledge of other potential ISIS content created a duty for Twitter to go out and look for it and go out and take it down. One of the things I was saying before is that a bog-standard question in copyright law or in other areas of intermediary liability is, what is culpable knowledge? Is it specific knowledge of specific posts, and then you can protect yourself as a platform by taking them down? Or is general knowledge enough to cause you to have to go out and do something else, like proactively search for the content?

And the problems with standards that cause platforms to go out there and police everything that users say in search of illegality are substantial. It's a huge topic in international human rights law discussions. It was and continues to be at the center of debates about platforms and free expression in the EU. And that dispute just kind of hasn't surfaced very much in the US, and people aren't very aware of it. I mean, in the EU, the concerns about causing platforms to proactively monitor users' speech and then very likely take down far more speech than the law would actually require, that was enough to cause the European Parliament to reject a monitoring obligation for terrorist content on free expression grounds.

And now here we are back in the US with a plaintiff asking for effectively a monitoring requirement for terrorist content, and nobody is mentioning the free expression grounds that determined the outcome of this in Europe. The US might just back into having the very rule that Europe rejected. There's a US-specific version of this problem, which is, as we know from case law about child sexual abuse content, if the law requires platforms to go out and proactively monitor and then to report what they find to the government, which platforms do in the terrorism context in many cases, that can turn the platform into a state actor for purposes of the Fourth Amendment, for purposes of surveillance law. And what that means in practice is that when these actual very bad people who've been disseminating child abuse content or involved in terrorism get prosecuted, evidence against them can't be introduced because it was obtained in violation of the Fourth Amendment.

So there's this very real US problem with monitoring obligations, which was well enough understood that in the last iteration of the CSAM laws, they spelled out that nothing in here creates a monitoring obligation, and in the copyright laws, the Digital Millennium Copyright Act spells out that nothing in here shall create a monitoring obligation. But somehow here in this case, the idea of a monitoring obligation kind of snuck back in without anybody recognizing or speaking to that set of problems.

Speaker 2:

If anything, the concern from the bench that I heard was more the opposite: what would happen if that weren't the case, if the platform didn't have to continually monitor or weren't looking out for this stuff? It was remarkable. It's not the way that I expected the argument to go, I have to say, in either case. So I guess I'm going to sound a more pessimistic note at the end of this conversation than I did at the end of our last conversation.

Speaker 1:

Actually, can we do one more thing?

Speaker 2:

Yes, absolutely.

Speaker 1:

Sorry, before you wrap it with a bow: which justice was it that did the hypotheticals about what if there's a bad gangster in town and the police know for sure that he's a criminal, but they can't manage to convict him, they can't muster enough evidence, so no court has ruled on it. Can the police go out and tell the ISP to stop giving this criminal internet access? Can they tell the electric company to cut him off? Can they tell the phone company to cut him off? Can they tell the food delivery service to stop bringing him takeout? And that was quite a moment for me, because in teaching this stuff, we often work through these exact same hypotheticals in my class. Which of these entities do you want to be cutting off service recipients based on suspicion of criminality? And then what role should the police have in prompting that to happen? And so that was a useful moment for me, to have one of the justices work through those hypotheticals, or ask the parties to, live from the bench.

Speaker 2:

Yeah, that was Justice Alito, I believe. I mean, a number of justices brought up the issue of gangsters, and Justice Jackson was playing with this idea of, what about known gangsters, which again was kind of wild to me, because we know a lot about the discriminatory impacts of gang policing and the interests at stake there. So again, like you say, it was kind of wild.

Speaker 1:

And terrorism law enforcement, of course, doesn't have any discriminatory impacts; that's not a problem we've encountered.

Speaker 2:

Well, if you were listening to argument on Wednesday, you absolutely wouldn't know it, that's for sure. So I mean, I just felt bewildered, honestly, because I felt like I was living in another universe where what I thought was the core issue of this case just wasn't being talked about at all. You might come out different ways on what the core issue was, but the core issue of the speech interest at stake just didn't even enter the room.

Speaker 1:

Yeah, it was remarkable.

Speaker 2:

There was lots of swearing going on. Less from you, though, I assume being in the courtroom.

Speaker 1:

I struggled to keep my mouth shut.

Speaker 2:

Yeah. Well, I'm glad you did, and you didn't get hauled away for being disruptive.

Speaker 1:

They warn you when you're coming in that if you are disruptive, you will spend the night in jail.

Speaker 2:

No way, really?

Speaker 1:

Really.

Speaker 2:

Well, free speech. There you go. "If you say anything, we will lock you up." That is the Supreme Court for you. Okay. This has been great. Thank you very much, Daphne, for our Superbowl wrap-up here. This has been Moderated Content. The show will be available in all the usual places, including Apple Podcasts and Spotify, and transcripts come up at law.stanford.edu/moderated content. The show is produced by Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. See you next week.