Moderated Content

Tech Law SCOTUS Superbowl First Half: Gonzalez

Episode Summary

Evelyn speaks with Moderated Content's Supreme Court correspondent Daphne Keller, director of the Program on Platform Regulation at Stanford's Cyber Policy Center, to discuss their quick takes on the Supreme Court oral arguments in Gonzalez v. Google.

Episode Transcription

Evelyn Douek:

Although I guess in this case, they could just decide not to build a nuclear power plant, because they could say, "Actually the nuclear power plant was improvidently granted, and we're just going to build a really good bike shed," potentially.

Daphne Keller:

Yes, and that's such a good metaphor really, because many things could go wrong with this nuclear power plant if they do build it.

Evelyn Douek:

Hello and welcome to Moderated Content, podcast content about content moderation, moderated by me, Evelyn Douek.

So we are here to discuss the argument this morning in Gonzalez v. Google. And before the argument, I had this whole intro written about how we were at the first half of the tech law Super Bowl, a case that was billed as possibly changing the internet, with the court hearing Twitter v. Taamneh tomorrow. But I'm no longer quite so confident that this is going to be the watershed ruling that some might have anticipated. But we can get into that in a minute.

During oral argument today, Justice Kagan noted that, "We're a court, we don't really know about these things. These are not the nine greatest experts on the internet."

But luckily I have someone here who does know about these things and is one of the greatest experts on the internet: our Supreme Court correspondent, live from the scene, Daphne Keller. Daphne is the director of the Program on Platform Regulation at Stanford's Cyber Policy Center and was formerly Associate General Counsel for Google. Daphne, thank you so much for joining us.

Daphne Keller:

It's great to be here. I will disclaim actual internet expertise, but I will concede lawyer internet expertise.

Evelyn Douek:

Excellent. Okay, so I think if people have found their way to this podcast, they're probably generally aware of what this case is about and I don't want to waste our precious time really sort of talking too much about the background.

The question at the hearing today was, broadly, something along the lines of whether a platform loses its section 230 protection for recommending content, and we can get into what happened today around that word "recommending." But just, straight up, Daphne, high-level takeaway: are you more or less nervous about the possible outcomes of this case after hearing argument this morning?

Daphne Keller:

I am less nervous because the justices asked a bunch of really smart questions and they weren't all questions that necessarily indicated that they will side with me. And I filed a brief along with the ACLU saying that the platform side should prevail in this case. But they were questions that reflected some pretty deep thought on questions like what is a neutral algorithm? Or what is the difference between publishing and recommending? Or why, as you mentioned Evelyn, why should the courts rather than Congress do this line drawing? And so I am more encouraged to think that they will rule carefully and cautiously and with an eye to how much damage a poorly considered ruling might cause.

Evelyn Douek:

Yeah, I think that's my takeaway too, and in particular, I think that is in large part down to the petitioner's arguments. I saw Tim Wu tweet this morning that he's not sure he's ever seen a party do so much damage to their own case in so little time. And it was a bit of a train wreck. Alito at one point said, "I'm afraid I'm completely confused by whatever argument you're making at this present time." Justice Jackson also said, "I guess I'm thoroughly confused." Thomas said, "Yes, what you said is responsive, but I don't understand it." There was a sort of situation where they were saying, "Help us draw a line." Justice Sotomayor at some point literally said to counsel, "Assume that we are looking for a way to draw this line between section 230 protecting a platform for hosting content but not protecting the platform for recommending content. Please tell us where this line is."

And that was the one job that the petitioner's counsel had today, and he didn't really rise to the challenge. I'm wondering if that's your perception of it, and what you thought about the probing around that line drawing that the justices were getting at.

Daphne Keller:

Yeah, I mean, I think... So the plaintiff's counsel is a very respected civil rights litigator with lots of Supreme Court experience, but not a tech background, as far as I know and as far as the briefs and the argument would lead me to believe.

And I think that a lot of his sort of flailing on this comes from a good place, which is that he doesn't want to mess up the internet. He doesn't want search results to become impossible. And so he didn't want to present arguments that would just throw web search under the bus or whatever. But he had a very hard time in the briefs, and again today, identifying a line that would allow him to win without having those consequences. And that's precisely why I think he shouldn't win: because I don't think there is a viable line that could be drawn.

But what he did throughout the briefs was keep trying to point to technical reasons to distinguish between YouTube recommendations and web search results. And he did so by pointing to, I don't even want to say technical jargon, because it's not very technical. For a while he was saying that because YouTube creates a new URL, a new web address, to host each video, that URL itself is new content from YouTube that stands outside CDA 230 immunity, and therefore he should win. And that's a crazy theory if you know anything about how HTTP works or how web hosting for uploaded content works.

And then he shifted to this theory about servers, and then today, live in court, he shifted to yet a third theory about liability based on the creation of thumbnails. Which, for those of us who've been following litigation in this area since the DMCA Perfect 10 v. Amazon case, we've been down this road before.

And in the copyright context that argument didn't work, and there's no particular reason to think it works here. His argument was that creating a thumbnail, a still image from the video, is an act of content creation in its own right that should create liability for the platforms. A: that's weird. I'm not sure how you're supposed to host videos and show what they are without thumbnails. B: I don't know how that relates to his argument that it's the act of making recommendations that creates the liability. It kind of seems like a different theory. C: it does not distinguish YouTube recommendations from web search, because web search results for images have thumbnails. So I'm not sure where he was going with it, but it didn't seem to work for any of the justices.

Evelyn Douek:

Yeah, this was a kind of bizarre feature of this morning. The word "thumbnails," if you Control-F the petitioner's brief, appears once, whereas if you played a drinking game with "thumbnails" this morning, you would be on the floor. And Google's lawyer-

Daphne Keller:

Yeah, and at one point one of the justices asked-

Evelyn Douek:

Right? Justice Barrett.

Daphne Keller:

... whether they were just screenshots of still images from the videos instead of thumbnails. And everyone in the courtroom who's even a little bit tech-savvy was like, "Yeah, a screenshot is basically what a thumbnail is." What is the distinction being drawn here? So it was rough.

Evelyn Douek:

Yeah. And Google's lawyer got up at some point and said, "Thumbnails aren't mentioned in the complaint." And so I was literally trying to work out what he was talking about while he was up there. And I guess from my perspective, if you are a justice, you know that this is really tricky, that was really apparent from the justices talking about it, you don't want to get it wrong, and there are high stakes for how the internet works. But you also have concerns about an absolute ruling either way, and you're looking for a line to draw. You would want counsel to be really well prepared and hold your hand through it and say, "It's okay. Here's a really clear line for you to be able to draw." And I just didn't hear anything in the argument that would make them feel that kind of confidence or comfort.

Daphne Keller:

Yeah, and you did hear some of the justices asking, "Wait a minute, was this in your complaint?" and going back and forth with the platform's counsel about what was and was not in the complaint, which could go to their deciding which issues they actually need to speak to.

And then Amy Coney Barrett, I think, asked twice whether this case could just be dismissed as moot if the platforms win in Taamneh, which is the case they're hearing tomorrow about liability under the Anti-terrorism Act, involving essentially the same facts and the same claims as this case but focused on Anti-terrorism Act liability instead of CDA 230. So it did seem like maybe they were starting to wonder if there was a way out of this.

Evelyn Douek:

Right. And I think one of the things that the petitioners focused on in their argument was the difference between immunity and liability. And this gets to the difference between Gonzalez and Taamneh, which I think is worth talking about.

So Gonzalez today was about this, at what point do the platforms lose their section 230 immunity? But tomorrow's case is about at what point do platforms become liable under the Anti-terrorism Act for that content?

And the plaintiffs' argument, in my impression, rested a lot today on the exceptional nature of this Anti-terrorism Act aiding and abetting liability. And the justices were really pushing on, well, what about other cases? Does this apply to defamation? And I heard a lot of skepticism from them about the distinction, but I'm curious what you made of this really extended argument about the difference between immunity and liability.

Daphne Keller:

Well, so what I thought I heard going on was both the justices and counsel kind of drifting toward talking about liability instead of immunity at times when it didn't really make sense to do so. Because, as you said, that's not what this case is about. This case is about the immunity, not the underlying liability. But I think all of them are just more comfortable with the liability framework. They've all, I'm sure, considered untold numbers of aiding and abetting claims, and they're familiar with that way of thinking about it. I mean, maybe they kept asking about aiding and abetting because they wanted to find something in the elements of aiding and abetting that could then also become a basis for losing immunity. If you have the [inaudible 00:10:38] for aiding and abetting and you do the actions for aiding and abetting, then maybe does that defeat immunity?

But they didn't really tie it together very well. And so it reminded me a little bit of, I don't know if you know the term, bike-shedding?

Evelyn Douek:

No.

Daphne Keller:

It's the idea... I don't know what the urban legend behind it is. The board of a corporation is meeting to talk about building a nuclear power plant, and also on the agenda is how to design the bike shed. And they talk about the bike shed for 45 minutes and the nuclear power plant for 10, because they're familiar with bike sheds; they have something to say about that. So it put me in mind of that a little bit.

Evelyn Douek:

Although I guess in this case, they could just decide not to build a nuclear power plant because they could say, actually the nuclear power plant was improvidently granted and we are just going to build a really good bike shed-

Daphne Keller:

Yes.

Evelyn Douek:

Potentially.

Daphne Keller:

And that's such a good metaphor, really, because many things could go wrong with this nuclear power plant if they do build it.

Then the other thing there: you mentioned how plaintiff's counsel was trying to make it sound like really it's only Anti-terrorism Act liability that could reasonably exist and defeat immunity here on the plaintiffs' theory. So don't worry, court, you don't have to worry about defamation, you don't have to worry about intentional infliction of emotional distress. And there was a similar back and forth with the advocate from the Solicitor General's Office, where the court kept pushing him really hard on how his theory, which aligned with the plaintiffs' in the sense of saying that recommendations stand outside of CDA 230 immunity, avoids punching holes in 230 for defamation claims. And he kept falling back on, not even an argument that immunity would be preserved, but instead that the platforms would win on the merits for defamation claims or for intentional infliction of emotional distress claims, which is not a satisfying answer at all if you think that the value of 230 is creating a rapid route to cheaper and more efficient dismissals of cases.

And a lot of the justices, maybe in particular Justice Kavanaugh and Chief Justice Roberts, I think were pretty focused on that sort of efficiency issue.

Evelyn Douek:

Yeah, I think it was Justice Alito, actually, flipping to Google's lawyer. Justice Alito at some point asked Google's lawyer, "Would Google collapse and the internet be destroyed if YouTube were liable for not taking down things that it knows are defamatory and false?" And the answer to that was, "Well no, actually, probably Google would be okay, but almost every other website wouldn't," pointing to Yelp's brief in particular and saying, "Look, this is a much smaller platform. They don't have the resources of Google," and how Yelp talks about how onerous it would be and so how important these procedural benefits are for smaller platforms. And I thought at this point Justice Alito was expressing a lot of skepticism of the petitioner's arguments. Justice Thomas as well, straight out of the gate, was also being very skeptical. I think that there was a lot of skepticism on the bench.

Daphne Keller:

And another thing, going to what's special about the Anti-terrorism Act: one theory of why it's special is that these are claims about connecting people rather than claims about showing content to people.

But the plaintiffs seemed to disclaim that theory and say, "No, I'm not arguing that I can get through section 230 because this is about recommending an account or recommending people to each other," which was surprising to me, because that seemed like a useful position for him, a card to keep in his hand. The connecting-people theory was a big part of the dissent in Force v. Facebook. And it is arguably something that is truly different about the Anti-terrorism Act context versus some other torts. Kind of, sort of, maybe. So I wasn't really sure why he made that concession.

Evelyn Douek:

Yeah. So if Justice Thomas's and Justice Alito's skepticism on the one hand was surprising to me, another surprise from the day, I think, was Justice Jackson when questioning Google's lawyer. You are nodding at that. So could you describe what was surprising and what you heard in that line of questioning?

Daphne Keller:

Well, she seemed really interested in the idea that lower courts, meaning the courts she just came from, have been getting 230 wrong all this time, and that maybe it is in fact a much narrower statute than courts have interpreted it to be. I think she was suggesting drawing a distinction between publisher and distributor liability, such that platforms would lose immunity if they know about unlawful content.

You're nodding. Yeah.

Evelyn Douek:

Right.

Daphne Keller:

And she was quite forceful on that and pushed the platforms' lawyer pretty hard. And then on the flip side of that, when she was talking to the plaintiffs' lawyer, she had a similar line of inquiry, but she said something like, "Of course, this only helps you if you can come up with a workable distinction between what is a recommendation and what is publishing." So even for her, it seemed like she was stuck on that. But I had thought of Clarence Thomas as the man who was walking into the room ready to rethink the basics of 230, and that made it sound like she might be in that camp also.

Evelyn Douek:

Right. Yeah. She was really leaning hard on this Good Samaritan provision, this Good Samaritan idea that the point of section 230 was really to protect platforms and encourage them to take down harmful content, and that it shouldn't apply to platforms that are acting in some sort of bad faith. And as you say, that is not the interpretation of the statute that has prevailed.

Daphne Keller:

Yeah, I mean, I think a lot of the justices were troubled by the idea that a platform might act in bad faith and still get immunity. And so many of them had questions for counsel on both sides that were along the lines of, well, what if you did design an algorithm that was deliberately intended to amplify ISIS? What if-

Evelyn Douek:

The pro-ISIS algorithm that Justice Kagan put to counsel, and the other one was Justice Sotomayor's hypothetical about a dating app that had an intentionally discriminatory algorithm that matched only Black people with Black people, et cetera?

Daphne Keller:

Yeah, although I think those are two distinct theories. So one is the theory where, if you take ISIS content and you deliberately try to push that bad and dangerous content to users, then you lose immunity. And on the platforms' theory, they would not lose immunity then, because the harm still comes from the content; the starting point is that the content is harmful. Versus the... I think the dating site hypothetical was not really a hypothetical at all. It was about Facebook and the cases alleging racially, gender, and sexual-orientation discriminatory targeting of housing, employment, and credit ads.

And as we've talked about before, there was a real concern in the briefs from a lot of the more progressive organizations to make sure that's not immunized: you can't take a perfectly lawful housing ad, target it in a prohibited, illegal, racially discriminatory way, and then plead 230 immunity, because the gravamen of the harm comes from the ranking and the targeting. So I think that is different from the ISIS example because of the difference in whether the content was harmful in the first place.

Evelyn Douek:

Fair enough. One other thing that maybe surprised me, and I'm curious to get your reaction to, is the limits on their argument that Google's lawyer was willing to accept. They took a much less strong position, perhaps, than I expected. She talked much more about a continuum between sort of hosting and recommending, and there was this exchange with Justice Jackson about: if you had a featured video section on your website, and it said "featured," and you put a video there for a week and left it up, would that count as recommending? And the lawyer seemed really hesitant to say one way or the other and sort of said, "Well, that depends on whether you see that as an endorsement or a recommendation," which was something that I was a little bit surprised to see her concede. But I'm curious what you made of that part of the argument.

Daphne Keller:

Yeah, it was really interesting. I mean, I do think that it is wise of them not to stake out a super hardline position. It was very, I think, good for their side that they walked in prepared to identify some things that are outside of 230 immunity on their reading. I think they appropriately threw Facebook under the bus and identified the racial targeting of housing ads as outside of the 230 immunity. They seemingly made a concession about immunity for paid ads that kind of surprised me. If I understood it right, it suggests that there's no more 230 immunity for ads overall. I feel like I need to dig into that more, because that would be a big concession, but maybe they made it. Then on the thing of whether a staff pick or something could cross the line into defeating immunity, that's kind of a weird one.

It seemed like maybe the goal was to say to justices who are asking themselves, what if a platform has bad intent, et cetera, "Yeah, there's something there for you guys; there is some way that platforms get punished in these hypotheticals that you're spinning out." But I think a win on those grounds is not a great platform win if it leaves this opportunity for any plaintiff to say, "Oh, not only is my allegation based on recommendations, but it's also based on the thing that's on the other side of the line" that the platforms, or rather Google, seemed to be inviting the Supreme Court to draw there. So that surprised me. I think a very cynical take, and I'm a former Google lawyer so I won't adopt this very cynical take, but a very cynical take on every major platform concession is, "Oh, is this a rule that you could survive financially and your smaller competitors could not, because they can't litigate all these cases or they can't build the tools to hold the line that you've identified?" So I imagine that among people more cynical than myself, speculation along those lines is going on.

Evelyn Douek:

Right. You're not endorsing that take, just raising it, just saying, "It's here, it's sitting on this table if you're interested." We're recommending it. Yeah, no, definitely not. Yeah, I was also thinking they were just throwing TikTok under the bus, because TikTok basically is nothing but a For You feed, and it's like, uh-oh.

Daphne Keller:

Good point. Not just smaller platforms that could be thrown under the bus.

Evelyn Douek:

Exactly. It's also that other platform you really hate. You want to get them? Go for it. Any other really surprising moments that stand out to you? I mean, one thing that was surprising was that it went for nearly three hours, two hours and 45 minutes, when it was scheduled to be 70 minutes of argument. But I think, in keeping with the idea that the justices really were trying to grapple with the details and get into what rule they could craft in this case, they spent a lot of time searching for that kind of detail. But anything else that particularly stood out to you?

Daphne Keller:

So I was on the lookout for anything that seemed like a reference to the likely pending NetChoice cases and the question of whether ranking is First Amendment-protected speech on the platforms' part at the same time that it is also immunized conduct under 230. And both Kavanaugh and Gorsuch at one point alluded to the idea of ranking being platform speech, but not in a way that really previewed anything about those cases. But it did make me wonder if they're thinking about them already.

Evelyn Douek:

I'm sure they are. And I was wondering whether the fact that those cases are coming down the pipeline, the fact that there's another way to get platforms, makes the justices feel a little bit like, "We don't need to do anything here in this particularly tricky area, because we're going to have a second bite at the apple." I don't know, that's a cynical take that I am also not recommending or endorsing, just pointing out as sitting on the table.

And of course we have Taamneh tomorrow. So my bet, I'm not a betting person, but if I were, I would say something like: this scared the justices in terms of the difficulty of drawing a line, and now I think it is much more likely that tomorrow in Taamneh they will decide there is no liability. We've talked about this on the podcast before, and we can talk about it again after argument at some point, but finding liability on a fact set like Taamneh's would be an extremely broad basis of liability. So it seems very unlikely that they're going to do that. And if they find that there's no basis for liability in Taamneh, then they could say, "Oh, well, Gonzalez was improvidently granted; we don't have to decide the section 230 issue, because there's no underlying liability."

So something that may have been a bit of a dark horse before is now probably my most likely outcome. But I'm wondering if you have any other predictions or takeaways?

Daphne Keller:

Yeah, so I heard a lot of skepticism about the underlying claim in Taamneh today. I also heard justices connecting it to the algorithm question here in a way I hadn't quite anticipated. I think it was Clarence Thomas who said, "I don't understand how a neutral algorithm could support aiding and abetting liability." So there was this line they were drawing from the algorithm question here to the elements of aiding and abetting liability, which it will be interesting to maybe hear more about tomorrow.

Evelyn Douek:

Excellent. Anything else before we wrap?

Daphne Keller:

I'm so tired, Evelyn.

Evelyn Douek:

Yeah, yeah. What was it like actually being in the courtroom? Was it exciting, exhilarating, adrenaline inducing?

Daphne Keller:

It was remarkable. It is the culmination of a lot, a lot, a lot of thought and work on these exact issues over the years. And as a teacher of this subject who has sort of watched students struggle to get up to speed, then master it, then ask increasingly sophisticated questions, I felt pretty happy that the court was already asking the more sophisticated questions.

Evelyn Douek:

Yeah, I have to say, on that point, I was a little disappointed in the quality of the advocacy, especially from the petitioners; who knows what was going on there.

One thing that has been floating around is that many lawyers were conflicted out of being the lawyers for the other side, opposing the platforms, because the tech companies have hired so many appellate lawyers over the years. So that might have been one reason; Bloomberg has an article about that.

But it does feel like a little bit of a letdown to have spent all of these years, all of this time, all of these hours, all of these podcasts talking about these cases, and then to see it go up and the advocates struggling with some of the most basic and, frankly, really predictable questions about line drawing between recommendations and search engines and things like that.

Daphne Keller:

Yeah, I want to see the numbers on that idea that the good lawyers were conflicted out because in copyright cases against platforms, they managed to find very skilled counsel at the appellate level.

Evelyn Douek:

Yeah.

Daphne Keller:

So, I don't know.

Evelyn Douek:

I'm also skeptical, so I would read 5,000 words on how this argument actually ended up going this way, if any reporter is looking for an interesting story.

And with that, we will have to reconvene at some point in the future to talk about what happens once we hear argument tomorrow in the second half of the Super Bowl. But for now, this has been Moderated Content. This show is available in all the usual places, including Apple Podcasts and Spotify and transcripts are available at law.stanford.edu/moderatedcontent.

The show is produced by Brian Pelletier. Special thanks also to Alyssa Ashdown, Justin Fu and Rob Huffman. See you soon.