Moderated Content

MC Weekly Update 8/8: 11 Dimensional Free Speech Theory

Episode Summary

Alex and Evelyn discuss Ex-Twitter's latest examples of "free speech absolutism"; Apple removing an independent Russian media outlet's podcast from its podcast app; the Cambodian Prime Minister's return to Facebook; TikTok's new For EU measures; skyrocketing demand for Perspective API to moderate LLM hate speech; the reasons the dismissal of a First Amendment challenge to Utah's age verification law is so scary; and your weekly dose of random sports news.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

X-Twitter Corner

Sports Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn Douek:

Alex, my condolences. I don't know if you saw the big news over the weekend.

Alex Stamos:

I did. I watched a replay. Yes, unfortunately.

Evelyn Douek:

The US women's team was knocked out of the Women's World Cup in a shock loss to Sweden on penalty kicks.

Alex Stamos:

Penalty kicks are the absolute worst way to solve any problem. I would not use them as a mechanism to figure out who owes five bucks when buying each other lunch, right? As a way to decide the champion of an international sporting competition, it is a completely ridiculous, ridiculous standard. My dad was a soccer ref for years, and his idea was you do extra time and you start pulling people from the field, starting with the goalie. So you just go from 11, to 10, to nine to a side until somebody scores, which I think would be way better and would at least use the same skills as the actual game, unlike penalty kicks, which are just a total crapshoot.

Evelyn Douek:

Yeah, this is a really hard loss. I feel sad for the listeners that weren't watching. And if you are to believe the right-wing news cycle, their wokeness is to blame.

Alex Stamos:

That's right. Back when female athletes used to be incredibly conservative, I remember in the 1990s-

Evelyn Douek:

Yeah, but then those hair chemicals from the hair dye just start to seep in and it's all over for you. Welcome to Moderated Content's weekly, slightly random, in-the-raw, comprehensive news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. We are headed straight to our X-Twitter corner. Okay, so in a follow-up to the story we were talking about last week, where Twitter was threatening to sue the Center for Countering Digital Hate in a ridiculous letter, it has since followed through on that threat and sued the CCDH for a bunch of things, though not the Lanham Act claim it mentioned in last week's threatening letter. Because, as we discussed, that was a ridiculous claim.

So they've come through with a bunch of other claims around breach of contract, scraping of the website, and CFAA violations. I don't really know what to say here, except that it's just ridiculous that this supposedly free speech absolutist is still trying to shut up a civil society organization that is expressing its views about how the digital public sphere should be managed. Whether we agree with them or not, or whether we agree with their methodology, that is certainly something they're allowed to do.

If you read the lawsuit, I don't have a lot to say about it, except that it's essentially a defamation claim dressed up as a bunch of other claims, because all of the harm they're alleging flows from the things that CCDH said about Twitter, or X, as a result of the research that they did, namely that it cost Twitter things like advertising revenue. And I'm absolutely sure it was CCDH's reports that cost Twitter advertising revenue, and not, for example, the face of the platform tweeting out outrageous things and personally reinstating Kanye's account after he tweeted swastikas. So, yeah, I don't really know what to say about that.

Alex Stamos:

Yeah. I mean, substantively, it's still a bad idea. Just last week, we had a great interview with real researchers who demonstrated that if you look at data provided by platforms, you will come up with complicated, nuanced answers to questions like, "Does social media ruin the world?", much more complicated and nuanced answers than you will read in most of the mainstream media. That's what happens when people do research, and cutting those folks off and then suing them is ridiculous. Also, Elon Musk keeps on using the term free speech. I do not think it means what he thinks it means. It is just... Yeah.

Evelyn Douek:

It's very nice.

Alex Stamos:

You just can't make any of the arguments he's made. I don't know. I mean, I don't know how many times on this podcast we've talked about, "Oh, Elon Musk was hypocritical," but this one's just amazingly bad: in any situation where somebody criticizes him or his people or his company, he reaches for these legal arguments. And neither you nor I are CFAA experts, but our colleague, Riana, is. This fall she'll be teaching the Hack Lab class with me, and she's going to do a whole lecture on the CFAA.

And one thing she'll talk about is that, over and over again over the last several years, first the district courts, then the appellate courts, and even the Supreme Court have narrowed the use of the civil CFAA to very limited grounds. And so this feels like a flashback to how the CFAA used to be used back in the day. I do not think it is compatible with the courts' current understanding of whether or not scraping can be pursued in this way.

Evelyn Douek:

Yeah. So it'll be one to watch, and it may unintentionally have broader impacts on how CFAA claims pan out in these cases and on this kind of research and public speech around what's happening on platforms. So, yes, if this actually eventuates, that's what we'll be watching. Speaking of lawsuits, this one's actually a pleasant surprise. So we talked a few weeks ago about an Indian court order that held that Twitter had not been compliant with federal government orders to remove content and accounts in India, especially around the farmers' protests and complaints around the pandemic.

And this was a Dorsey-era lawsuit that Twitter had brought challenging those government orders, and the court had upheld the orders in a pretty draconian and extreme reading of what the law would allow. And we talked at the time about how this was the kind of thing that Twitter used to do, standing up to India in the pre-Musk era, and wondered whether we would see an appeal against this really quite broad and shocking ruling. And we have this week: Twitter has appealed the order, saying that if it is upheld, it could embolden New Delhi to block more content and broaden the scope of censorship. So that's a really pleasant surprise. My question is, does Musk know about this? Or is this one of those situations where, because his attention span is not exactly the longest, he probably doesn't know most of what's happening within his company, and maybe this is not a Musk thing? We will have to wait and see how it pans out.

Alex Stamos:

Yeah, it's hard to tell. I mean, the decision was certainly made recently, but the fact that Musk has traveled to India, and that there have been more discussions with the Indian government about opening up the Indian market, which Tesla is currently locked out of, makes me feel that this is probably not a decision by him. But maybe it is. Maybe he has decided that he's a free speech absolutist again, although appealing these kinds of threats from the government in India while making the equivalent threats against researchers in the United States seems a little bit incompatible. But you know what? Maybe he has a very nuanced opinion of what free speech means. Clearly, it's at least nuanced.

Evelyn Douek:

It's galaxy brain. Our tiny little minds cannot comprehend this level of sophistication.

Alex Stamos:

Right. We are seeing the three-dimensional projection of an 11-dimensional idea of free speech. To normal mortals like us, when it impinges upon four-dimensional space-time, it can only be seen as a tiny little hypocritical corner, but we don't understand the vast galaxy that is he who contains multitudes.

Evelyn Douek:

Yeah. Pity my students, that they have to learn such a simplified understanding of free speech from me and not from these legends.

Alex Stamos:

The true Stanford Law scandal is that you do not understand 11-dimensional free speech. I think now we're going to need an 11-dimensional free speech sound. It's going to have to be like a Star Trek sound or something, but yes, it is too bad that Stanford Law doesn't properly educate 1Ls on the true meaning of the First Amendment.

Evelyn Douek:

Yeah, this is the true Stanford free speech scandal, for sure. Okay, speaking of authoritarian governments, just moving right along and breezing past that, authoritarian governments and demands to remove content: you have a story that you want to talk about. I think it's a really important story out of Russia that's flying under the radar. So why don't you tell us what's going on?

Alex Stamos:

So, as we've discussed before, of all the major Silicon Valley companies, the one that has avoided most of the interesting trust and safety, slash, content moderation scandals has been Apple, because their business is generally not selling advertising or showing people social media posts. Their business is selling multi-thousand-dollar slabs of glass and silicon that you pay for. And that turns out to be a really good business, a better business than selling ads in lots of cases. But there are a couple of exceptions where Apple is making a decision about what is platformed or not, and one of those is their podcast directory, which is much bigger than just the Apple Podcasts app. I think people don't understand that, in the podcast world, because Apple provides open APIs to lots of folks, whether Apple hosts a podcast flows down into lots of other apps. So like the Overcast app, which is what I use and totally recommend, by Marco-

Evelyn Douek:

Seconded.

Alex Stamos:

Yeah, Marco Arment makes that app, and he runs the Accidental Tech Podcast, which has about 400 times the audience of this podcast. So, yeah, go see ATP and use code Alex and Evelyn to buy a T-shirt from this huge podcast. But they, and most of the popular third-party podcast apps, other than Spotify and such, use the Apple index. And so whether Apple decides to have a podcast up or not is a huge, huge deal. And on August 5th, just a couple of nights ago, the very popular podcast What Happened by Meduza, which is one of the few independent Russian media orgs, so independent that they effectively had to leave Russia and now operate from outside the borders of the Russian Federation, though they are mostly Russian speakers and people of Russian descent, got taken down off of Apple Podcasts, which seems to have been in response to a demand by Roskomnadzor, the censorship authority, basically the media authority, of Russia.

So Meduza publicly complained about this, and a couple of days later, or I think the next day, it was put back up. So it's just another good indication that, despite their attempts to stay out of the fray, Apple does get pulled into this. And it's another situation, perhaps equivalent to what we see at Twitter, where you have these large companies with lots of people independently making decisions, perhaps even regional decisions, and the people making those decisions might be politically involved or have some preferences here, and it only gets escalated to headquarters because of a scandal.

And so it's not clear what Apple corporate's, Cupertino's, policy is here, but it should be another warning for Apple to be careful. The other thing that was surprising, in doing research for this, is that Apple's response rate to Russian data requests is actually incredibly high. In the last period they reported, which is the last half of 2021, unfortunately they've not updated their transparency report since, it was 85% for device information requests, which I believe covers iCloud backups. That is an incredibly high response rate for requests from Russia. And so there might be a deeper underlying issue here of Apple repeating in Russia the kind of support of authoritarianism that we see from them in China. So I think this is something to keep an eye on.
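
For the curious, the "Apple index" Alex describes is publicly queryable: third-party apps can resolve a show to its underlying RSS feed through Apple's iTunes Search API. A minimal sketch of that lookup in Python (the search term is just an example, and error handling is omitted):

```python
import json
import urllib.parse
import urllib.request

# Look up a show in Apple's public podcast directory (the iTunes Search API).
# Many third-party podcast apps resolve shows to their RSS feeds this way.
term = urllib.parse.quote("Moderated Content")  # example search term
url = f"https://itunes.apple.com/search?media=podcast&term={term}&limit=5"

with urllib.request.urlopen(url) as resp:
    results = json.load(resp)["results"]

for show in results:
    # feedUrl points at the show's own RSS feed. Apps fetch episodes from
    # the feed, but discoverability depends on Apple's index listing it.
    print(show["collectionName"], "->", show.get("feedUrl"))
```

Which is why a delisting, even a brief one, propagates well beyond Apple's own app.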

Evelyn Douek:

And, of course, the tried and true content moderation appeal mechanism is not to just keep appealing within the app, but to try to get public attention. In these circumstances, that is your most likely route to getting the content reinstated. And it's remarkable how much of this stuff flies under the radar. We've talked about how Apple's other main area of content moderation is the App Store, how lax a lot of the standards are there, and the lack of transparency, both in podcasts and in the App Store, where they're just adopting the YouTube approach of keeping their heads down and not explaining. And generally that's very effective.

I think another good example of how public attention is one of the key vectors shaping how platforms respond to content moderation controversies is Cambodia, so let's head there. Listeners of this show might remember that around a month ago, the Meta Oversight Board recommended to Meta that the Cambodian Prime Minister, Hun Sen, be suspended from Meta's platforms for six months. They had heard a case about a specific video in which he was threatening violence against his political opponents, and because of the severity of the threat in that particular video, the Prime Minister's history of committing human rights violations and intimidating political opponents, and his strategic use of social media specifically to amplify such threats, the board recommended that Meta suspend his account entirely for six months.

Now, the board's order to remove the particular video was binding, but the recommendation to suspend the account as a whole was optional; all Meta has to do is respond to that recommendation, and they still haven't. It's been about a month now, and they have around three weeks left to respond. In the meantime, Hun Sen took all of this very well. He immediately threatened to leave Facebook and block it within the country, but in an extremely relatable move, his commitment to quitting social media didn't last all that long, and he's back and posting this week.

And in an especially gutsy move, he tested Meta's fortitude by reposting the same violence-inciting video that got him the ban recommendation from the Oversight Board. Meta did remove it, but they have still left the account up, and there have been no visible repercussions for reposting it. And I just want to contrast: this whole thing is playing out and basically no one... I have seen barely any press coverage of this.

Shout out to Rest of World, a tech publication specifically dedicated to covering stories happening outside the West, for writing about this and trying to get Meta to comment, which they didn't. But contrast this story with all of the hullabaloo around President Trump's account and what Meta's decision was going to be about that specific account and blah, blah, blah, a circus that we participated in, for sure; I've seen barely any coverage of this one. And I think it shows the limits of the board's power, in terms of not even getting Meta to respond so far. In the meantime, an election, or a, quote, unquote, "election," has happened in the country. It shows that without media attention and the like, these platforms often just don't respond to very real threats that are happening.

Alex Stamos:

Yeah, and this is a big one, 'cause it's not just about Cambodia, although the impact on Cambodia is a big deal here. It is, once again, a demonstration that when you talk about free speech in an authoritarian state with a very media-savvy authoritarian at the top, sometimes the decision to reduce the amount of speech actually increases speech overall in aggregate. And that is definitely true in this case, where Hun Sen likes to use his Facebook account to attack anybody who criticizes him and to incite violence against them, which in the long run reduces the ability of normal people in Cambodia to have their voices heard.

And so I do think this is... I wish people were talking more about it, because it is an extreme example, and precisely because it's extreme, it's easier for us to see the contours of the challenges here. It is an extreme example of what we've been learning over the years: sometimes you have to reduce the spread of content from really big voices who have the ability to drive violence against people they disagree with, whether those folks are in authoritarian states or sometimes in democratic states.

And that's one of the great ironies and challenges of the age we're living in. It's just the reality, and it's a reality that a number of people have denied and are trying to weaponize, at least in the US and European context. So, yes, I do wish there were more discussion of it, because this is actually one of the more important content moderation fights happening in the world, if you consider Cambodia's history of political violence and the long history there of really terrible things happening when there's incitement against political enemies or racial and religious minorities.

Evelyn Douek:

Right, and it just draws the very clear contrast between the content moderation that platforms do in the West, where they get lots of scrutiny and lots of press attention, and what they do in other parts of the world, where they can get away with things. So, prove us wrong, Meta. We still have three weeks to hear what their response is, and maybe it's taking so long because they're going to deliver a 16-dimensional understanding of freedom of speech in their response, which I look forward to reading.

Okay, over to Europe. So this week, TikTok announced a number of new measures that it is rolling out in the EU in order to comply with the Digital Services Act, which comes into effect for major platforms at the end of this month. As a reminder, a few weeks ago, European Union Internal Market Commissioner Thierry Breton and his team conducted one of these so-called stress tests, part of a roadshow they've been doing at various platforms. They did one at TikTok's Dublin offices and gave them a fail grade. And so, as a result, I guess, TikTok has announced a bunch of things that it's doing.

So the DSA gets results, in a way, I guess. But looking at what the measures are, I think it's interesting, especially in light of the conversation we had last week with Josh Tucker and Jen Pan about the studies they did on Meta. So one of the measures TikTok is rolling out is that users in the EU will be able to turn off personalization. This means that their For You and Live feeds will just show popular videos from their area and around the world, rather than content recommended based on their interests, and their Following and Friends feeds will continue to show the people they follow, but in chronological order rather than ranked by the things they like. So first of all, I'm sure that lots of TikTok users will definitely use the For EU feed rather than the For You feed, because if there's anything we know about TikTok, it's that the algorithm is really, at best, an optional extra to what makes it attractive to users.
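
To make the two modes concrete, here is a toy sketch of what personalization on versus off might look like; the data model and ranking logic are illustrative assumptions, not TikTok's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Video:
    # Illustrative fields only; not TikTok's real data model.
    author: str
    posted_at: float        # Unix timestamp
    popularity: float       # e.g., regional view count
    personal_score: float   # a model's guess at this user's interest

def for_you_feed(videos: list[Video]) -> list[Video]:
    # Personalized mode: rank by predicted interest for this user.
    return sorted(videos, key=lambda v: v.personal_score, reverse=True)

def non_personalized_feed(videos: list[Video]) -> list[Video]:
    # Personalization off: just surface what's popular in the area.
    return sorted(videos, key=lambda v: v.popularity, reverse=True)

def following_feed(videos: list[Video], following: set[str]) -> list[Video]:
    # Following feed with personalization off: reverse-chronological.
    followed = [v for v in videos if v.author in following]
    return sorted(followed, key=lambda v: v.posted_at, reverse=True)
```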

Alex Stamos:

Yeah. If you didn't hear the sarcasm in Evelyn's voice: I think both of us agree that one of the things that's really driven TikTok is the quality of the algorithm. So, what, the idea is that people are going to opt in? If you're an active TikTok user, you're going to say, "No, all of a sudden I want..."? I don't even know what a chronological feed on TikTok looks like. So this will be fascinating to bounce off of a European VPN to see how bad it is. It's probably going to be absolutely terrible.

Like you pointed out, last week we talked about real empirical evidence that there are all kinds of challenges here: if you turn off algorithmic feeds, a bunch of bad things actually happen, and a bunch of the good things you expect don't actually materialize. And so you have to ask, what is the goal of the EU requiring this? Is the goal to help privacy? Well, TikTok doesn't know less about you just because there's no personalization. There's this bizarre privacy religious war where people think, "Oh, well, if you turn off ads, my privacy is better."

It's like: the platform that is showing you the ads still knows everything about you. There's no actual increase in your privacy; it's just that they're not using the data to make money. It becomes more of a moral point: "Oh, well, you shouldn't be able to use this data to personalize. You shouldn't use this data for ads." Okay, if you want to make a moral point, that's fine, but they're trying to make a substantive point, that it actually changes people's privacy, and it does not.

And this is another one where, if they think turning off personalization is actually going to make TikTok better, they're just going to be wrong. And so it is unfortunate that the EU has bought into this last round of cultural shibboleths around social media, which mostly came from the left, to be honest. It has all switched to coming from the right now, but a lot of the post-2016 criticism that was not empirical, that was not based upon evidence, what you might call the New York Times consensus about the problems of social media, criticism that has not stood up, over and over again, to real empirical scientific study, has now been enshrined in EU law.

So, congratulations, Europe. You have laws that are not based upon anything real. But I guess at least they pass laws. Here in the United States, we'll just do nothing until the country goes bankrupt and we can't pass bills and everything falls apart in the hottest summer ever. So I guess we should congratulate Europe on being able to actually pass laws, but it would be great if they were based upon real evidence and real study, and not upon the assumptions you hear either in the New York Times or from books written by certain Harvard professors who have no idea how the internet works.

Evelyn Douek:

Yeah. So, in the conversation we had last week, Jen Pan talked us through some of the findings about the chronological feed and the counterintuitive results: yes, okay, it did reduce people's time on platform, but it didn't reduce polarization or affect political knowledge, and it did increase the amount of political and untrustworthy content people saw. Now, obviously, those findings are based on the very particular algorithms on Facebook, so they're not directly applicable to TikTok. But my main takeaway from that conversation was that this is really complicated, and there are all these weird, unexpected trade-offs that we really need to understand before we just do these things that we think are going to be silver bullets and solve everything without any empirical foundation, exactly as you say.

And so it really does lend itself to the idea that what's going on here is this roadshow about the DSA rollout: "Oh, this company passed the stress test. This company failed the stress test. And, oh, look at the results we got. We got them to introduce a chronological feed, 'cause we all know that's going to fix everything." It's all about headlines, rather than about where the empirical, demonstrated evidence is going to be of how this piece of legislation is actually improving the speech ecosystem in these countries, which is a super interesting, super important, super difficult question. But that's not what we're focusing on. We're focusing on these headlines and these blog posts of companies saying, "Yes, yes. We are doing what you tell us to do."

Alex Stamos:

Yeah. I mean, the European idea is: okay, social media is bad, so if we make social media worse, then things will be better. And so you have these laws like this. If we make it hard for them to make money, and we make it so that people don't want to use the product, then somehow long-running issues in European society will suddenly be fixed. So, okay, good luck with that. The TikTok algorithmic feed's gone, so that's it. No more. There'll never be a protest in France again. Yeah.

Evelyn Douek:

Right. Yeah, exactly.

Alex Stamos:

Right. Robespierre, the real problem was that he had an algorithmic feed.

Evelyn Douek:

The TikTok algorithm, exactly, the For You feed. If there'd only been the For EU feed back then, the world would be a very different place. Okay, so speaking of tick-the-box content moderation compliance, let's segue to a story in Fast Company this week about Perspective API, which I thought was really interesting. Shout out to Yoel Roth, who skeeted, I guess is the word, about this on Bluesky and made some good points that brought it to my attention.

So, Google's Jigsaw unit, which created the Perspective API, said that demand for it has skyrocketed as large language model builders turn to it as their solution for content moderation problems. Perspective API is a free tool that people can use to detect, primarily, hate speech. And, apparently, lots of these LLM and chatbot builders are turning to it to try to make their products less toxic. The problem, of course, is that Perspective is a very blunt tool with lots of well-documented weaknesses, including high false-positive rates and bias, and it can be easily fooled.

There's some pretty famous research showing that it's more likely to label as toxic posts from people with disabilities, Black users, or drag queens; it has all of these biases built in. And it's just not clear that any of the companies picking up this product are building in the safeguards you'd need to correct for that. It can be useful to have an automated tool doing a first run, and if you then have a whole bunch of human reviewers going through and correcting the errors, maybe that'd be good, but the reporting in this article shows there are real reasons to doubt that anyone's building in those safeguards.
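
For context on what these builders are actually wiring in: calling Perspective means POSTing a comment to Jigsaw's commentanalyzer endpoint and reading back a 0-to-1 score per requested attribute. Here is a minimal sketch of the first-pass-plus-human-review pattern just described; the placeholder API key, threshold, and triage logic are illustrative assumptions:

```python
import json
import urllib.request

# Hypothetical placeholder key; a real deployment needs its own credentials.
PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=YOUR_API_KEY"
)

def toxicity_score(text: str) -> float:
    """Return Perspective's 0-1 TOXICITY score for one comment."""
    body = json.dumps({
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }).encode()
    req = urllib.request.Request(
        PERSPECTIVE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def triage(text: str, threshold: float = 0.8) -> str:
    # The safeguard at issue: treat the score as a first pass, not a verdict.
    # Flagged items go to a human review queue instead of being auto-removed.
    return "human_review" if toxicity_score(text) >= threshold else "allow"
```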

Alex Stamos:

Yeah. So this was a very odd article for me, that people are actually using Perspective API for production purposes. I've always seen Jigsaw as a think tank inside of Google. They have great people, they do really interesting work, but I always saw Perspective as an experiment they put out; it was never really a production-quality service. And I think there are a couple of problems with it. One, it's honestly not that great. Yoel talks about this a little bit: it has these notions of toxicity and such, which are interesting in some ways, but not directly tied to the specific trust and safety policies of specific platforms.

I think there is a future for APIs and service providers offering trust and safety scanning as a service. I think there's absolutely a market there, but the companies that are going to be successful are the ones that can retrain their models on your platform's specific rules and then run a feedback loop on what you think is and is not violating. And I know a number of other companies doing this kind of AI work that do exactly that. You cannot do that with Perspective API.

And so, as a result, it will say, "Oh, I think this is toxic or not," but what it actually labels toxic or not is not great. And I know that because every year I teach this class, Trust and Safety Engineering, where our students have to come up with a trust and safety project and are supposed to use some kind of scanning system to do it. These days, they almost always build their own AI: they train their own models locally or they use large language models. But a couple of years ago, one of the only APIs available to them was Perspective, and they'd use it and test it.

And one of the things we made them do was come up with tests they believed would create both false positives and false negatives, and they found a lot. And these are just students with 10 weeks of experience. So, yes, I don't think Perspective should be used in a production context. If you're looking for somebody to help you out, there are smaller commercial companies focused on providing trust and safety scanning solutions who will customize it for what you want and give you a back-and-forth feedback loop into the API. That is a much better direction than using a general API like this, with its weird standards like toxicity and meanness that don't directly map onto the trust and safety rules of any platform I know of.
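
The testing exercise Alex describes is easy to reproduce: run a small hand-labeled probe set through whatever scorer you're considering and count the misses. A minimal sketch, where the probes and threshold are illustrative and `score_fn` stands in for any toxicity scorer, such as the Perspective call sketched above:

```python
# Hand-labeled probes: (text, should_be_flagged). Illustrative examples only,
# aimed at documented weak spots: identity terms in benign posts tend to
# yield false positives, and obfuscated abuse tends to yield false negatives.
PROBES = [
    ("Have a wonderful day!", False),
    ("You are a disgrace and everyone hates you.", True),
    ("As a Black drag queen, I loved this show.", False),
    ("Nice try, you absolute g@rb@ge human.", True),
]

def evaluate(score_fn, threshold: float = 0.8) -> dict:
    """Count false positives and false negatives for a scoring function."""
    fp = fn = 0
    for text, should_flag in PROBES:
        flagged = score_fn(text) >= threshold
        if flagged and not should_flag:
            fp += 1
        if should_flag and not flagged:
            fn += 1
    return {"false_positives": fp, "false_negatives": fn, "total": len(PROBES)}

# Usage: evaluate(toxicity_score) with the Perspective helper above, or any
# other scorer with the same text-in, score-out signature.
```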

Evelyn Douek:

Right, and I guess it just becomes a situation where, if content moderation is fundamentally brand protection, then using what's becoming somewhat of an industry standard, "Oh, it's what everyone's using," feels like, "Oh, yes, we're doing our due diligence," when that's just not true in reality. So, yeah, it's potentially a tick-the-box compliance measure that's going to cause all of these problems and magnify these well-known biases across a bunch of services.

All right, so to the legal corner. Thank you. So this one's actually pretty scary and didn't get a lot of coverage this week, but we've talked before about Utah's age verification law, which requires adult websites to verify the age of their users and not serve content harmful to minors. The Free Speech Coalition brought a First Amendment challenge to that law, and they have very strong precedent behind them. We've seen these laws before, and they've been successfully challenged on First Amendment grounds as putting an undue burden on adult speech and chilling a whole bunch of lawful speech without being narrowly tailored to the extent of the harm.

And this challenge was dismissed this week, the court holding that, because the law outsources enforcement to private parties who can sue the porn sites, rather than to state officials like the AG, there can be no pre-enforcement challenge, and so the sites can't raise constitutional issues now. They basically have to wait until someone sues them, and then raise their First Amendment and constitutional arguments as a defense in that lawsuit.

We have seen this mechanism of private enforcement used before. Famously, the Heartbeat Bill in Texas did exactly this as a constitutional work-around to prevent challenges to that law. But to my mind, this just has to be wrong as a matter of First Amendment law, which is deeply concerned with the chilling effects of laws. The whole point of a lot of First Amendment doctrine is to prevent laws from chilling otherwise lawful speech, and we have demonstrated chilling effects of these laws in Utah. We see, for example, that Pornhub has blocked Utah users from accessing its services.

The Free Speech Coalition has appealed, so we will see what happens, but if this ruling survives, it's a pretty scary loophole in First Amendment scrutiny, 'cause what it means is that legislatures can pass laws that essentially chill speech, because companies don't want to face suits from private individuals, yet those laws can't be challenged on First Amendment grounds until a private individual actually brings a claim. So, yeah, this one's really worrying, not just because of what it says about the age verification law in Utah, but because of what it says about laws chilling speech much more generally.

Alex Stamos:

Yeah. I mean, I totally agree. I just want to double-check that you think this is true under 11-dimensional First Amendment analysis and not just hearing-

Evelyn Douek:

This is embarrassing, Alex. I'm stuck in the third dimension. I'm trying to get to the fourth dimension before teaching in the winter, but I don't know what happens at the 10th or 11th dimension on this one.

Alex Stamos:

We're at a critical moment. Have you seen Oppenheimer yet?

Evelyn Douek:

I have not seen it, no.

Alex Stamos:

Oh, okay.

Evelyn Douek:

You have?

Alex Stamos:

There's a subcurrent in it: he goes to Europe to study what they called at the time the new physics, which was quantum mechanics, which made no sense to traditional physicists, including, famously, Einstein, with "God does not play dice" and spooky action at a distance. And so he had to go to Europe, because nobody in America understood the new physics. And then he came back and established the first theoretical physics group doing that kind of work at Berkeley. And so maybe this is a chance.

Evelyn Douek:

So you're saying I should go to Europe? Is that what you're saying? Uh-huh.

Alex Stamos:

Well, I'm trying to figure out. I think you're going to go into Elon Musk's 11th dimension.

Evelyn Douek:

I see.

Alex Stamos:

And then come back and establish the first 11-dimensional First Amendment school at the Stanford Law School. I think that would definitely cement its position in the top five law schools in the country for decades to come.

Evelyn Douek:

Top five? Ouch. Look at you, down-ranking us. That one hurts. Wow. We're going to have to take that one offline. Extremely upset. I'm censoring you from making such outrageous comments about our law school.

Alex Stamos:

Ow, my rights, my rights. What's next, are you going to sue me under the CFAA?

Evelyn Douek:

That's right.

Alex Stamos:

Or maybe you should call Apple and have them take this podcast out.

Evelyn Douek:

That's another excellent option. Yeah, I'll find a Lanham Act claim. I don't know, I have some... These are all my 11th-dimension options. Or just wait here, listeners, and see what comes up by next week. And with that, this has been your Moderated Content-

Alex Stamos:

Oh, I think we still have to do a sports update.

Evelyn Douek:

Oh, yes. Will you let me?

Alex Stamos:

Yeah, absolutely.

Evelyn Douek:

Excellent. So it's very exciting, of course, because Australia has now proceeded to the next round with a solid two-nil victory against Denmark, so we're through to the quarterfinals. And Sam Kerr, and the famous calf we have been talking about for weeks now, got off the bench for the last 10 minutes or so of the game and seemed to be doing fine. Nothing dramatic happened, which is excellent; this is one of those situations where we didn't want anything interesting or exciting to happen with that calf on the field, so it's looking good. I'm obviously sad that the American team is out, because my dream final would've been a US-Australia showdown, but as it is, I hope that, as a second best, listeners of this episode will root for the Matildas.

Alex Stamos:

That's right. So congratulations to Australia. I feel like we have to do the sports update, 'cause it seems from our reviews and our feedback on social media that there are people who get all of their sporting news from this podcast, which, I'm just going to say, is a terrible, terrible idea. That's like getting all of your cybersecurity and content moderation news from ESPN, right? "I learned everything I know about current cyber events from ESPN." It's just as bad an idea to listen to us for sports. That being said, I also have a sports update, a sad one, one that in the long run is actually going to affect my goal of getting you to come to a US college football game: last week the Pacific-12 Conference, the Conference of Champions, completely fell apart.

So there's a big conference realignment going on. American colleges traditionally play in conferences that have some kind of physical proximity, the most famous being the SEC, the Southeastern Conference, whose schools are all in the South, and we had this Pacific-12 Conference. It's only been 12 for a little bit. It was the Pac-10 for a long time, which came out of the Pacific Coast Conference, which is a hundred years, a hundred years of Cal, Stanford, UCLA, USC, University of Washington, University of Oregon. And then we added Oregon State, Washington State, Arizona State, and Arizona, and then Colorado and Utah made it the 12. And slowly, starting with USC and UCLA, and I know it was USC that back-stabbed everybody else, schools left to go to the Big Ten, which is a Midwest conference. The closest school in the Big Ten at that point to USC and UCLA was Nebraska.

Evelyn Douek:

I'm not great at American geography, but I'm pretty sure that's quite far away.

Alex Stamos:

Right. That's not the next state over. It's not the next, next state over. It's not the next, next, next state over, right?

Evelyn Douek:

Right.

Alex Stamos:

It's a distance. And so that started the zipper effect, and it all came to a head last week with University of Washington and University of Oregon leaving, and now Arizona State, Arizona, and Utah leaving; Colorado had already left. And so there are now four schools left: Cal, Stanford, Washington State, and Oregon State. Which, funny enough, I've got to double-check the math, but I think still makes it the conference second only to the Ivy League in number of Nobel Prizes. So it's still pretty good, but that is not what you normally choose your football conference on, your Nobel Prize-winning physicists and economists.

Evelyn Douek:

I mean, speak for yourself.

Alex Stamos:

Yeah. And so this is actually really bad for West Coast college sports. One, because it's destroyed all these rivalries, but it also leaves Cal and Stanford in this really weird position, because all of these other schools are making tons of money from the TV networks, and the Pac-4 now is not going to have a deal. They had a deal offer from Apple TV. It was going to be all streaming, which I thought was pretty cool, and I certainly would've paid for that package, but apparently it was not enough money to keep the schools in, and it's all falling apart.

And so we're facing a weird future where nobody knows what's going to happen to Cal and Stanford. And it's a big deal, because football, and to a lesser extent basketball, subsidizes all the other sports. So think of all of our students who play sports. I guess you have fewer, 'cause law students generally aren't varsity athletes, but I have a bunch of students who are. If Stanford is forced into joining the Big Ten or the Big 12, the women's volleyball team is going to have to make it to Rutgers in New Jersey for a Thursday night match and then be back in class on Friday, and that's just a ridiculous future.

Football teams play on weekends and have the money to charter flights, but that's not going to happen for the smaller sports, and it's definitely not going to happen for the women's sports. And then it's possible Stanford won't be in a conference at all, and then how is Stanford going to pay for all these other sports? Now, at least Stanford has the money; Cal, my alma mater, does not. So it's really a sad outcome, because the two best schools in the country that play top-tier football, 'cause the Ivy League schools are technically Division I but not in the top tier of conferences, are possibly not going to be able to do it anymore, because they've been left out. It's really a crazy thing that's going on, and a really sad day for college sports.

Evelyn Douek:

Yeah. I mean, so otherwise, what happens? Do you just start at the semifinals if there's only four teams?

Alex Stamos:

Well, that's actually one of the funny things. So there is a legal corner here. There are a bunch of people forensically going through the contracts between the conferences and the NCAA, the overarching organization that runs all these collegiate sports, because traditionally the champion of the Pac-12, as it used to be, would have an automatic berth in the college playoffs. So if they can keep that, then these four teams are doing great, because all you have to do is beat three other teams and all of a sudden you're in the playoffs against undefeated Auburn, right?

Though it seems there might be a problem if you fall below six teams or something. So there are all these people looking at it like, "Great, all they have to do is get back above six." You just find two, San Diego State, Fresno State, or you add two Mountain West schools or something. So there's a bunch of interesting stuff going on on the legal side of trying to figure out how they can best game this, but in the long run, the real problem is that unless they join one of these bigger conferences, they're not going to get a deal with either the streamers or the traditional TV networks.

This is also, to come back to the internet, something that's actually relevant to this podcast: it's a symptom of the weird, weird place we're in, where college football still makes a lot of money for what people call linear TV, the cable channels, like Fox Sports, Fox has multiple different channels, DirecTV and such, and the different ESPNs. I forget how many ESPNs there are, but there might be eight. ESPN8, The Ocho. I don't know if you've seen that movie, but-

Evelyn Douek:

No, but more than there are teams in this conference, is what I'm getting at.

Alex Stamos:

Yes, right. Yes, exactly. There are definitely more ESPN channels than there are teams in the Pac-4. And they still make a lot of money, but everybody knows it's going to go away. So this is the last gasp of the contracts with ESPN and Fox. And now everybody wonders, with Apple, and, I mean, Netflix doesn't do sports, but Amazon does, what is the future going to be for college football? Are you going to have to buy a package for each different school? A lot of this is actually caused by the internet and the disruption it has brought to cable packages and traditional cable TV: the last gasp of these schools trying to grab onto money before the streaming wars disrupt it the same way they've disrupted Hollywood, which is what we're seeing with the strike.

Evelyn Douek:

Right. Well, it's that kind of cliffhanger that's going to keep our listeners to this podcast coming back for more content to see how this plays out.

Alex Stamos:

Right, because tune in here for all of your sports news. There's no need to go anywhere else if you just want to hear about college football realignment and the Women's National Soccer Team of Australia. I mean, what other sports do you care about?

Evelyn Douek:

Yeah, you just have to listen to 30 or so minutes of blah, blah, blah tech news first before getting to the good stuff.

Alex Stamos:

Perfect.

Evelyn Douek:

What better way to consume your sports? And with that, this has been your Moderated Content Weekly Update. This show is available in all the usual places, including Apple Podcasts, for now, and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino, Policy Analyst at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. Talk to you next week.