Moderated Content

MC Weekly Update 11/15: The Big Game

Episode Summary

Alex and Evelyn talk about generative AI and elections, renewed calls to ban TikTok, more reporting about India pressuring platforms, Supreme Court argument about whether politicians can block people on social media, and apparently there's some big sportsball game this weekend?

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Label Your AI

TikTok Tick Tock

A Trip to India

Transparency Please

Legal Corner

Sports Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Alex Stamos: One day when she marches in the Cal Band, all this work that we're putting into it will be worth it. If she goes and plays in the Leland Stanford Junior University marching band, then I will be...

Evelyn Douek: Disown her.

Hello and welcome to Moderated Content's weekly, slightly random, and not at all comprehensive news update from the world of trust and safety (and this week, not in fact weekly either) with myself, Evelyn Douek, and Alex Stamos.

Alex Stamos: Would you call that corporate puffery?

Evelyn Douek: Right, exactly.

Alex Stamos: That we call it weekly, that we can't-

Evelyn Douek: No guarantees or promises in our introduction.

Alex Stamos: Here comes Lina Khan for us, for calling it weekly.

Evelyn Douek: Exactly. Misleading and deceptive conduct on the internet. Okay, so since we last recorded, Alex, I have lost four wisdom teeth and sensation in my chin. And you have a new role, I believe.

Alex Stamos: Yes. So I think I would take my week over yours. I apologize for, as we were talking about before, you're having trouble feeling when there's stuff on your chin. I understand that as a man with a beard, a constant problem is the soup on the chin.

Evelyn Douek: Profound new sympathy for the struggle that men have every day.

Alex Stamos: On behalf of men. We appreciate that and we accept your apology, Evelyn.

Evelyn Douek: Thank you.

Alex Stamos: I'm glad that you're listening and learning.

Evelyn Douek: Every day, one step at a time.

Alex Stamos: Yeah, so as you mentioned, I've had an interesting week and thought we should cover it for the Moderated Content listeners just to make sure everybody understands kind of what my affiliations are and can be totally appropriate in their death threats or complaints of my conflicts. Yeah, so people probably know that I've been teaching at Stanford since 2018. And during that time I did some consulting by myself, and then in 2021 started a consulting firm with Chris Krebs, who was famously fired by Donald Trump by tweet, and our first client was SolarWinds back in January 2021. And we built up this little firm, got to about 15 people or so. We were having a good time, but small business ownership is a challenge in the best of times. And while we had lots of great things that we could do, waking up every day needing to make payroll is a little bit exhausting.

And we had an opportunity to join a bigger company. So last week our company was purchased by SentinelOne, which is a cybersecurity company that probably most of the listeners have not heard of, which is great for this podcast. They don't do any content moderation. They have no user-generated content. So unlike if it was bought by a Meta or Google or something, obviously I wouldn't be able to talk about this stuff. It is a pure cybersecurity company and we're really excited. During this entire time I've been working on incident response, I've been working with companies that get breached. I've had a bunch of situations and I really do enjoy that work. And now being part of this company's going to be great. I also get to be Chief Trust Officer, which is an interesting, emerging new kind of role inside of companies that pulls together policy compliance, privacy issues, AI risk management issues, and traditional cyber and information security.

So it's a great role, but everybody should know I'm still teaching here at Stanford. I am an unpaid adjunct and lecturer, which, being unpaid, means I'm not paid that much less than a number of other people who teach classes at major universities. And I'll continue to be teaching here. In fact, I'm teaching today, I'm wearing a Cal tie, as I'm sure we'll discuss, for our big game week lecture. And you'll continue to hear me here on Moderated Content until it becomes completely inadvisable for the officer of a publicly traded company to be interacting with you every week.

Evelyn Douek: Excellent. Until the standard proviso, until this podcast gets us into legal trouble, we will continue. So start the countdown. Well, congratulations. That's huge news.

Alex Stamos: That's good. It's nice to be part of a big company again, got to meet all these nice people in Florida last week. They had this big conference in Boca Raton, and so it was a bizarre mixture of IT nerds in polo shirts, and then Artie and Marge from Hoboken who were down for the week. Boca Raton is just this bizarre place. The bars all close at 10 o'clock because-

Evelyn Douek: That's my kind of bar. It sounds great.

Alex Stamos: So yeah, anyway, it was great to meet people and it's cool to be part of a big company where you have that backing, but you also get to work with really competent people.

Evelyn Douek: Excellent. Well, so it has been an extremely busy couple of weeks for you because that is not the only thing that you've done in the last few weeks. You were also part of the bipartisan AI insight forum, the fifth one on elections and democracy that was hosted by Chuck Schumer. So tell us about that. What was it? What did you speak about and what were your takeaways?

Alex Stamos: Yeah, so this is the fifth forum, as you said. Schumer has been holding these closed-door... They're not hearings, they are discussions where you have a bunch of experts, about 15 to 20, sitting in a big arc, and there's generally one Dem, and Schumer, and one Republican, maybe two Republicans, sitting there in the arc. And then you have an audience full of staffers, no media, no cameras, and then senators just walk in randomly. It's really incredibly distracting. So first, the format is very strange, because here we are on the dais and they're in the audience, and generally, in the hearings I've been in before, it's the opposite. And Chuck Schumer just kind of throws out, here are the things I want to talk about, and then calls on specific people in some cases. In other cases, if you have something to say, you raise your hand. Then the senators sitting in the front row can throw things out. And one, it's actually the most functional I've ever seen Congress. So there's a lot of controversy. These things are closed doors. There's no media.

Evelyn Douek: Low bar, but okay,

Alex Stamos: Yeah, well, there's no media, there's no cameras. And that unbelievably improves the quality of the questions, the quality of the interactions, the fact that there was no political posturing. It was a bipartisan discussion the entire time. Now obviously the Senate always has a little bit more bipartisan comity, and you didn't have Tuberville or Cruz or some of the people who are more firebrands show up, but you had an equal number of Republicans and Democrats. All of them were really worried about AI; in this case it was specifically about the impact of AI on democracy and elections. And they asked great questions and we really got into it. And so it was great, and I've got to say, senators are so much better when they're not on C-SPAN and they're not trying to do a 15-second sound bite that they can fundraise off of or they can get on MSNBC or Fox News.

And so I thought it was great. They asked great questions. There were people there from the big companies talking about what they're doing from a protection perspective. There was a lot of talk about content verification. And there were a number of people from the states: the Secretary of State of Michigan and the Lieutenant Governor of Utah. So again, bipartisan, two women who run their states' elections, both talking about the challenges they're facing as state officials to deal with these problems and how much help they need from the federal government. My focus is really on the adversarial side, and so a lot of what I talked about is one of my fears with generative AI: it allows each of the different kinds of troll farms to move up in the Premier League rankings. They all get to step up into the next league up.

And so you can think of a terrorist group like Hamas can be as good as the Iranians were a couple of years ago, and the Iranians can get as good as the Russians were and the Russians and Chinese go into a whole new stratosphere because now you can generate content easily, cheaply, in a language that your people do not speak as a native language necessarily. And it completely changes the economics. And that was what I was trying to get them to understand was there's all this talk about specific deepfakes and everybody's just thinking there's a deepfake of Joe Biden or a deepfake of Donald Trump. But in those situations, every media outlet would be looking at it, verifying it, having experts on and stuff. What really worries me is the economics of not necessarily individual deepfakes as much as you no longer... To make a hundred thousand pieces of content in the 2016 election, Yevgeny Prigozhin, God rest his soul. Poor man.

It's amazing how unreliable planes are in Russia sometimes, just shocking. Windows and planes are just really dangerous things in Russia. Yevgeny Prigozhin was a billionaire and could afford to fill a building full of young people in Russia to go create content. Now if you've got a couple of people who just read English well enough to edit and to prompt, you could do the exact same thing without all of those people. And so I think the economics is a big deal for me, and the thing that really worries me there is local and state elections, House elections. A deepfake about a presidential candidate is not going to have any real effect, but in a situation where you have a member of the House who is winning by three or four thousand votes, content that's being pushed locally, or even deepfakes, is going to be a big deal, because with the hollowing out of local media, the chance that somebody catches that and figures it out on Nextdoor or whatever is not that great.

And so I made that point. There were a lot of great questions from both Democrats and Republicans. Senator Young, I was very impressed by him. Senator Schumer was very engaged; for a guy his age, he's really with it on AI. There were a lot of Brooklyn, New Jersey jokes being made. It was great. It was a great thing, and one of our students was in the audience as a staffer, which is just amazing to see. She came up to give me a hug, and at first I was like, "Who is this?" And then, "Oh, I know. Oh my God." It's great. So it's just incredible to see our students go out and have real impact in the world.

Evelyn Douek: Amazing and rare to hear so much optimism and positivity coming out of a visit to the hill.

Alex Stamos: Yeah, I mean nothing's going to happen.

Evelyn Douek: Right, that was going to be my next question.

Alex Stamos: The meeting was great.

Evelyn Douek: Unfortunately they leave the room, the cameras turn on, and all of the rest of the stuff resumes, I suppose.

Alex Stamos: I mean, I think from the experts there was a push for bipartisan... A lot of this stuff comes back to the fact that we don't have a privacy law, especially when you talk about the use of PII and such. Something I was talking about was more explicit defamation rules, not just for politicians but for any normal person, if you have AI create fakes about you. A lot of people were supportive of PATA, the Platform Accountability and Transparency Act, just so we can know what's going on. That doesn't stop AI-based disinformation, but at least you can have some kind of transparency and maybe somebody will point out that something's fake.

So there definitely are bills out there; the written statements are available and we can link to mine as well as all the other ones. And there are some great ideas. Congress can't even confirm four-star generals right now, and it looks like we're barely keeping the government open, so the ability for the group to act on these incredibly complex issues is certainly suspect. But I was hopeful, and at least there are things that Republicans and Democrats agree on behind closed doors, and so it was nice to see that.

Evelyn Douek: Great. Okay, so we're going to be left in the same situation that we've always been in, which is that Congress isn't really going to act and we're going to be in the hands of these private companies and whatever rules and norms they develop for dealing with a lot of this stuff. And we have seen in the past week, apparently it's generative AI policy rollout week, and we're starting with Meta, which released its new policy this week.

It's saying that starting in the new year, Meta's going to require political ads to disclose if they use AI in a material way to depict a person looking as if they're saying or doing something that they didn't do, et cetera. This feels like a step forward, I guess, but just keep in mind that meanwhile, politicians' ads, these same ads, aren't fact-checked on Facebook, and there was a story in the Wall Street Journal this morning saying that now Meta is rolling back its policies and is going to let political ads and politicians question the legitimacy of the 2020 US presidential election in their ads as well. So as long as you don't question the legitimacy of the 2020 election with six fingers, you're good to go, is the policy.

Alex Stamos: And that, I think, is a big step back. I mean, it's great that they're starting to have these rules. I think you need a complete ban on the use of AI generation in political ads and a bunch of other ads. They had pharmaceuticals and such, I think it's all the regulated stuff. And I think you need to ban it because I don't think, as humans, we are well-equipped to see an image that is photorealistic, that shows somebody doing something terrible, and just because it's marked, "This is AI generated," all of a sudden disregard it. I think it's something that can stick with you, even if you know, and I am really afraid of the door they've opened here. And allowing people to say the election was stolen: this is paid speech. You are paying them to amplify your speech.

From my perspective, in the triangle I use all the time, this is the situation in which people have the least free expression rights and there's the most responsibility for a platform that is profiting from amplifying the speech for money. I think it's totally appropriate for them to have editorial responsibility here and say, "The 2020 election was not stolen. We are not going to allow you to give us money to amplify the big lie." Totally inappropriate. I think a big step back for Meta this week.

Evelyn Douek: Yeah, it's the money and it's also using their tools to powerfully target these and spread them in the most effective ways. I completely agree that the level of responsibility here is just completely different to organic content and it's a shame to see them step back. YouTube also rolled out a new policy this week on generative AI, which again gives it the big green light. Basically, it is also requiring creators to disclose when they have used AI generated content on the platform and then does reserve the right to remove certain videos in certain circumstances. And also I think this is an interesting policy that they have where people can request the removal of someone else's manipulated video if it includes them. Now that is putting, I guess in colloquial language, the responsibility on the victim to identify videos that are causing harm, but that might be a really effective way of doing it. Those people are the people that are going to know, and so it's maybe a thoughtful way of proceeding here.

Alex Stamos: It's like the creation of a little bit of a right-of-publicity kind of thing. Which I think it's totally appropriate for us, congressionally, at the federal level, to have, right? That you should not be allowed to put me in AI-generated stuff, and if you do so, I can object. To me the free expression issue here again: when you see somebody playing Donald Trump or Joe Biden on SNL, you know it's comedy. You know. If you're watching a photorealistic Donald Trump say stuff that he never said, even if it says at the bottom, "This is AI generated," I really do think that's going to stick with people. And I think, again, we're talking about it in the political realm, but there are so many situations where that's going to be harmful to individuals, that creating some kind of general rule, beyond just the private law of the companies, I think is going to be pretty important. Would you see it as First Amendment permissible for Congress to create a rule that I could sue somebody if they created a photorealistic version of me doing something that I find defamatory?

Evelyn Douek: If it's defamatory, I think that there's going to be room to move there. I think it's going to be tricky. I think a lot of these things are going to be tricky in terms of navigating the First Amendment. It's sort of uncharted terrain, but there are going to be ways in which we can apply these old principles. These, like you say, invasions of privacy and defamatory rules to these new materials. So we'll have to see.

Alex Stamos: Well, that's great. So there's plenty of stuff you can write to then train the next generation of GPT-4 powered lawyers.

Evelyn Douek: Right? That's right. Train the model. Okay. So speaking of the ways in which these tools can be used to harm, I mean obviously the number one way that a lot of these generative AI tools are used is in non-consensual pornography or deepfake porn of people. So now we have an enemies of progress update, which I can't do the voice for. So Alex.

Alex Stamos: Enemies of progress update.

Evelyn Douek: Wonderful. All right, so take it away.

Alex Stamos: Yeah. So this week it turns out that Andreessen Horowitz, the people who have called anybody who works on trust, safety, security, or any kind of tech risk "enemies of progress" who are effectively murdering people in the future because we're slowing down the development of AI, invested in Civit.ai. Civit is a platform that our team at the Stanford Internet Observatory is aware of because it came up in our work related to computer-generated child sexual abuse material. So, child porn, effectively, that is generated by AI. Now, Civit has policies against children being involved and is not hosting the worst stuff; the worst stuff is happening on secret Discord groups and such.

But there is a large community of people who use Civit to generate or to store non-consensual naked imagery of other people, including in some cases minors. And it really just comes down to some real definitional questions of whether something is obscene or not, and in some cases of whether this is really CSAM. So I thought Andreessen Horowitz's little screed there was going to be mostly theoretical, but the fact that they're directly investing in a company that is encouraging and empowering some of the worst uses of AI on the planet is really consistent with thinking that anybody who believes teenage girls should be free of naked images of themselves being generated by AI, that those people are enemies of progress.

Evelyn Douek: I mean, it's nice of them to make the value system so stark, and to give us, in where these value systems lead, just a really clear example of how that panned out in practice.

Alex Stamos: And they do all kinds of other stuff, but just Civit has very poor trust and safety, which I guess was seen now as a benefit to Andreessen Horowitz.

Evelyn Douek: Okay, moving over to our TikTok Tick Tock. We haven't checked in with TikTok recently, but it has been in headlines again over the past couple of weeks. There's been a burst of new calls to ban the platform in the US over allegations that it is boosting anti-Israel and pro-Hamas content in a biased way. Rep. Mike Gallagher of Wisconsin said the app was brainwashing American youth into sympathizing with Hamas. Marco Rubio accused Beijing officials of using TikTok to spread propaganda to Americans. Josh Hawley, I could go on. And a lot of this was picking out certain hashtags and saying that they had higher views than other comparable hashtags with the exact same words on the other side, hashtag free Palestine versus hashtag stand with Israel, for example. TikTok has come out and publicly denied these allegations in a blog post and faulted the reporting that led to them, again sort of cherry-picking two hashtags to refute the claim, saying, "Look, hashtag stand with Israel has 1.5 times the views of hashtag stand with Palestine."

I don't find this a particularly compelling way of addressing these claims. There are obviously lots of other hashtags that people are going to be using to discuss this conflict, so picking out two of them and saying, "Look how many views we have on this one versus this one," doesn't seem like a great way of settling this debate. Opening the platform, getting transparency, having researchers look into this would be good. At the same time, I don't find the claims particularly compelling. I don't think there's particular evidence for the claim that TikTok is slanting the discourse here. And it certainly may be the case that people are conflating pro-Palestinian content with pro-Hamas content, as has happened a lot over the past couple of weeks, and also forgetting the fact that many younger people especially are more sympathetic with those viewpoints.

So whatever else, I think the thing that this shows is that when these calls to ban the platform coincide with the idea that the platform is spreading content that the politicians in question don't like, I really think that shows that this is about viewpoints, and it is going to be a massive First Amendment problem, as opposed to, for example, as we've talked about before, all of these sort of legitimate privacy concerns and data concerns and things like that. But when it's because of the speech on the platform, that's going to run into the First Amendment problems much more headfirst.

Alex Stamos: Yeah, I totally agree. So I understand where these people are coming from. As a parent of teenagers, there have been a surprising number of conversations in my house about this conflict, and I am seeing that teenagers, and probably the students we teach here on campus, are much more pro-Palestinian than I am in this kind of conflict, I think. And so having these discussions where I try to talk to my son about what he has seen and heard, and how he needs to internalize empathy for innocent people who are dying while also being very careful to accurately consider the political and historical context here, is really important.

And he keeps on bringing me videos where that has been very effective, very propagandistic videos that, in some cases, what I've seen as being popular really does erase the humanity of Israelis and Jews overall. It is very borderline up to explicitly antisemitic and uses phrases like, "From the river to the sea," which is a subject of controversy right now and which I personally see as eliminationist and not appropriate. I don't see that as something where, if you chant it on a college campus and say, "Oh, I should not be seen as antisemitic," I think you're being a little bit foolish. And people who listen to this podcast might disagree, and you're welcome to unsubscribe. That is your free expression right, your free association right.

Evelyn Douek: People on this podcast might have more complicated views.

Alex Stamos: Let's talk about that. So I understand where this is coming from, because personally I see this content. But the places where I agree with you on the TikTok side are, one, the content is all over the place, and a bunch of the videos that I've seen are on YouTube, where things are being pushed much less algorithmically. Second, I think in this case the TikTok algorithm is just accurately reflecting what young people are saying. Now, if you're going to criticize TikTok for this, it is extremely good at giving people what they want to see.

So if you feel like you want to watch these videos that reinforce one side, then you definitely can create, through your interactions with the platform, an echo chamber where you do not get multiple sides of the conflict. But that's kind of the expected model of these platforms, and it also comes down to a really difficult discussion, because people know that, right? Young people know that TikTok is giving them what they want to see. They're not being manipulated; they are intentionally using it because it's doing that. And so to me it's a very difficult question whether or not you can blame algorithms in situations where people are choosing to use a platform that specifically feeds them this content.

Evelyn Douek: Yeah, I mean I think that there's obviously a lot of propaganda on these platforms at the moment, and I wouldn't say that that's exclusively confined to one particular side of this conflict. We're seeing a lot of propaganda on the other side and a lot of dehumanizing content, and if you open Twitter these days, certainly that's a lot of the content that I'm getting as well. One thing that I think has been really interesting is how few content moderation debates there have been around these issues compared to in the past. My hypothesis is that if this had played out three years ago, there would've been roiling debates over whether platforms should be banning, "From the river to the sea," for example, which I appreciate can absolutely be used in contexts where it is antisemitic. I do not believe that for absolutely everyone that uses it, and in all contexts, it comes with those connotations.

But I do also recognize the very real harm that people experience when they hear that phrase. And so I think there would've been a big debate about why this phrase is all over every platform. And we're kind of not having the same debates that we had a couple of years ago about why a platform's letting this content proliferate, which I think shows something; I don't know whether it's just disinterest, or an evolution of thinking about the role of platforms in these contexts, or whether this is just such a contentious, fraught area where we really think that this shouldn't be an area where platforms should be, I guess, picking sides in the same way that they may have in other contexts. I don't know.

Alex Stamos: And I am not saying, "From the river to the sea," should be banned. I've never said that. I don't believe it should.

Evelyn Douek: Sure. No, that wasn't what I was suggesting.

Alex Stamos: I do understand that as a middle-aged parent, seeing that in the videos that my kids are being immersed in, that that would make you upset.

Evelyn Douek: Right.

Alex Stamos: But it's also, this is the difference between Twitter and TikTok in this: when you look at the content, and you're right, the propaganda on Twitter is much more quote unquote balanced, because it is being massively manipulated by troll farms for sure. You have just a huge flood of content that is both pro-Palestinian and pro-Israeli that is clearly coming from fake accounts, that is copypasta, that is being organized centrally. And that's not true on TikTok. When you watch the TikTok video that's got 5 million views, it is definitely a well-known young person who is recording the video on their phone. It is not fake. They are who they say they are. It is not copypasta. It is not part of coordinated inauthentic behavior.

It is just popular. And I think you're right, it is interesting that we haven't had more debates, but I think part of it is also that no matter what your side is here, you have a quote unquote side. This thing has made it much more clear that when you call for content moderation as the tool for controlling these conversations, it's going to affect the arguments that you are for. In these other situations it's been much more lopsided, and here you're close to almost a 50-50 split, depending on how you ask the question, in Western societies, where most of these content moderation discussions happen (Europe is the place where the content moderation discussions immediately happen under any circumstance). The fact that people are so split, I do think, makes it less likely for people to call for moderation, because it's clear how that would be applied to the side that you're quote unquote supporting.

Evelyn Douek: And I hope, maybe naively, that one of the things that we can take from this moment and these debates is a renewed belief in the importance of free speech and the commitment to free expression even when it is really hard, for exactly this reason that you're saying: at the end of the day, often if you have hate speech rules, they are used to suppress minority viewpoints or other viewpoints, and it's extremely hard to draw the lines.

Alex Stamos: Yeah.

Evelyn Douek: Nepal, however, does not have a First Amendment. So they're taking a different approach and have just banned TikTok in the past week because the social media app has been consistently used to share content that disturbs social harmony and disrupts family structures and our social relations. So there's an alternative approach for those thinking about policy solutions, I guess. Sticking in the region. Nothing massively new here, but I do think we should draw attention to some reporting from the Washington Post this week about India because we haven't talked about India in a while.

And it is one of the overarching themes of this podcast: how important it is to pay attention to what's going on in India. And the Washington Post has had this series on what's been happening in India over the last couple of years, and its report this week detailed previously unreported meetings between company executives and Indian officials that convened every two weeks to negotiate what could and could not be on their platforms. And just the dramatic increase in the number of requests from officials over the last few years, going from a handful of tweets being asked to be removed at each meeting to entire accounts, running in the hundreds. And especially the leverage that the government got from these quote unquote hostage-taking provisions once the companies were mandated to have staff locally in the country, and the additional leverage that this gave them to get the companies to accede to their demands.

It details what we already knew, which is that X, previously Twitter, has completely caved in the last few years, and in October, for example, blocked the accounts of two US-based groups, Hindus for Human Rights and the Indian American Muslim Council, both nonprofits advocating for religious freedom. One of the things this report noted is that company executives were giving the Washington Post these stories on background, but Indian politicians were also confirming all of it, because to them it was just boasting: "Look how effective we've been at bringing these companies to heel."

Alex Stamos: Yes. No, and you keep on seeing this in India where politicians and political actors will just openly talk. On the record, there was a takedown on Facebook of accounts that they attributed to the Indian Army. And then the Indian Army complained in the newspaper, "Facebook took down our accounts. We're angry." And then Facebook ended up putting some of it back up. So it worked. So yes, you keep on, it's just out there in India of like, oh yeah, we're doing these kinds of things and we're totally okay with it.

Evelyn Douek: Proud of it, in fact.

Alex Stamos: Yeah.

Evelyn Douek: Staying in India, we had a listener write in and ask to get your comment on a story in the BBC news this week, which we can link to, saying that a whole bunch of opposition politicians in India were getting threat notifications from Apple that there were attempted hacks on their phones. So curious for your thoughts on this.

Alex Stamos: Yeah, so Apple has now followed basically everybody else in the tech industry in starting to notify people when they see situations in which individuals are being targeted by high-end actors. So overall, I think this is a great step forward. Traditionally, as we've talked about multiple times, Apple has done a lot for the Chinese government, and there is a long and very embarrassing history for them of Apple users such as Uyghur minority members being targeted by the state security services of the People's Republic of China, and that being uncovered not by Apple, but by Google or Citizen Lab or other groups, and Apple then quietly patching it without making any public statement. So the fact that they are willing to tell these people, "You're being targeted," is a huge step forward. It's not a step that they've taken in China, and the fact that they're doing it in India does seem to suggest they're finally putting their foot down and won't bend to India the same way.

That being said, they did not provide any attribution here, and if you are being told by a big company, "You are being targeted by a state-sponsored actor, but we don't know which state," that's kind of BS. From an attribution perspective, if you know well enough that somebody is getting hit by actors of that level, then you're going to have some level of attribution you can provide. And so I expect that that is the compromise they've made here: they're not going to say anything that can be used against them by the government in India, while still trying to provide people with a warning. Hopefully they move more toward what we've seen from Google's Threat Analysis Group and become a little more open about this.

Evelyn Douek: At the same time, it's like, we'll give you three guesses who it could be, the article details how it's like there's a whole bunch of opposition politicians and no one in the government has received any of these notifications apparently.

Alex Stamos: Okay, let's start. Yeah.

Evelyn Douek: It's like, we could start with the countries beginning with A, B. Yeah.

Alex Stamos: Yeah. Angola, no.

Evelyn Douek: That's right.

Alex Stamos: It starts with I. Ireland?

Evelyn Douek: We've got to watch those Irish. Yes. Okay. And I guess we should talk briefly about combining our Twitter corner and news this week about the first batch of transparency reports under the Digital Services Act, which have been submitted in the last month. We can link to a place where they're all publicly posted, and I've been sifting through them and seeing what I can gather. I mean, one of the things that these transparency reports highlight is the unsurprising news that X is devoting far fewer resources to content moderation than its peers, which is a huge surprise.

So for example, X has 2,294 content moderators in the EU, compared to 16,974 at YouTube and 6,125 at TikTok. So roughly three times the number of content moderators at TikTok and seven times at YouTube. That comparison makes pretty stark what we already knew; there's not a whole bunch of otherwise surprising stuff in the transparency reports. I was really shocked to discover that TikTok says it only has 3.5 million users in Germany. I'm pretty sure that's a typo, because it's reporting that it has 20.9 million users in Denmark. I think they have just switched Germany and Deutschland there in terms of-
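A quick sanity check of those ratios, a minimal sketch using only the headcounts quoted above (the rounding is mine, not from the reports):

```python
# EU content-moderator headcounts as reported in the first round of
# DSA transparency reports (figures quoted in this episode).
moderators = {"X": 2294, "TikTok": 6125, "YouTube": 16974}

def ratio_to_x(platform: str) -> float:
    """Moderators reported by a platform per moderator reported by X, rounded."""
    return round(moderators[platform] / moderators["X"], 1)
```

On these figures, TikTok reports about 2.7 times as many EU moderators as X, and YouTube about 7.4 times as many.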

Alex Stamos: Denmark and Deutschland?

Evelyn Douek: Yeah, Denmark and Deutschland. Exactly.

Alex Stamos: Yeah, that's a little bit a mistake.

Evelyn Douek: Yeah, I was like Googling, why is TikTok so unpopular in Germany, comparatively?

Alex Stamos: That's okay. Because if there's anybody who feels okay about, they're okay about being part of Germany, it's the Danes.

Evelyn Douek: Yeah, right, exactly. It's not at all politically risky to make that.

Alex Stamos: Yeah, there's no historical resonance there for them at all. Yeah.

Evelyn Douek: Other than that, I mean, Apple, weirdly, instead of calling these content moderators people, calls them "human resources dedicated to content moderation." There are 607 in the European Union, and do you have a guess of how many apps Apple might've removed for objectionable content in one month from the European Union?

Alex Stamos: Oh, I did not see this, so I'm going to guess 500.

Evelyn Douek: Five.

Alex Stamos: Five?

Evelyn Douek: I thought you were going to nail it, actually. I was like, once you started speaking, I was like, "Whoa." This is just objectionable content, which is a subset of the number of apps. There's a whole bunch removed for IP violations and fraud. Yeah.

Alex Stamos: Right, because when I think of the normal, the vast majority of Apple takedowns are going to be privacy violating. Here's a flashlight app that happens to pull every single piece of data off of your phone and sends it up. Here are apps that are using undocumented APIs, which, okay. So five for objectionable content.

Evelyn Douek: Yes.

Alex Stamos: Do we know the five?

Evelyn Douek: No, I mean, but it would be great if we did, I imagine. Although we could go look at the, I haven't done the follow-up and gone and checked the DSA transparency database that we've talked about on the podcast before.

Alex Stamos: And do you think those were removed only in Europe or removed globally, I wonder?

Evelyn Douek: Another great question. I don't know the answer.

Alex Stamos: Because you could figure that out. The people who track the Great Firewall do a really good job of pulling down the... If you call the right web services on Apple's side, you can get the entire list of every app that is available in different regions by calling from different regions, and then you can compare them, and that's often used to figure out which apps the Chinese ban. It'd be interesting actually to do that diff between the US and Europe and see if there are content moderation differences. That sounds like a great project for somebody who has grad students.
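The region-diff idea Alex describes can be prototyped against Apple's public iTunes Lookup endpoint (a real API; querying it per storefront is one way to probe availability, though researchers doing this at scale would enumerate full catalogs). The comparison itself is just set arithmetic:

```python
import json
from urllib.request import urlopen

# Apple's public iTunes Lookup API; `country` selects the storefront.
LOOKUP_URL = "https://itunes.apple.com/lookup?id={app_id}&country={country}"

def is_listed(app_id: int, country: str) -> bool:
    """True if the app resolves in the given two-letter storefront (live network call)."""
    with urlopen(LOOKUP_URL.format(app_id=app_id, country=country)) as resp:
        return json.load(resp).get("resultCount", 0) > 0

def storefront_diff(catalog_a: set, catalog_b: set) -> dict:
    """Apps present in one storefront's catalog but not the other."""
    return {"only_a": catalog_a - catalog_b, "only_b": catalog_b - catalog_a}
```

Given full app-ID catalogs pulled from two regions, `storefront_diff` yields the candidate set of region-specific removals, which you would then inspect by hand to separate content moderation from IP, fraud, or licensing takedowns.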

Evelyn Douek: If only we knew some. All right, and then penultimately, in our legal corner: two weeks ago, the Supreme Court heard argument in cases that we've talked about before on this podcast about whether politicians can block people from their social media accounts. I know, Alex, how deeply you care about this First Amendment issue, how important it is, and how close it is to your heart.

Alex Stamos: This is the sound of my head hitting the microphone.

Evelyn Douek: I too remain surprised that people litigate these all the way to the Supreme Court, that there are people who just really want to troll their elected officials and that there are elected officials who just really don't want to unblock those bastards. It is amazing, the lengths that people will go to.

Alex Stamos: I think most of it's just fake escort spam. Where like, "No, I really am into this older gentleman and want to meet him in person." Yes.

Evelyn Douek: Yeah, so actually, jokes aside, the Garnier case, for example, was about two elected trustees of the Poway Unified School District, and they'd blocked two parents in the district who had posted a bunch of criticisms of the trustees, including uncovering a bunch of improper practices. And shout out to the great Pam Karlan of the Stanford Supreme Court Clinic, who argued this case at the Supreme Court and did a fantastic job, in my opinion. I agree that each of these individual cases may not feel like the most important thing in the world, but I do actually think they raise some pretty fundamental issues about the limits of what government officials can do online.

And there were arguments about, for example, if a private person sets up a hotline... Sorry, if a public official uses their private resources to set up a hotline to call for emergency services, because they're only using private resources like using a Facebook page, could they act discriminatorily and exclude, for example, all Latinos from using this hotline to call for emergency services? Now, when your argument is, "Yes, I can exclude all Latinos from using this hotline to call for emergency services," you are losing, I think.

Alex Stamos: Is that a good rule of thumb in the Supreme Court?

Evelyn Douek: Yeah, exactly. It's pretty, I hope so. I really, really hope so.

Alex Stamos: I can stop protected class X from getting access to government service Y.

Evelyn Douek: Right, because I'm using my personal funds to set up this hotline.

Alex Stamos: At least since Earl Warren, Cal alum, which we'll talk about-

Evelyn Douek: Oh, yes.

Alex Stamos: That's been a problem, right?

Evelyn Douek: Nice Easter eggs all throughout this. So look, I think that the court overall is by and large sympathetic to the idea that government officials can't block willy-nilly when they're using their social media accounts. This doesn't mean that they can't block at all if they're being subjected to certain kinds of harassment and things like that, that they still could block that kind of thing, but certainly not in a discriminatory way and cutting out particular kinds of viewpoints. But there's going to be lots of consternation about what exactly the test should be when you can tell whether a particular page or account is sufficiently official versus personal, all those kinds of things.

Alex Stamos: Jesus.

Evelyn Douek: Yes. So that's what to watch for when this judgment comes down. There was this great moment where Chief Justice Roberts, who as the Supreme Court is getting up to speed on the internet, because it's got a bunch of big internet cases this term, so they're really obviously thinking really hard about this and educating themselves. Chief Justice Roberts described webpages or social media accounts as, "Just, I don't know, the gathering of protons or whatever they are."

Alex Stamos: Wait, he said protons?

Evelyn Douek: He said protons.

Alex Stamos: Okay. We need to have impeachment proceedings immediately.

Evelyn Douek: I mean, not technically wrong in the sense that everything is the gathering of protons if you think about it hard enough.

Alex Stamos: No.

Evelyn Douek: But does it answer the question in this case? It does not. He says, "It's a machine and somebody else's machine can pick it up if you want."

Alex Stamos: They can pick up their protons.

Evelyn Douek: Yeah, exactly. From one gathering of protons to another gathering of protons. I would say that is not the best way to describe the internet.

Alex Stamos: Look, there are legitimately interesting questions here. But the fact that this is by far, by far, the topic of internet speech regulation that has received the most billable hours, the most legal mind power, that this is the thing Pam Karlan is arguing about, whether or not local school board members can block trolls on Facebook, is just such an unbelievable waste of brain resources. It drives me insane. There are so many difficult speech balancing issues on the internet right now. This is not in the top 50.

Evelyn Douek: So again, for the second time this podcast, we're going to have to disagree. I do think that there are important free speech concerns here. I do agree, again, that it baffles me that we spend so much time talking about it, and I guess the NetChoice cases and the jawboning cases are going to start to even out the balance of billable hours across different First Amendment issues on the internet, at least. But yeah, no, I do think it's bad form if government officials set up an account and then just exclude the people they don't want to hear from. That gives me my First Amendment heebie-jeebies. That's just the technical term. Okay, you want to move on, because every second we spend on this topic is just more time on a topic that you care so much about.

Alex Stamos: Can we do a special episode?

Evelyn Douek: Yeah, just on this topic.

Alex Stamos: Can we interview the school board members who blocked the trolls?

Evelyn Douek: We actually probably could.

Alex Stamos: Interview the trolls who want to... Yeah.

Evelyn Douek: I've met them. When I mooted Pam for this case, they were on the moot, and they were lovely. They really care about the policy in the school district.

Alex Stamos: Oh, Jesus. Let's have an entire alternate podcast that every week we're discussing, can you block trolls if you're a school board Facebook account?

Evelyn Douek: Listeners, write in if that's what you want to hear. I'm sure we'll be overwhelmed with letters requesting that we please make this limited series, or unlimited series, given how much there can be to cover. Okay. We've teased it a number of times. Alex, on my ride in this morning, I saw a whole bunch of tents with the sign "Big Game" over them. It seems like this is something I should care about. What's going on?

Alex Stamos: This weekend, ladies and gentlemen, boys and girls. This weekend is the Big Game, the annual challenge between the University of California and the Leland Stanford Junior University, first fought over rugby, now fought over American football. It's a big game for these schools. As far as college football rivalries go, Cal-Stanford is one of the most congenial, right? It will be at Stanford this year, which means there's much better tailgating, because Stanford just allows people to park wherever they want on the grass and the fields and such. It's much harder to do in Berkeley, where the Berkeley PD will not just boot your car, but throw a Molotov cocktail through the front window if they find you illegally parked. But there'll be great tailgating, and you'll see Cal and Stanford fans drinking wine together, literally having wine and cheese and grapes and being like, "Oh, you old so-and-so," and such.

So it is not the greatest football rivalry, but a great relationship between these two schools. Also, the entire history of Silicon Valley is people from these two schools working together to create these companies. You look at Sun, which is theoretically Stanford University Network: Scott McNealy comes from the Stanford side, but the operating system they put out, on which they built their whole company, was based upon the Berkeley Software Distribution, and a number of Cal alumni actually built the tech. And Apple, HP, all these companies have alums from both, and so it is a fun game. It is especially fun in years like this year, when both teams kind of suck, because it allows you to be excited, since it might be a very close and competitive game, and it also means the players have something to look forward to, even if they're not going to a bowl or competing in the college football championship. Neither team is anywhere near the championship, and only Cal could possibly go to a bowl, and even then they have to win everything.

It is also a big game for Stanford, because there's a long history of the Big Game ruining the season of the higher-ranked team, which is Cal this year, and Cal could possibly go to a bowl if Stanford loses. So it is a big game. I do recommend any Stanford students who are listening to go. I'll be saying that in my class today. I'm wearing my Cal tie. Before I was a fake professor at Stanford, I was a four-year Cal bandsman as well as a sailor on the Cal sailing team. So I will be-

Evelyn Douek: Is that safe on campus? You said it's a gentle rivalry, so you're okay.

Alex Stamos: It is a gentle rivalry.

Evelyn Douek: Yeah.

Alex Stamos: No, what I find unfortunately is that Stanford students mostly don't know about the rivalry.

Evelyn Douek: Right.

Alex Stamos: And so at the end of my lecture today, I'm actually going to play the Lights Out March, which is the song the Cal band plays to finish football games. It includes the fight song, and I bet half of them will believe it's Stanford's fight song, right? So yes, I do recommend students go and enjoy it and get out there and support your team and the student athletes, who, whether or not they're winning, are trying really hard to represent the school. But it's going to be a big week. It's going to be a fun week, and any real Stanford professors who want to put a friendly wager on it, please contact me. I'm looking for Mike McFaul in the halls of Encina. We've bet some things before, so we'll see if I can find Mike and do something with him. But how about you, Evelyn? Do you want to get into it?

Evelyn Douek: Yeah, I clearly have very strong and informed views about this conflict. Definitely knew that the game was on.

Alex Stamos: You totally knew, before you saw the tie.

Evelyn Douek: I mean, I saw a sign saying, "The big game," and I definitely knew what it was referring to for sure. Yes, playing the Cal fight song in your class is a special kind of propaganda. They said online propaganda was the biggest threat, but that's...

Alex Stamos: Well, I mean, I could have done something much worse, which I could have brought my trumpet in and played. I still can play Lights Out March and Fight for California. I actually picked it up because my daughter's learning trumpet and so yeah.

Evelyn Douek: Oh, poor you. I mean, unless she's extremely good...

Alex Stamos: We have a lot of really great Hot Cross Buns.

Evelyn Douek: Is that right?

Alex Stamos: Over and over again. It's great. I love it. I love it. One day when she marches in the Cal Band, all this work that we're putting into it will be worth it. If she goes and plays in the Leland Stanford Junior University marching band, then I will be...

Evelyn Douek: Disown her. That's right. And never coming to Thanksgiving ever again. All right, well, in your honor, we will try and play the marching song.

Alex Stamos: The marching song?

Evelyn Douek: The fight? I don't know. You said-

Alex Stamos: We will play. We will play the march song of the Fighting Bears of Gold.

Evelyn Douek: That's it, which stirs deep emotions within me, obviously. As we read out this podcast, we will cue it up. This has been your Moderated Content weekly update. The show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino and is produced by the wonderful Brian Pelletier. Special thanks to Justin Fu and Rob Huffman. Go Bears.