Moderated Content

MC Weekly Update 7/4: Trivial Pursuits

Episode Summary

This week, Musk continues to hamstring his own product and his "CEO" is nowhere to be found; apropos of nothing, Meta is launching its Twitter competitor Threads on July 6; the Meta Oversight Board set up a showdown for the company in Cambodia with its recommendation that the company suspend the Prime Minister's account; an Indian court dismisses Twitter's challenge to governmental blocking orders; and Evelyn fails miserably at some sports trivia.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Twitter Corner

Legal Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Alex Stamos:

So what are you doing for the Fourth of July?

Evelyn Douek:

Studying all the July Fourth trivia that I could possibly find, Alex, in light of your threat last week.

Alex Stamos:

It will not save you. Last minute studying cannot protect you from what is about to happen.

Evelyn Douek:

Oh, God. What are you doing, apart from terrorizing random Australians?

Alex Stamos:

I am on vacation. I'm celebrating my 20th anniversary with my wife early because this is the first time in 16 years that our three kids are all at camp or doing something else, so we are ... Once you have kids, celebrating things like anniversaries becomes something [inaudible 00:00:36].

Evelyn Douek:

Immovable, immovable [inaudible 00:00:38].

Alex Stamos:

Exactly. It's based upon the weather, the moons, and yes, the stars have aligned. We can take a trip.

Evelyn Douek:

Well, congratulations, I obviously approve of the very wise and I'm sure uncontroversial decision to record a podcast at the start of this anniversary celebration.

Alex Stamos:

She's extremely supportive of my podcast.

Evelyn Douek:

Excellent.

Alex Stamos:

She is definitely an enabler of this addiction.

Evelyn Douek:

And with that, welcome to Moderated Content Weekly, a slightly random and not at all comprehensive news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. Alex, we're headed straight to the Twitter corner this week. And what a sad trombone week it was, a big week.

Alex Stamos:

Oh, my goodness.

Evelyn Douek:

For the company trying to make its product just as difficult as possible to use in every single way, truly amazing.

Alex Stamos:

I would play 10,000 sad trombones, but we'd get rate limited.

Evelyn Douek:

Yeah, so only 9,999 sad trombones for us this week as a result of two supposedly temporary changes that are being made. So the first is that web users must log in to view tweets; you can't view tweets as a logged-out user anymore. And then the much bigger change, potentially, is that Musk announced limits to how many tweets a user can see in a day. It started lower, but it is now at 10,000 for verified users, 1,000 for us lowly unverified users, and 500 for new unverified users because, Alex, definitely the thing that you want to do when your product relies on people creating and consuming content is limiting how much of that content they can consume. That sounds like a fantastic business move.

Alex Stamos:

A famous business strategy if you want to be the most influential social media network is to hide all the content and to make sure nobody can see it.

Evelyn Douek:

I mean, this is crazy, and the company runs on advertising and advertising content, so advertisers must love this, with the company making it even harder for people to use the product and spend time on it. This measure, Musk said, was taken due to "EXTREME" levels of data scraping, extreme in all caps, and in particular, allegedly, all of the AI companies have just come and started scraping the treasure trove that is Twitter. Again, I feel kind of like the person complaining that the food sucks and the portions are too small, because the product that is Twitter is rapidly degrading, and yet, I'm still kind of amazed that they're limiting my access to it. But it just seems a pretty inexplicable move here, Alex. Is there any way to rationalize this?

Alex Stamos:

No. Okay, so is scraping a problem? Yes. So every social media company faces a whole set of trade-offs in this area: if you want to prevent your content from being wholesale copied by others for a variety of purposes, you have to put some kind of protections in place. And so what a number of companies do is, they do allow content to be seen publicly, but then they have limits, and they enforce the limits on a per-IP and per-cookie basis. IPs get tough; that's a harder thing every year because the IPv4 space is not growing. It's way too small to support the internet, and so you have lots of people sharing IPs. And so putting limits on a per-IP basis turns out to be quite complicated, and you have to be very careful about what's called carrier-grade NAT and such. But you basically have to have it very flexible, and so that's why you use a lot of cookies.

This is an interesting trade-off since we want to talk about ... We talk about policy issues here. This is a really interesting policy trade-off in that if you want platforms to keep people from pulling all the data down, for both proprietary intellectual property purposes, which is what Musk seems to be hinting at here when he talks about AI companies, but also for kind of privacy issues. There's this kind of fuzzy privacy discussion around public content, about whether or not it is the responsibility of the platform to prevent it from being copied on a wholesale basis. The Cambridge Analytica argument, a bunch of it comes back to: What responsibility do you have to prevent people from seeing other people's content? Is it okay for me to see a couple of your posts that are private? But I shouldn't be able to download all of them and such.

And so to do that, the way companies do that is they set cookies on browsers that come in unauthenticated. And they use that cookie for rate limiting, and then they rate limit the creation of new cookies from different IP addresses and such. So if you do those two things together, you can make it harder. The unfortunate truth is if you have a public social network, you cannot completely stop scraping, you just can't. What the other side can do is they can generate thousands or tens of thousands of agents that come to your site and suck the data down, but you can make it a lot harder for them. Right? And that's the trade-off from a policy perspective. I actually am the one who had to go to Brussels and a number of European countries and explain this, because a bunch of European countries got mad that Facebook sets a cookie to prevent scraping, that the cookie was set and you can't opt out of it, because yeah, if you could opt out of it, it would defeat the whole purpose; anybody could scrape.
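The two-layer scheme Alex describes, limit views per cookie and limit new-cookie issuance per IP, can be sketched roughly like this. This is a toy illustration, not Twitter's or Facebook's actual implementation; the limit numbers, the fixed-window counting scheme, and all function names are assumptions for the example:

```python
import time
from collections import defaultdict

# Hypothetical limits for illustration; real platforms tune these constantly.
VIEWS_PER_COOKIE_PER_HOUR = 600
NEW_COOKIES_PER_IP_PER_HOUR = 20

class FixedWindowLimiter:
    """Counts events per key in a fixed one-hour window (a simple approximation
    of the sliding-window or token-bucket limiters production systems use)."""
    def __init__(self, limit, window_seconds=3600):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(lambda: [0, 0.0])  # key -> [count, window_start]

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        count, start = self.counts[key]
        if now - start >= self.window:   # window expired: start a fresh one
            self.counts[key] = [1, now]
            return True
        if count < self.limit:
            self.counts[key][0] += 1
            return True
        return False                     # over limit: caller serves HTTP 429

cookie_limiter = FixedWindowLimiter(VIEWS_PER_COOKIE_PER_HOUR)
new_cookie_limiter = FixedWindowLimiter(NEW_COOKIES_PER_IP_PER_HOUR)

def handle_request(ip, cookie_id):
    """Returns (status, cookie_id). Unauthenticated viewers get a cookie, but
    both cookie issuance (per IP) and views (per cookie) are rate limited, so
    a scraper can't just mint fresh cookies to reset its quota."""
    if cookie_id is None:
        if not new_cookie_limiter.allow(ip):
            return 429, None
        cookie_id = f"anon-{ip}-{time.time()}"  # placeholder; real IDs are random
    if not cookie_limiter.allow(cookie_id):
        return 429, cookie_id
    return 200, cookie_id
```

As Alex notes, this only raises the cost of scraping: a determined scraper with many IPs and patience still gets the data, just much more slowly.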

But they also did not want Facebook to be wholesale copied because it kind of gets rid of a bunch of the things that GDPR is supposed to do, like the right to delete your own data and stuff disappears if people have a complete copy of the social network. So there is a hard trade-off here, but Musk is handling it the completely wrong way. Right? So as we've had a bunch of situations here, he's got the right idea that there's a problem, and then he thinks he's smart enough to reinvent when other people have dealt with this for 15, 20 years. There's years and years of work on this, including by old Twitter, but also at other social media companies. And he didn't think to ask anybody. And this is about the worst way to possibly do it.

One, it just makes your platform not useful. One of the values of Twitter was a place that people could speak, and then that speech could be seen. It could be embedded in Washington Post articles. Remember the good old days when you could tweet something that was important, and it could end up embedded in a news article, or embedded in a blog post, or Medium post, or a Substack? So that's all often unauthenticated. Right? And so people aren't going to have logged-in Twitter cookies when they look at those pages. And if you want that kind of stuff to happen and you want to be influential, you have to support it.

The second is the rate limiting here: people who are doing intentional scraping can afford the cost. Also, they're doing it in a way that doesn't actually prevent the really bad guys from doing stuff, because truly bad people are going to be doing things like, one, paying for verified accounts, or just mass-creating accounts, and then using those cookies, even if it's more limited, to go pull a bunch of data down. And it doesn't actually help at all with authentication, which is Linda Yaccarino's kind of argument here.

Evelyn Douek:

Right. So yeah, this is exactly what you want, inspiring leadership here from the new CEO of Twitter. Four days into this dramatic company policy change, Linda Yaccarino finally pokes her head above the parapet and makes a statement, with the inspirational tweet this morning that when you have a mission like Twitter's, you need to make big moves to keep strengthening the platform, this work is meaningful and ongoing, and this is part of our work to ensure the authenticity of our user base. So that's her, I guess, post hoc rationalization of what's going on here, because I'm sure she was kept informed every step of the way about this important strategic move that the company was making.

Alex Stamos:

She's totally wrong. Linda, if you're listening, you're being lied to. This does not help the authenticity of your user base. The fact that you guys are selling blue check marks without any kind of authentication is absolutely the problem. Limiting the APIs does not make accounts real. It does reduce, yes, the ability for OpenAI to train GPT-5 or whatever off of Twitter or equivalent. It does not do what you're being told it does. And the other problem here that Twitter ran into is they did this so immediately that their own JavaScript code was not set up to appropriately handle the error responses here, and so a number of people demonstrated that Twitter was actually DDoS-ing themselves, because if you get rate limited and you're either in the normal twitter.com front end, in the Twitter mobile app, or on TweetDeck, all of those things massively reissue requests over and over again, because they don't properly handle ... An appropriate front-end option here would be you get a rate limit and you back off for 30 seconds, or 90 seconds, or five minutes, or something.

Instead, they just ask over and over and over again, give me the content, give me the content, give me the content, so they don't have proper error handling. And so this is exactly the mechanism the People's Republic of China uses to DDoS their enemies using what's called the Great Cannon, which is, you turn the Great Firewall and you inject JavaScript that has a bunch of requests on a very tight loop. They're doing the exact same thing to themselves, and so it's a ridiculous ... Obviously, Musk's explanation here does not make any sense. This is probably a capacity issue. And it might be related, people have been pointing out, that they owe Google at least tens, if not hundreds, of millions of dollars in back fees, and that they've decided to stop paying for cloud services. He also shut down an entire data center. Twitter had three data centers; now they're down to two.

And so if you cut down all your capacity, in both your cloud and your self-hosting capacity, eventually you will run out. And it seems that this might've been a mechanism to try to save that, except they probably made things worse in different ways. Obviously, a request you handle in the front end with a rate limit might be much lighter weight than actually serving up somebody's Twitter feed. But it's probably a post hoc rationalization for an emergency step after they broke the ability to actually serve people.
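The "back off for 30 seconds, or 90 seconds, or five minutes" behavior Alex says a well-behaved client should have is the standard exponential-backoff-with-jitter pattern. A minimal sketch, where the function name and retry parameters are invented for illustration and this is in no way Twitter's actual client code:

```python
import random
import time

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0, cap=300.0):
    """Retry a request with exponential backoff plus jitter instead of
    hammering the server in a tight loop. `fetch` is any callable returning
    an object with a `.status` attribute; 429 means rate limited."""
    for attempt in range(max_retries):
        resp = fetch()
        if resp.status != 429:
            return resp
        # Delay doubles each attempt (1s, 2s, 4s, ...) up to a cap, with
        # random jitter so thousands of clients don't retry in lockstep.
        delay = min(cap, base_delay * (2 ** attempt)) * (0.5 + random.random() / 2)
        time.sleep(delay)
    return resp  # give up: surface the final 429 to the caller
```

The tight retry loop Alex describes is exactly what you get when this kind of handler is missing and the client re-issues the request immediately on every error.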

Evelyn Douek:

Yeah. I mean, it's hard to think of a good analogy for what it must be like as CEO of a company while this clown show is carrying on, as the company is just making its product harder to use and literally being Sideshow Bob, stepping on the rakes and hitting himself in the face on all of this, as Linda desperately tries to reassure advertisers. One move that they made this week to show that Twitter is definitely a grown-up company is that it has rejoined the Tech Coalition, which is an industry group that is dedicated to trying to fight child sexual abuse material online, so that's a welcome move given ... Your own work has shown that this is a growing problem on the platform as well. Curious what you make of that and how this fits into the broader sort of industry issues around child sexual abuse right now.

Alex Stamos:

Yeah. So Yaccarino's problem, she's the CEO of a tech company, where her chief technologist, or her CTO, is incompetent and she can't fire him. Right? So she's effectively not the CEO. If you are the CEO of a company and you cannot fire the person who's in charge of your tech stack and you're a tech company, then you're not CEO. You're something else. Right? So that's the fundamental problem here.

On the child safety side, joining the Tech Coalition is kind of just the basic thing that you do as a tech company working on child safety, so I'm glad they ended up paying the fee. I guess it was $40,000 as the membership fee to be part of it. Like you said, our research showed that Twitter's child safety stuff is just straight-up broken, again, possibly a capacity issue, possibly services falling down and the people who maintained those services have been fired, or laid off, or whatever. So I'm glad to see them rejoin. Actually, our team will be at the Tech Coalition meeting this summer, briefing them on our research. And so it'll be interesting to see if Twitter has somebody there who's talking to the rest of the industry about these child safety projects.

Evelyn Douek:

And you want to talk about a statement that has been issued by researchers and academics in the EU about the proposed child sexual abuse regulations that are being worked up over there. Can you talk about that?

Alex Stamos:

Yeah. So there's kind of an open letter that was just released about the European Union being in the middle of rule-making around the Digital Markets Act and Digital Services Act. And one of the controversial things has been a number of statements that have hinted towards requirements to scan content that are ... The requirements are incompatible with end to end encryption. So this has been a concern of advocates for a while: the possibility of kind of a back-door ban on end to end encryption, not by saying you can't do end to end encryption, but by saying you have to do these five or six things, and then requiring five or six things that are actually impossible to do if you're protecting people's privacy.

And so this letter was signed by a wide variety of academics, mostly European, some American. And I agree with 70% of the letter. I mean, I think it is totally right for them to be concerned about these rules. I'm a strong proponent of end to end encryption. I have done way more than almost any of those people to protect end to end encryption, so I feel pretty good about the amount of time I've spent in Congress and in Europe fighting for end to end encryption to be something that companies can ship. But there's a real problem with the letter, which is that it dismisses the possibility of client side scanning as just a bad thing. And there's a couple of problems: one, they talk about it not being effective, which is just false.

Client side scanning in certain circumstances is something that absolutely can work, and I have seen it work. The kind of classifiers that they dismiss as being impossible to build, I've seen built. Most of the time, those have been built server side, but there's no reason why they cannot, in the modern era, be operating on the client side. And so they're just factually incorrect in their dismissal of the possibility of this working. The second problem, from my perspective, is actually kind of a typical thing among tech and policy academics: a real detachment from the actual abuses. They're imagining a world where ... The next big step that the EU's thinking about here is that Facebook has promised that all DM surfaces are going to be end to end encrypted. This is a promise that's been hanging out there for five years now or something. It is taking a very long time for this to happen. So who knows how real it is? But that has been the concern, because already Facebook offers WhatsApp, which has over a billion users.

But then if they turn it on for Instagram and Facebook, then all of a sudden, a huge amount of content goes end to end encrypted, and things like PhotoDNA hits and stuff are automatically going to disappear, as well as kind of the reactive stuff that law enforcement can currently do, sending warrants to Meta to get access to this data. All that's going to disappear. And academics think that this kind of thing can happen and that you could end up with three, four billion people talking completely unmoderated with one another, and that's just going to be an okay thing. I'm just going to say that's not going to happen. We're not going to end up in a world where three, four billion people can just interact, especially with discovery. The only thing that keeps WhatsApp from being much worse is the fact that you have to have somebody's phone number to use it, same for the most part with iMessage. Right?

My [inaudible 00:14:56] see with platforms like Telegram that have discovery, the amount of abuse, the amount of unwanted imagery, the things that get thrown people's way, it is going to be really bad if you expand this out to everybody. And that is going to include lots of situations in which the person who is victimized is a participant in the conversation. So in the situations where the damage from a trust and safety issue accrues inside of the conversation, so we're not talking about a criminal conspiracy between adults, but a situation where somebody says something publicly and then they get death threats privately, or a child is on there and a child gets ... An adult tries to talk to them and pretends to be another 13-year-old to solicit nudes from them. In those kinds of situations, I think client side scanning is completely appropriate, and it's honestly going to be necessary.

I think it's going to be a necessary thing for us to implement as an industry if we want to be able to get through the political reality of turning on end to end encryption for every single person on the planet. And that's where I disagree with these folks, because they dismiss client side scanning as just a bad thing overall and they don't carefully think about: Are there situations in which client side scanning can actually benefit one of the users without violating the privacy of the conversation? Because what you're doing is you're prompting the possible victim to be able to report things. And so the grooming detection they said is basically impossible; that's just false. Grooming detection on Facebook actually works quite well. How will grooming detection work with end to end encryption? Well, the only way to do it would be to do it on the client for the recipient of any message and have a client side classifier, so that if you're a 13-year-old and an adult reaches out to you and says, "Hey, I've got nudes of you," that gets classified and they get prompted.

Looks like this person's asking for nudes, do you want to ask for help? And if some percentage of the kids then hit yes, then you could greatly reduce the effectiveness of that approach by those bad guys. And so that's why I did not sign the letter, and that's why I actually think the letter's mistaken. I would like to see some of these people remove their names or do a new letter saying, "Oh, you know what, maybe this is actually more nuanced," because I do think the letter is kind of disconnected from the reality of both the abuse and what you can realistically do with client side scanning to protect people who are part of the conversation.
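The on-device flow Alex describes, classify locally, prompt only the recipient, and send nothing off the device unless the user chooses to report, could be sketched like this. The keyword "classifier" here is a deliberately crude stand-in for a real on-device ML model, and every name, phrase list, and threshold is invented for illustration:

```python
# Toy phrase list standing in for a trained grooming/solicitation model.
SOLICITATION_TERMS = {"nudes", "send pics", "don't tell your parents"}

def toy_grooming_score(text: str) -> float:
    """Stand-in for a real on-device classifier; returns a 0..1 risk score."""
    text = text.lower()
    hits = sum(term in text for term in SOLICITATION_TERMS)
    return min(1.0, hits / 2)

def handle_incoming_message(plaintext, recipient_is_minor, threshold=0.5):
    """Decide, entirely on the client after decryption, whether to show a
    safety prompt. Returns (deliver, prompt): the message is always delivered,
    and the prompt is shown only to the recipient, so the end to end guarantee
    is preserved unless the user actively chooses to report."""
    prompt = None
    if recipient_is_minor and toy_grooming_score(plaintext) >= threshold:
        prompt = ("This message may be asking for photos. "
                  "Do you want to ask for help or report it?")
    return True, prompt
```

The key design point is that the classifier's output never leaves the device; it only changes what the possible victim is shown, which is the distinction Alex argues the letter ignores.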

Evelyn Douek:

Yeah. All of that that you just said sounded very measured and perfectly logical, and so I don't know that our listeners would have necessarily understood that some of that was probably pretty controversial in sort of-

Alex Stamos:

Yes. I'm going to get a bunch of hate mail from law professors in Denmark, so that's fine.

Evelyn Douek:

The worst kind of hate mail.

Alex Stamos:

I can survive that.

Evelyn Douek:

What's your background?

Alex Stamos:

Worst kind of hate mail.

Evelyn Douek:

I mean, I think this is just an area to underline for our listeners who aren't knee-deep in these debates, where people tend to be fairly absolutist about their positions. And so that kind of attention to all of the trade-offs, and in particular the attention that you're paying to the particular features of a particular service, doesn't tend to happen; that kind of nuanced debate doesn't really tend to happen, so super interesting.

Alex Stamos:

There's a huge, huge difference with end to end encrypted networks where you can just look up a stranger, especially if you can look them up by their real name. And that's the problem with rolling this out to Facebook and Instagram: if you could just send a message to any celebrity, now end to end encrypted and with very little enforcement, then that's going to become a disaster. And so, you can, one, leave that totally up to the celebrity, for them to have to report every single thing they don't like, or they turn off open DMs, which reduces the effectiveness of the end to end encryption if people are not allowed to talk to each other, or, a good compromise position, you have a client side classifier that they can set and say, "I don't want to get death threats. I don't like this."

And those automatically get classified so that you can say, "I want to report." Maybe you can even say, "I want to automatically report any death threats using my client side classifier because I trust it enough." Right? Or use it [inaudible 00:18:43]. I think there's a lot of options here that this letter just totally ignores.

Evelyn Douek:

Okay. So apropos of absolutely nothing, Meta is also launching its Twitter competitor this week, in two days, on Thursday, July 6th. So I mean, obviously given all that we just talked about, the dumpster fire and the clown show that is Twitter right now, this seems like a pretty well-timed launch in some ways. Although it is kind of a surprise that it's taken this long. I mean, Twitter has been a bit of a dumpster fire. We've been on death watch for quite a while now. Are you bullish on Threads, the Meta Twitter competitor, Alex?

Alex Stamos:

Yeah. So I agree that it has taken a long time. First, at the top, I think Zuckerberg should've ordered this work to happen when Musk made the offer for Twitter. Right? Just as an option, because Facebook already has all of the systems in place: the backend storage systems, the database systems, all the intermediate services, as well as the human beings for content moderation, classifiers, all that stuff. All of the pieces that are the hard part of a social media network, Facebook's already built. It has generally built those things in a way that is rather abstracted out and does not have to be tied directly to any service. And this is because since the Instagram acquisition, Facebook's architecture has changed significantly, so that a huge amount of the backend stack is shared between Instagram and Facebook, which makes total sense.

You're not going to run Facebook and Instagram completely separately; it would make no sense financially to do it. But the benefit of doing that work means that launching a new service that uses the same kind of thing of, hey, I want to generate a feed of content using a privacy graph, that code already exists. Right? And so it is kind of shocking [inaudible 00:20:24]. That being said, I think it's going to totally ... My prediction is Threads is going to crush Twitter, partially because there's a huge amount of demand that we see, and nobody has stepped up to catch that demand. And the demand is for a service that is easy to use, that has reasonable content moderation. Lots of people disagree with some of the content moderation decisions Facebook has made, but very few people are going to look at Twitter and then look at Facebook and be like, "Oh, man."

The number of people who are like, "I prefer the Twitter model," it is just going to be kind of the deplorables. Right? It's going to be people who are intentionally racist, who are sending death threats, who are spreading disinformation, the Russian troll farms, the RFK Jr.s, all those people are going to want to stay on Twitter. But there's a huge number of people who do not want to be on a network with those folks. Right? And so for them, there's going to be a huge attractiveness to that. And nobody's [inaudible 00:21:14] do that. Bluesky tried. Bluesky has failed from a technical perspective to scale, so they've had to vastly limit the number of people coming in. As we discussed here, they didn't even have content moderation on day one. Right?

So they had a basic misunderstanding of: What is it that people try to buy into when they buy into a social network? Facebook's got all that, and they've also got the existing billion users. This is going to use Instagram's login, so they've already got a gazillion people logged in, and they have, most importantly, hundreds of thousands of already verified accounts. And so the fact that they have a verification system ... I think throwing away the blue check mark was one of the dumbest things Musk could do because it had such brand value. And so having a platform in which verification means something, where you're like, "Oh, yes. I'm reading tweets from Chuck Todd, it really is Chuck Todd," I'm reading tweets from Kim Kardashian or whatever, it really is Kim Kardashian, is going to be incredibly important. And so I do think Threads is going to dominate, because it is finally the place that enough people will move over to, such that the network effect of who is left on Twitter, the people who are not part of the super right-wing groups, means those groups will be able to move over en masse, because nothing ...

Mastodon is too hard to use. It's missing basic functionality like search because of kind of political, religious discussions among the people who designed Mastodon's architecture, and so Mastodon just did not build a product that people want to use, and it's just too hard for folks. And something like the distributed nature of Mastodon is politically interesting to a bunch of super geeks, but the vast majority of people don't really care. So yeah, I think Threads is going to do really well. And you're going to see that for the majority of people who have not left Twitter, this is going to be an instant move, because most of them are at some point already going to have a Facebook or Instagram account, so the transition's going to be really easy.

The thing that'll be fascinating, and I think can actually also be related to what Twitter did here, is discovering who your ... Recreating your Twitter graph on Instagram is going to be much harder, and that's actually another possibility, I just thought of it now, that Musk knew this was coming. Another reason to limit all this access would be to make it hard for Instagram to go look at your Twitter graph and copy it. When people made the initial jump to Mastodon, Twitter had not yet put any protections in place, and so you could pretty easily go and find people's Mastodon names and recreate that graph pretty quickly. And so that will be the harder part here. That being said, they already have people's phone books and stuff, so based upon that, at least the people who are personal friends and such, you should be able to follow automatically.

Evelyn Douek:

Right. Yeah. I do think that is an underappreciated part, like when you're saying the transition is going to be relatively easy. I'm not so sure for myself personally. Right? The way I use Twitter is completely different to the way I use Facebook or Instagram. I barely use Instagram. I set it up just to follow a whole bunch of ... There was a period where I was really into running and followed a whole bunch of runners and learned about their diet and training regimen.

Alex Stamos:

Cool.

Evelyn Douek:

That is not what I use ... Yeah. It was very fun, but that's not what I use Twitter for, and that would be not particularly useful for me, so it would be another fairly high friction process for me to start another account, so I'm going to wait and see, see how it takes off, if people do go there because I did try Mastodon. That didn't really work for me. My community didn't really make the transition. I've been on Bluesky the last couple of weeks, a lot more law professors and people are there and I've found that pretty good. But it's still not quite the same, so I hope you're right, that there is going to be something that takes the place of what Twitter once was, but [inaudible 00:24:35].

Alex Stamos:

I think the fascinating question here is, in theory, they've advertised that Threads is going to support ActivityPub, which is the standard under Mastodon, Calckey, and others, so not compatible with Bluesky, but compatible with the majority of what's called the Fediverse. And that's become very controversial in Mastodon circles, the fact that there's lots of people saying that this is an evil way to take over the Fediverse and stuff like that. It might be. I think it also might just be kind of a Digital Services Act play, or, I'm sorry, Digital Markets Act play, where Facebook, while facing all of these competition investigations, by supporting federation, they've realized that the vast majority of people are going to use their servers and they're going to see their ads because they don't care about the political stuff.

And if you care that much about, I don't want my data on Facebook servers, then you can go run your own and still be part of the ecosystem. But that's going to end up being 1% or something of the overall ecosystem. And so I think this might actually be a pretty smart move where Facebook can have their antitrust cake and eat it too, making money from the majority of the network while also keeping it open enough. Now, that will be interesting from a content moderation perspective: How do you handle moderation in a federated situation? And it's also going to be interesting from a privacy perspective, because Facebook, for example, has GDPR responsibilities that Mastodon currently does not. A bunch of people on Mastodon talk about the fact that right now they have not gotten approval from the Irish to launch this new thing in Europe.

Well, nobody who runs a Mastodon server has asked for permission from a data protection authority. Right? The entire Fediverse is GDPR noncompliant. You cannot do a right to be forgotten, you cannot ask for your data to be ... All of the things that you expect out of a centralized company to be GDPR compliant, you have no guarantee of having any of those services. And that's the way ActivityPub works: people can make complete copies. Talk about scraping. ActivityPub is as if people looked at Cambridge Analytica and said, that's a great architectural model for how I want to build a social network. Right? And so it will be fascinating to see: How does Facebook live up to the privacy responsibilities they have under the FTC consent decree and under their agreements with the Irish data protection authority, while also supporting ActivityPub? That's going to be quite hard.

Evelyn Douek:

Yeah. And the content moderation piece of this is obviously something that I'll be watching closely, and I'm very, very interested in seeing how they deal with the new service's content moderation. We've talked about, for example, how RFK Jr. got his Instagram account back a couple of weeks ago, or maybe a month ago now, as a result of announcing his bid for the presidency. So how is Threads going to deal with the kinds of posts that are going to be coming from that? Now the question might be: Will the Meta Oversight Board be given jurisdiction, as we lawyers might call it, over Threads? It is the Meta Oversight Board after all, and not the Facebook Oversight Board anymore. My bet is no, not any time soon, because they're still busily working away on the six decisions that they're going to release this year, so we don't want to overload them too much.

So that'll be interesting to see. But speaking of the Meta Oversight Board, it did deliver a decision that was fairly striking this week. It ordered Meta to take down a video that Cambodia's prime minister, Hun Sen, had posted because it was inciting violence. And it also recommended, and it only has the power to recommend this, it didn't have binding power to make the company do this, it recommended that the company suspend the account for six months in light of Hun Sen's history of using social media to incite violence, and also the broader social context and history of violence and repression in the country, in particular in the lead-up to the election.

So this was a pretty strong decision from the Oversight Board. It contrasts pretty mightily with some of the more equivocal decisions that we've seen, for example, throwing the Trump suspension case back to Meta for the company to make the decision itself. It was pretty unequivocal on this front. And it is not as speech protective as it has been in many other cases, where it has really started to develop a reputation for being much more sort of free speechy, I guess. This is obviously not that. So it's interesting to see that, in light of the board's decision, the prime minister responded by quitting Facebook altogether, so it actually ...

He gave Meta a nice out there. It didn't have to decide what to do with the account in terms of whether to uphold the board's decision to suspend it for six months because he beat them to it and has threatened to block Facebook in the country entirely. And now the prime minister is on TikTok and Telegram, where I am sure that he will be very respectful and not at all inciting and there will be no content moderation problems that TikTok or Telegram ... Well, Telegram won't deal with this at all, but it will be interesting to see how it plays on those platforms, given that probably the same kind of material is likely to continue to be posted on those sites. So yeah, I thought it was an interesting decision, Alex, of a pretty spicy move by the oversight board.

Alex Stamos:

Yeah. I think it's the right decision. I'm glad, like you said, they seem to have endorsed the idea of political actors having a certain level of ... They're going against this idea that political actors can get out of things like incitement rules. So it does seem a reversal, but I think it's a correct reversal there and a good reflection of the fact that these things that we see as kind of just culture war issues in developed democracies become really serious safety issues in countries that have more political violence, like Cambodia. So yeah, I think it's the right ruling. It does, like you said, raise an interesting question of the relationship between the Oversight Board and the new Threads product, because like you said, with the RFK Jr. example, Instagram is not a great platform for disinformation because of just kind of the way things spread and you have to have photos and all this.

I mean, there are issues, but it's not like what you see on Twitter. And so if you build a direct Twitter competitor, then these kinds of rules are going to be more interesting. It'll be interesting to see what the oversight board says about that.

Evelyn Douek:

Yeah. And it'll be interesting to see how they apply this decision going forward in other places. This is not the only world leader that uses social media to incite violence. And the board's made it pretty clear that Meta shouldn't allow a newsworthiness allowance for material that directly incites violence, and so we'll see how broadly this comes to be applied. One note, I mean, I think this is a strong move by the board. But the video was originally posted on January 9th, 2023, so six months ago. The board accepted the case in March, so four months ago. And there is an election on July 23, so the board is really sort of squeaking in under the wire in terms of making this decision right at the last possible moment before the election when the content had been on the website for six months.

And one of the things that the board is saying is that this leader has a long history of posting dangerous kinds of content. And given that the reason the board gives is the risk of imminent violence, which requires removing his content and suspending his account, it's interesting to note that it took six months to come to that conclusion and issue this decision. So we will see how it plays out.

Okay, and now over to the legal corner. Okay, so two very quick updates. The first, I just want to take a very quick victory lap. In May, when Montana first passed its ban on TikTok, this set of picture-perfect plaintiffs emerged and brought a First Amendment challenge immediately. It was a small business owner, a Marine Corps vet, a rancher whose enjoyment of TikTok helped her recover from postpartum depression; they were all right there ready to bring this First Amendment challenge the day the law was passed. I said, "I'd like to know a little bit more about how these five random plaintiffs managed to be so well coordinated and so well funded as to get such a fancy law firm to bring this."

Well, it turns out that this week the New York Times reported that TikTok is funding this challenge and is paying all of the legal fees. And look, it's a smart move. It was interesting; TikTok is of course challenging the Montana ban itself as well. But it sort of gave these plaintiffs a week in the headlines before it brought its own challenge, because it's a much better story to see these individual plaintiffs bringing the challenge. And also, it's a vindication of a different set of rights. It's not the company's rights to operate in the state, but the rights of these people to have their free speech and the rights of other Montanans and others outside the state to hear what they have to say. So it is legitimately a different kind of legal challenge; it's not just the same arguments from the state or the company. But it's still funny that this is the way the legal system works, that this can be done like this.

So the other small update is that, for those keeping track of the NetChoice restatement of the law of the First Amendment, NetChoice has now brought its challenge to the Arkansas age verification law that was passed a couple of months ago, which requires platforms to verify the age of users and restrict minors from creating profiles without parental consent. This is a strong First Amendment complaint, as we talked about at the time. I think it has very good chances of succeeding, so not at all surprising there. But the lawyers at NetChoice continue to be very busy.

Over in India, an important decision this week: an Indian court dismissed a case brought by Twitter in July last year challenging a number of blocking orders that the Indian government had issued to Twitter. And it also imposed a five million rupee fine, which sounds very impressive but translates to about $61,000, for not complying with the blocking orders. I read the judgment and it's a pretty breathtaking document. From the word go, it doesn't really give any credence to Twitter's arguments, fully credits the government's arguments, and doesn't really show any concern for freedom of expression. I'm going to greatly oversimplify the issues of the 109-page ruling here. And also, a thank you to a bunch of local lawyers who have done great analysis on this, and special thanks to Vasudev Devadasan for sending me useful things to read on this.

I think there are two really major parts of the decision. First, Twitter had argued that the law did not allow the government to order Twitter to block entire accounts and only allowed it to block individual tweets. And the court says, "Look, yeah, sure, that is what the law says. But if we were to interpret this law literally, we wouldn't be able to effectuate the spirit of the law, the purpose of the law." And it says that banning a specific tweet may encourage the tweeter to get into a better-luck-next-time approach; instead, banning an entire account could serve a deterrent effect and thus better serve the objects of the law. And something I think is really interesting here: in response to Twitter's argument that banning entire accounts was a disproportionate response, the court cites Twitter's banning of Donald Trump and says, "Look, Twitter itself bans entire accounts sometimes and knows that it can be a proportionate response in extreme circumstances."

And I quote, "There's nothing unusual in that." Now of course, it's a little bit specious to say that the decision of a private company to do something is the same as the government ordering a private company to do it, but I thought it was interesting that the court was citing that there. And then the other part of the decision was a sort of dramatic weakening of the procedural requirements that the law would require the government to comply with in issuing these blocking orders. Something that's amazing about this judgment from my perspective is that these orders are confidential. Twitter's challenging 39 orders, but we don't actually know the content of those tweets because they're all confidential and they're not even in the judgment.

So we do know that, for example, one of them was to do with the farmers' protests in 2021, this massive political protest that happened in the country. And eventually, the government backed down on a bunch of things, but we only know that because the court extracts a little bit about it in the judgment. But we don't know what these tweets said or whether they were legitimate. But the court does say that it reviewed the content and found it outrageous, treacherous, and anti-national, and so it agreed with the government that these were acceptable things for the government to order Twitter to take down. So it's a pretty grim judgment, I think, for the prospects of what we've talked about many, many times, the worsening state of freedom of expression online in the country. The fact that the court imposed this fine, which it imposed because Twitter had delayed in taking down the content and it also thought that the litigation was sort of [inaudible 00:37:06] and frivolous, that's obviously going to create chilling effects for future litigation.

Twitter could appeal, so that would be interesting. But I mean, this was a challenge that was originally brought in July, pre-Musk in the Dorsey era, and so it's a very, very different situation at the company now. We've talked about it many times. Just last week on the podcast, we were talking about Musk's tweets about bringing Tesla into India as soon as possible. And so I won't be holding my breath for Twitter to bring an appeal against this ruling, which unfortunately creates a pretty grim precedent, I think, for free expression online in the country. And that is a speed run through the legal news for the week.

Alex Stamos:

Nothing happening this week.

Evelyn Douek:

No, not a lot going on. There shouldn't be this much news on a holiday, but there you go.

Alex Stamos:

Oh, talking about holiday, I think it's time for what everybody has been waiting for. It is time for Evelyn to be quizzed about America.

Evelyn Douek:

This is going to go so well, but I'm sure that both Americans and Australians are going to be very happy with the outcome of what happens here. Let's go.

Alex Stamos:

I'd like to thank our colleague, John Perrino, who does lots of research for the show and also helped me research a number of questions here. So as a tenure-track law professor, clearly I should not be asking you about anything that's kind of the written part of what makes America. But what makes America great are really some of our traditions that are not written down in the US Constitution. So first, let's do some sports corner.

Evelyn Douek:

Yes, my strength as all our listeners know.

Alex Stamos:

What sporting event has the longest winning streak of all major sports?

Evelyn Douek:

The longest winning streak of all.

Alex Stamos:

Yes. There's an international event that has the longest streak in all of sports. What is that event?

Evelyn Douek:

And it's something to do with America, so I'm going to pick the most American sport that I know. I don't know. Is it football?

Alex Stamos:

Well, it's certainly not ... Yeah. So I don't think there's a good American football international competition. And it's not baseball because actually, the United States loses in baseball all the time. So the answer to this is actually-

Evelyn Douek:

Oh, I see. Sorry. Okay, sorry. The question is: Where does the US have the longest winning streak as [inaudible 00:39:21].

Alex Stamos:

It just happens to be also the longest winning streak in kind of international competition.

Evelyn Douek:

Is it swimming, Michael Phelps?

Alex Stamos:

No. This winning streak, to be clear, was 132 years old, so it's 132 years of a representative of the United States winning.

Evelyn Douek:

Wow. No, I'm completely lost. What is it?

Alex Stamos:

Okay. It's the America's Cup of [inaudible 00:39:42]. And it's called the America's Cup because the boat that won was actually called the America. And so technically, the winner is the New York Yacht Club, but generally representing all of American sailing against other countries. Now I also mention this one because Australia, since the end of that 132-year streak, has won more than their fair share. But still, that stands as the longest winning streak in all of international competition. Also talking about sports: what quintessentially American sport has a seventh inning stretch, where the spectators get up, the sport being so boring that you have to build in time for the audience to get up, to stretch themselves, and to lift themselves up?

Evelyn Douek:

I mean, all I know is innings and boring, the answer has to be baseball.

Alex Stamos:

That's correct, baseball.

Evelyn Douek:

Yes.

Alex Stamos:

Yay.

Evelyn Douek:

Definitely making friends with that answer.

Alex Stamos:

Right. Do you know what song is traditionally sung during the seventh inning stretch across the country?

Evelyn Douek:

No, I do not.

Alex Stamos:

Take Me Out to the Ball Game is the answer. And there are now regional variations of that. So after Take Me Out to the Ball Game, you'll have other kind of more popular songs like Sweet Caroline or Don't Stop Believing, depending on ... That's more of a regional thing, depending on [inaudible 00:41:01].

Evelyn Douek:

I don't know how that song goes. And seeing we're humiliating me on this podcast, do you want to sing it for me so that I know next time I'm at a ball game?

Alex Stamos:

I know the words, but I'm a terrible singer, so we're just not going to. But it starts with "take me out to the ball game." And the key is, I think the whole thing is actually an advertising trick, because it says, "Buy me some peanuts and Cracker Jack," so I think it was the peanuts and Cracker Jack vendors association that started pushing the singing of that song. It was pretty smart.

Evelyn Douek:

Excellent American traditional getting commercial advertising in there at the same time, wonderful.

Alex Stamos:

Okay. So the Fourth of July is when there is another great American sport, hot dog eating, the Nathan's Hot Dog Eating Contest. In 2022, obviously America came in first. Clearly, this is a sport the United States is going to dominate. What country did the runner-up for this event come from? Where was he from?

Evelyn Douek:

So I have a feeling that I have listened to a podcast about this at some point long ago. And I have no idea if this is right, so I'm just going to take ... Is it Japan, by any chance?

Alex Stamos:

It is not. It's Australia. I can't believe that an Australian came in second in the hot dog eating contest and that was not front page news.

Evelyn Douek:

Exactly. It probably was, I just don't read the sports section. We're all very proud. Congratulations, whoever you are.

Alex Stamos:

Yes, James Webb is his name. And he is the only one who is possibly close to defeating the United States. The Nathan's Hot Dog Eating Contest has not been around as long as the America's Cup, but perhaps it will still pass it up with 123 years of domination by the United States.

Evelyn Douek:

Probably not the same person, same competitor though. I don't think the longevity is going to be that good.

Alex Stamos:

Okay. Final question on our Fourth of July with an Australian touch. Which sitting United States senator has an Australian mother? I guess she'd be a Sheila. Would you call somebody who's old enough to be a grandmother a Sheila? Or is that ...

Evelyn Douek:

Sure.

Alex Stamos:

Okay.

Evelyn Douek:

Yeah. Again though, I'm surprised that we don't talk about this all the time in Australia as a point of national pride because the American senators are really something that we are glad to be contributing to. So yeah, no, I could not tell you.

Alex Stamos:

Well, it is Jon Ossoff, the reasonably new senator from Georgia. His mother, Dr. Heather Fenton, is a veterinarian and came to the United States at 23 years old.

Evelyn Douek:

What was the choice behind this extremely random set of ... Did anyone think that I would be good at sports trivia? This was never going to go well for Evelyn.

Alex Stamos:

Okay. I guess we moved into article one or something, but you're going to beat me in that, [inaudible 00:43:45]. It's not my specific skill set, which is knowing about sailing. And yet, all of these are situations in which there was an Australian link. That's the idea. So it was Australia that broke the America's Cup winning streak. It was an Australian who got second place, and whose mother ... You know. Anyway. We tried. It was something that you might feel some kind of link to, but anyway.

Evelyn Douek:

It turns out I just don't have good knowledge of either American or Australian trivia, so even despite the careful assist, I was not able to bring this one home. I apologize for my lackluster performance.

Alex Stamos:

It's totally fine. We'll let you keep your green card for another year.

Evelyn Douek:

I don't have it yet. I hope the officers don't listen to this podcast or that it doesn't factor in.

Alex Stamos:

Oh, my goodness. Hopefully ICE is not listening right now. Yeah.

Evelyn Douek:

That's right. Deport her immediately. All right.

Alex Stamos:

Or make her go to lots of baseball games as penance.

Evelyn Douek:

I don't know. Yeah, God. Would I do it? How many baseball games would I sit through in order to keep my job and life in this country? It's not that many, I will have to say.

Alex Stamos:

It's more than one. It's less than 100.

Evelyn Douek:

It's definitely within that range. All right, so with that July Fourth themed conclusion, that has been your moderated content weekly update. This show is available in all the usual places including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't have been possible and maybe would've been better if not for the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory. Thanks a lot, John. And it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman, and Happy Fourth, everyone, especially the ICE officers if you happen to be listening.