Alex and Evelyn chat about Meta's latest report on information operations; TikTok's announcement of a researcher API; the last two weeks in Twitterland; Meta's announcement of paid verification and its participation in "Take It Down", a new platform run by NCMEC for people to get sexually explicit images taken of them when they were under 18 removed; Europe's DSA is coming; Susan Wojcicki is going; and a New York online hate speech law is on hold.
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:
Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.
Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.
Like what you heard? Don't forget to subscribe, and share the podcast with friends!
Evelyn Douek:
Are you a Supreme Court justice with some difficult legal issues before you? Have you been asked some very challenging questions lately that you just don't know the answer to? Well, lucky for you, Stanford Law School is here to help with some continuing legal education for Supreme Court justices in need. Go to stanfordlawschool.com/justiceCLE and enter Moderated Content for your special discount. Stanford Law School, Justice CLE.
Alex Stamos:
Perfect.
Evelyn Douek:
Does that work? Do you have any amendments?
Alex Stamos:
It's a little too serious. People might actually do it.
Evelyn Douek:
Welcome to Moderated Content's Weekly News Update from the World of Trust and Safety with myself, Evelyn Douek, and Alex Stamos. Alex, I realized I signed off our last episode with see you next week, which is a terrible sign-off in multiple respects because, first of all, this is audio. Nobody sees anyone. And second of all, we did have a break, an unusual break, for the week. You enjoyed your trip away?
Alex Stamos:
I did, yeah. I took my family to Japan, which, as always, you go to a place, like Japan or almost any developed country outside the United States, and it makes you incredibly angry at the fact that we can't build anything in the United States. My kids, on the Shinkansen... You ride the Caltrain here in the Bay Area, and your fillings are rattling out at 60 miles per hour, and then you can have your coffee with a tiny little Jurassic Park tremor on the top of it at 200 miles per hour while looking at Mount Fuji.
Oh, it was a great experience. I mean, mostly people were incredibly welcoming. It is interesting. Japan is still one of the countries in which outsiders are not always welcome. So it was a good experience. My kids faced a little bit of discrimination based upon their background in a couple of cases. It was actually an interesting lesson at multiple levels.
Evelyn Douek:
Excellent. Well, we are glad to have you back. And there is plenty of news from the fortnight that was. Let's start with Meta's adversarial threat report. It released its report for the fourth quarter of 2022, the latest in its regular quarterly series. There's some interesting stuff in there. It shows a growing trend. This is not new news, but governments are increasingly running domestic operations, targeting their own citizens with information operations, especially manufacturing grassroots support for themselves and targeting the opposition party.
One of the things that was amazing about these figures is what I thought were quite mind-blowing amounts of ad spend. So there was a Bolivian network that Meta took down with over a thousand Facebook accounts, 450 pages, and they spent $1.1 million on ads across Facebook and Instagram, which is quite a lot.
Alex Stamos:
Yeah.
Evelyn Douek:
And of course, a lot of these are cross-platform operations as well. One of the Serbian operations that Meta took down, which had over $150,000 in ad spend, was found based on a tip-off from Twitter in 2020. Now, I don't know. Do we think those tip-offs are still coming through from Elon Musk's Twitter these days?
Alex Stamos:
Right. So that's one of the things that's missing from this: usually the Facebook report came out in parallel with Twitter releasing data sets via its research consortium for us and other people to analyze, as well as Twitter's own announcements. And so this is the first Facebook quarterly adversarial threat report that has no Twitter equivalent. And I do think that is the big thing here: it is pretty clear that there is nobody left at Twitter to do this work. Facebook no longer has a partner, and so we're going to continue to see these kinds of releases from Facebook, but without Twitter doing the work. Which is going to be bad for Twitter, because what's going to happen is Facebook's going to release, "Here are a bunch of accounts, here's a bunch of data," and groups like ours are going to be able to take the output from Facebook and then map it to Twitter.
It used to be that conversation happened between Facebook and Twitter well before the reports came out. They would collaborate. They'd give tips to each other, including data that's not available in public: phone numbers, email addresses, IP addresses, indicators on the backend. And now it's going to be independent groups finding all of this stuff on Twitter based upon Facebook's tips. And so I think this is a situation that demonstrates how getting rid of transparency is not going to benefit Twitter in the long run, because this stuff's just going to be discovered, but it's going to be discovered in a way that they no longer have control over.
Evelyn Douek:
Yeah. Rolling Stone had a story this week actually about how Twitter has just stopped transparency reporting altogether, not just these information operations takedowns, but including government requests reporting. Some of the oldest transparency reports in the field cover government requests to platforms, for data or for takedowns. And those are an important tool, not just in holding platforms accountable for what they're doing, but in holding governments accountable for the kinds of pressure that they're putting on platforms. And one former Twitter staffer says, "That (beep) went out the window right after Elon came in."
So I don't know if we'll need to get Brian to beep that or if we can get an explicit warning on our podcast episode. But yeah, basically no more transparency reporting across the board.
Alex Stamos:
Do you think that was an actual decision, or do you think they just fired the people who do that? I think it's the latter.
Evelyn Douek:
Yeah, it's basically the latter. I think the story highlights that nobody there knows how to do these anymore, and they're fairly time-consuming. One of the people says, "I'm not aware of any people left at Twitter who could produce these transparency reports anymore." So even if they wanted to, they wouldn't know how to do it.
Alex Stamos:
Which is interesting, because one of the big conspiracy theories, which is incorrect, is that the FBI paid Twitter to censor stuff. Right? And as you and I know, it is true that the Department of Justice pays tech companies, but that is based upon ECPA and the Stored Communications Act, which require the government to pay for the costs of search warrants and wiretaps and such. This is actually something that started with the phone companies, not with the tech companies. And without those transparency reports, we now don't see any of the reason why this money is changing hands. So it is a direct example of how Twitter is going in the opposite direction of the thrust of the Twitter Files and is stepping on its own narrative.
Evelyn Douek:
Yeah. And we also don't see how often they push back. Because obviously the platforms shouldn't, and don't, just say yes every single time a government actor requests data or asks them to take things down, and that's what these reports show. They show which governments want the most information or data, and which ones' requests have so little substance that the platforms push back the most. And we're not getting any of that information. So for all we know, Twitter could just be rolling over on these requests now and we would have no idea.
Alex Stamos:
Yep. So I just want to underline this. These are two areas in which Twitter now has no transparency: on coordinated inauthentic behavior by governments, and on governments asking for data. Twitter is now advertising that governments can manipulate it and nobody will figure it out. That is the bottom line here.
Evelyn Douek:
Well, you don't think the government manipulation on platforms has just stopped magically since Musk took over?
Alex Stamos:
No, no. He stopped and then he... This is the kind of thing that if I wrote that tweet and it was on the other side, he would just respond with curious or-
Evelyn Douek:
Yeah, that's right.
Alex Stamos:
... I'm looking into it. Maybe I should register catturd3 and point this out and see if I can get him to agree.
Evelyn Douek:
Yeah, that is what we are reduced to. Anything else from the adversarial threat report that stood out to you at all that's worth highlighting?
Alex Stamos:
Yeah, I mean, I think, like you said, there's this trend that we continue to see, and we've seen it in a bunch of SIO work: domestic is really the focus. Right? I think a number of governments have found that there's not a lot of ROI in manipulating other countries' populations. Russia and China are really the only countries still doing much of that, and probably the United States, although generally in more subtle ways than what we uncovered before. For most of these countries, the real value, the real ROI, is local.
Like you said, the amount of money being spent on advertising... A million dollars in advertising in Bolivia, that gets you a lot. Right? That's really interesting, because the CPMs there on those users are going to be quite low. So yeah, I do think it's interesting to see these countries spending all this money to manipulate the local populace.
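A quick back-of-the-envelope on what that kind of spend buys, assuming an illustrative CPM (cost per thousand impressions) of $0.50; that figure is our assumption for the sketch, not a number from Meta's report:

```latex
\text{impressions} = \frac{\text{spend}}{\text{CPM}} \times 1000
= \frac{\$1{,}100{,}000}{\$0.50} \times 1000 \approx 2.2 \times 10^{9}
```

That's roughly 2.2 billion impressions, which in a country of around 12 million people works out to something on the order of 180 impressions per person.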
Evelyn Douek:
Okay. So on the topic of transparency then: while Twitter threatens to turn off its API, although it hasn't done so yet, I don't believe, TikTok is providing a new one for academic researchers. It has announced that it will launch an API for academic researchers to study the platform, and a group of researchers at the Stanford Internet Observatory has released an analysis of it and its limitations.
One of the most significant, I thought, was that the For You page will still be opaque to researchers. Most of the data that's released will be based on searches that you can run on the platform, but it won't show you how TikTok's recommendations work, which is the key question that people have about TikTok. And it also won't have information on violative content that has been removed.
And then Joe Bak-Coleman at Tech Policy Press also pointed out a number of important things in the terms and conditions of being a researcher under this. They include some pretty wild things, like you can only keep data for 15 days, and that you then grant TikTok a worldwide, free, non-exclusive and perpetual license to the products that you produce, so that they can republish them.
So there are a lot of things to question once you look past the headline. Did you have any thoughts or takeaways about this?
Alex Stamos:
Yeah, so on the [inaudible 00:09:03] API, you brought up two classes of issues. On the technical side, we do have a bunch of recommendations. You can go to io.stanford.edu to read the post. Like you said, it doesn't cover the front page most people see, and that is going to be an issue.
Now, how you expose the For You page in an API is a tough one, but what you could at least do is have more statistics available about how content is showing up in For You, and then have the ability to see the top 50, top 100, top 1,000. So that would be more CrowdTangle-like features in this API instead of, like you're saying, just being able to stream searches.
On the legal side, I don't think the agreement is going to be signable by most researchers at this point, because of the idea that TikTok gets to republish anything that you create. That sounds like boilerplate to me. It is totally incompatible with any agreement that you sign with any of the non-open-access journals. And so academics are just not going to sign it; they'll say, "I can't publish in Science or Nature if I use this API." But I expect that TikTok doesn't actually care that much about that and that it will be removed.
The 15-day thing is probably due to privacy rules. A bunch of these API agreements say 15 days, 30 days. And most of that has to do with GDPR and other privacy laws that require data to be deleted when it is deleted upstream. The only way that the company can force you to comply with upstream deletion is to have a limit on how many days you hold all of your data.
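As a concrete illustration of the mechanism Alex describes, here is a minimal sketch of the kind of purge job a researcher might run to stay inside such a retention window. The local table name and schema are hypothetical; only the 15-day figure comes from the terms discussed above.

```python
from datetime import datetime, timedelta, timezone
import sqlite3

RETENTION_DAYS = 15  # retention window from the (assumed) API terms

def purge_expired_records(db_path: str) -> int:
    """Delete any collected records older than the retention window.

    Assumes a hypothetical local table `tiktok_data(fetched_at TEXT, ...)`
    storing an ISO-8601 UTC timestamp for when each record was fetched.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "DELETE FROM tiktok_data WHERE fetched_at < ?",
            (cutoff.isoformat(),),
        )
        conn.commit()
        return cur.rowcount  # number of purged rows
    finally:
        conn.close()
```

Run daily, a job like this guarantees nothing collected through the API outlives the window, which is the blunt instrument that keeps downstream copies consistent with upstream deletions.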
Evelyn Douek:
Okay. So do you have a prediction? Is this going to produce really useful research or is this something that you are still excited about despite these limitations or not so much?
Alex Stamos:
So, I mean, I'm excited that it's coming out. I think, again, they're going to have to change the agreement before anybody uses it. I see that as definitely possible. I hope that this is just an MVP and they keep iterating. But the fact that they're doing it is good, because thanks to a bunch of stuff at Twitter, Twitter is going the opposite way on transparency, and my fear was that that would have a serious signaling effect for TikTok and other more emergent platforms on transparency. So the fact that they're continuing on this is good. There do need to be significant changes, though, if you want this in any way to be seen as mitigating the risk that people see from [inaudible 00:11:18].
Evelyn Douek:
So then let's head over to our Twitter corner and do a speed run through that.
Okay. So in just the last two weeks since we spoke, some of the highlights are that Twitter hasn't actually turned off its API after threatening to, despite an immense amount of enthusiasm for its new API model, which is a fantastic euphemism. We also found out that Musk-
Alex Stamos:
I mean, a lot of people are talking about it, so that is-
Evelyn Douek:
Exactly. The tweets about the API are off the charts, so they must be loving it. This is Musk's theory of popularity.
Alex Stamos:
No such thing as bad PR. Yeah.
Evelyn Douek:
Yeah, it explains a lot really. And meanwhile, Musk boosted his own tweets times a thousand following the Super Bowl when his tweet, which he then deleted, had lower engagement than President Joe Biden's, which is just amazing. I hope he takes that one to therapy.
And then, in news that will shock everyone, I'm sure: we spoke previously about how the Taliban were buying verification now that it's paid on Twitter. Turns out that Russian government information operations people are buying it as well, according to a report in the Washington Post. And also Twitter is now charging for SMS two-factor authentication. I think those were the highlights from the last two weeks. It's been a fun one. Quick takes on that, Alex?
Alex Stamos:
Okay, so Musk. It's just like, you hear all this high-minded stuff, and then he just decides, "The president of the United States had more reach than I did," which is not shocking for pretty much anybody. And so him completely flooding everybody's timelines with his tweets, even if you weren't subscribed to him, is just perfect. Right? It's just the encapsulation of this whole thing: the guy's having a midlife crisis that includes the need for massive dopamine hits from people paying attention to him all the time. And you can explain almost all of his decisions from that, and not based upon some actual master plan. Yeah.
Evelyn Douek:
Yeah. I have this fantasy. There was this Supreme Court case last week, Gonzalez v. Google, about whether a platform loses its Section 230 immunity if it amplifies content on its platform. And I have this fantasy that the court... And there was all this discussion about where is the line between targeted recommendations or when does the platform really take responsibility? And I'm hoping that the court comes out and will be like, well, that's a mess. We're not touching all of that. But we do know that if you boost the CEO's tweet a thousand times more than any other user, then you lose Section 230 immunity for that content.
So that's the Elon Musk rule in Gonzalez. Yeah.
Alex Stamos:
Right. Yeah. So the other one that's really security focused is the charging for SMS two-factor. Right? This one's actually pretty complicated. I think people had a bunch of simplistic takes. SMS two-factor is not great from a security perspective. There's a history of people having their two-factor tokens stolen through their phone numbers being stolen. Now, that is extremely rare in a situation like Twitter's. That is mostly for stealing things like Bitcoin, and for really high-end corporate apps and stuff, where it makes sense for an attacker to spend a bunch of time trying to talk to an AT&T service rep to get your phone number swung over to a different phone.
And so the security issues with SMS two-factor for Twitter, I think, are pretty low. It costs a lot of money. Right? You have to pay Twilio a lot of money because effectively SMS is still this bizarre system where you have to do a kickback to whoever delivers it.
I saw somebody did a calculation, and the cost of delivering an SMS is something like twice the cost of getting data down from the Hubble Space Telescope-
Evelyn Douek:
Oh, my God.
Alex Stamos:
... on a per byte basis. Yes. Right. So it is cheaper for NASA to run the Deep Space Network than it is, apparently, for AT&T and everybody else to deliver SMS. So it is a significant cost to Twitter.
Now, moving off of SMS is the right long-term move. The problem is that they're not doing a slow migration here, and they're not, I think, appropriately prompting people to move onto different kinds of two-factor. They're just doing it immediately. And so you're going to have single-digit millions of accounts that are using SMS two-factor, and not a different two-factor, all of a sudden be unprotected. And so attackers who have done takeover attacks in the past, who have done credential stuffing and been stopped by SMS two-factor, if they are keeping good logs, the day that Twitter implements this, they should go rerun all of those credential stuffing attacks. And so you will see a bunch of Twitter account takeovers at that moment, and we should see a spike in spam and some other bad behavior.
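For a sense of what the defensive side of that looks like, here is a minimal sketch of credential-stuffing detection by rate-counting failed logins per source in a sliding window. The window and threshold are assumptions for illustration, not anything Twitter is known to run.

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300   # sliding window length (assumed)
THRESHOLD = 50         # failed logins per source before flagging (assumed)

# source (e.g., an IP address) -> timestamps of recent failed logins
failed_logins: dict[str, deque] = defaultdict(deque)

def record_failed_login(source_ip: str, now: float | None = None) -> bool:
    """Record a failed login; return True if this source now looks like
    a credential-stuffing run (many failures in a short window)."""
    now = time.time() if now is None else now
    q = failed_logins[source_ip]
    q.append(now)
    # Drop events that have aged out of the window.
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    return len(q) >= THRESHOLD
```

The point of Alex's warning is that without SMS two-factor as a backstop, this kind of rate-based detection becomes one of the few remaining defenses against replayed credential lists.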
Evelyn Douek:
Oh, great. Well, the question will be whether we notice the difference given the state of the platform at the moment.
Okay. In other news of platforms watching Twitter and going, hmm, that looks interesting, Facebook has started offering paid verification for Instagram and Facebook, $11.99 per month on the web and $14.99 on mobile, saying that you can pay and get the verified badge, some increased visibility on the platforms and prioritized customer support starting with everyone's favorite little Petri dish, Australia and New Zealand, to check out how that works down there.
I don't really have any interesting takes on this one. People are making a big fuss about it, going, "How can you do this, Meta, after seeing how spectacularly wrong it went at Twitter?" Do you have any thoughts?
Alex Stamos:
I actually think it's a good thing, because there are a couple things here. One, very few people were verified on Facebook. Right? Facebook's verification has always been very opaque and limited to folks of... They had a pretty high bar, a much higher bar than old Twitter, on who got verified. And it costs a lot of money to do real-name verification. Right? Somebody has to upload their ID. You have to do ID matching. Facebook has some cost advantages here, because Facebook has already spent hundreds of millions of dollars to purchase a company that does ML identification of IDs, which will look at a photo of a person and the photo of their ID and try to determine whether the ID is fake or not. But it does cost a decent amount of money. I think it's actually a good thing because it's going to greatly increase the number of people who are verified, and it's a much more transparent process.
The difference here between what Meta's doing and what Twitter did is there's no announcement that they're lowering the standard. So if they're still doing the same verification steps, then that's great. The real problem I had with Twitter charging for it is not that they're charging, which I think is okay. It is that they trained everybody as to what a blue check mark means, and then all of a sudden you could just pay to get the blue check mark, and it didn't mean anything anymore, and the trustworthiness doesn't exist there. And they allow people to go and create accounts as other people, and you don't get taken down until that gets noticed and reported, which causes huge problems.
And so to me, it's not the charging, it is the verification that doesn't actually have any verification behind it. So as long as Meta does not lower the standard of how much verification they're doing, I think it's actually a good thing.
Evelyn Douek:
Okay. And in further news of what Elon Musk does making Meta look good, with Musk being Zuckerberg's best friend these days: Meta has announced a program that Facebook and Instagram will be participating in, Take It Down, which is a platform run by NCMEC, the National Center for Missing and Exploited Children, where people can submit nude, partially nude or sexually explicit images or videos taken of them when they were under 18. Those images will be given a digital fingerprint, they'll be hashed, and then participating platforms can check uploads to their platforms against this database run by NCMEC and have matching images removed. This sounds like a great thing. I think it's excellent. PornHub is also participating, which is another fantastic thing to see. Any thoughts on that?
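For the mechanics: here is a minimal sketch of the hash-list matching that Take It Down enables. It uses an exact cryptographic hash (SHA-256) purely for illustration; production systems typically use perceptual hashes such as PhotoDNA or PDQ so that resized or re-encoded copies still match, and participating platforms only ever receive hashes, never the images themselves.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Compute a digital fingerprint of an image.

    SHA-256 is an illustrative stand-in; real systems use perceptual
    hashes (e.g., PDQ or PhotoDNA) that survive resizing/re-encoding
    and compare fingerprints within a distance threshold.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def load_hash_list(path: str) -> set[str]:
    """Load the fingerprint list distributed by the clearinghouse
    (NCMEC, in this case), one hex-encoded hash per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def should_block_upload(image_bytes: bytes, blocked: set[str]) -> bool:
    """Check an incoming upload against the shared hash list."""
    return fingerprint(image_bytes) in blocked
```

The design point is that the clearinghouse holds only fingerprints, so no platform has to store or transmit the underlying imagery to participate.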
Alex Stamos:
No, I think it's great. It builds on the work that started while I was there on NCII for adults, where adults could... They ran a number of tests and then really rolled out a big program where adults could upload images. The problem for children is that it is illegal for Meta to hold onto illegal images of children. NCMEC is the only organization that could do it. So I think this is a great arrangement: it looks like, I expect, Facebook did most of the technical stuff, but NCMEC is hosting it and providing the legal cover.
It's good to see PornHub. PornHub has a real NCII problem, as do all the other porn streaming sites. So I hope that this is provided to them in a way that they can do it. The challenge those companies are going to have is that doing this kind of matching on video is computationally way, way more expensive. A Facebook or Google can afford it, but for some of these porn sites, whether or not they'll be able to handle it computationally on the actual videos will be interesting.
So I think that's great. The interesting legal case here is that there has not been a change in the law around self-generated content. NCMEC cannot bless you as not trafficking in child pornography if you are creating your own images. Right? Now, it has become much less likely that kids get prosecuted in these situations where they have taken images themselves, say if they send it to one person but then that person forwards it on; prosecution of the original victim is much rarer. But technically it's still a crime. And I've heard some pretty bad stories about that, and I think it'll be interesting to see how NCMEC and Meta navigate trying to provide legal protection to kids who use this. Because in theory, if you have a hyper-conservative local DA who wants to punish anybody who ever sent a nude image, this could create an opening for that.
Evelyn Douek:
Wow. Okay. Well, that's terrifying. And I haven't seen any reporting about that specific aspect of it. So yeah, that will be interesting to see.
Alex Stamos:
Yeah, this is a question. We have an event coming up that NCMEC's general counsel is going to be at, and I'm definitely going to ask her this: how do you see it? I think part of the question is, does NCMEC do a report? Let's say you upload your image to this. Does it go into the same pipeline as the CyberTipline, where NCMEC now will send a report to local law enforcement? Because that's an interesting question. I expect that they're not doing that, but are they legally allowed not to do that? It's just a very complicated situation that I think will probably have to get fixed at the federal level itself.
Evelyn Douek:
Okay, great. Well, I'm sure they'll get right on that, Alex, and clean up that law in the next session of Congress. No worries.
Alex Stamos:
Yeah.
Evelyn Douek:
Okay. Let's switch over then to governments, or lawmakers, that do actually pass laws. So let's go over to Europe. News coming out of Europe this week: TikTok is now banned on European Commission devices, following the federal ban and bans in a bunch of states over here. I don't know if you have any further thoughts on that, Alex, except that there does seem to be a real snowball effect with these bans coming into place all over the place.
Alex Stamos:
Yeah. And TikTok is still going and briefing people on Project Texas. They seem to have not changed their path here. I think there's not a good indication of whether or not that's going to work with CFIUS, but even if it does work with CFIUS, it's clear that the EU is going to have their say here too.
Evelyn Douek:
And meanwhile, last week was the reporting deadline for companies to report how many European users they have. If they have over 45 million European users, then they qualify as what I like to call a V-LOP, but I hear many people now are calling a VLOP, a "very large online platform" under the EU Digital Services Act. And if you are a VLOP, or a V-LOP, you're going to have to comply with the most stringent requirements under the DSA, which include a lot of risk assessment reporting, independent auditing, access to data, things like that.
And so many of the obvious usual suspects fall in there. So you have your Facebook, your Instagram, your YouTube obviously. But also in that category is TikTok and Twitter and Pinterest. Not in that category are platforms like Snapchat, Roblox and Spotify, for example. So it'll be really interesting to see how these platforms go once they start having to comply in coming months.
Alex Stamos:
Yeah. I'm looking forward to us being able to have a section about the failures of DSA reporting called "Belly VLOPs."
Evelyn Douek:
That's right. Yeah, we will need to line up a sound effect for that one. But yeah, it'll be informative at least.
Meanwhile, just something to note, passing of the torch, Susan Wojcicki has announced that she's stepping down as YouTube CEO. Now I took this-
Alex Stamos:
No!
Evelyn Douek:
Yeah. I personally took this very hard because, as someone who has had a years-long campaign to get Susan Wojcicki to testify on The Hill, because she somehow managed to avoid all the CEO hearings that have happened over the past few years, this is a massive blow to that campaign, and it's not looking good at this stage. But those are my sophisticated thoughts on this transition. I don't know if you have anything else.
Alex Stamos:
No, I guess. Do we know who's taking over for her yet? Is it...?
Evelyn Douek:
Yeah. Neal Mohan. Yeah.
Alex Stamos:
Neal Mohan.
Evelyn Douek:
Yeah.
Alex Stamos:
So are you going to have a "Neal Mohan to The Hill" campaign? I mean, one, who would've predicted that Elon Musk would've beaten Susan to The Hill representing a V-LOP. Right?
Evelyn Douek:
Yes, that's right.
Alex Stamos:
A speech platform, which is almost certain to happen.
Evelyn Douek:
Yeah. I mean, it is only a matter of time. I mean, congratulations to her. She outlasted the might of my occasional tweets expressing angst about this.
Alex Stamos:
She has had one of the most impressive runs of anybody in this kind of position: she massively grew YouTube, and she competed very aggressively and effectively against Meta and other companies. While you and I have issues with YouTube from a content and trust and safety perspective, you can't deny her success here. And quitting while you're ahead, I think that's actually a great thing.
She has, I think, quite a large family. I've met some of them, and I think she's deciding that she's just going to enjoy her life, which I just like to see. I mean, I think there are way too many people who decide they're going to keep working past the first billion and ignore everything else. Like, say, you have 13 kids and ignore them and instead have a midlife crisis of buying companies, just to throw out a possible theoretical situation.
Evelyn Douek:
Well, I'm sure she will regret not having the pleasure of finding out how to comply with the DSA and missing out on that great adventure, but I wish her all the best.
And then finally, in the legal corner: we did a podcast a couple of months back with Professor Eugene Volokh, who challenged the New York Hateful Conduct Law, which was passed in the aftermath of the Buffalo massacre. New York State was requiring platforms to publish a hateful conduct policy that said what they were going to do about what the state defined as hateful conduct, and to have a complaints mechanism. So Eugene Volokh and co-plaintiffs, which included the platform Rumble, challenged this law as a violation of the First Amendment and compelled speech. In many ways, this law looked like the transparency mandates under the Texas and Florida laws, which are headed to the Supreme Court next term most likely.
So we got the ruling on the preliminary injunction on that last week, and the court has enjoined the law, saying that there's a substantial likelihood of success. And so that law will not come into effect while the court resolves the case on the merits.
So if you're interested in more detail on that, there's a podcast with Professor Volokh a couple of months back.
And then of course it was the Supreme Court's tech law Super Bowl this week. We don't need to discuss Gonzalez and Taamneh here anymore, but in case you don't know how podcasts work and haven't actually subscribed to this one, and so have somehow missed the two episodes that I did with Supreme Court correspondent Daphne Keller on those hearings, you can go back in the feed and listen to those. We had a good chat about the oral arguments that happened last week. I don't know if you listened to them live in Japan while you were on holiday, Alex.
Alex Stamos:
Somehow I missed it. I listened to your coverage, and that was good enough. Do you think, on a meta issue, would this be an argument that you would've liked video for? Is this a situation in which we should have C-SPAN in the courtroom [inaudible 00:27:04]?
Evelyn Douek:
I mean, in general, I would like to say yes, but I've got to say, the plaintiff's lawyer on Tuesday was so bad that I was already cowering under the desk and wanting to jump into a hole out of embarrassment for him; I think it would have been even worse if there were video. It was a shockingly bad performance, and I was already finding it somewhat hard to listen to. I'm not someone who deals well with someone else being embarrassed, or myself being embarrassed. So in this particular case, I was glad to be spared that.
But yes, in general, I can't understand why we wouldn't have video. For all the talk of transparency and the importance of transparency, we don't have video in the courtroom, and it's only recently, since the pandemic, that we've had these audio recordings. And Daphne was saying, if you go into the courtroom, you are told that you'll be locked up if you say anything unruly. So it's free speech for all in that context.
Anything else before we wrap up for the week?
Alex Stamos:
No. Looking forward to whatever crazy stuff happens in the next week. I'm glad to be back.
Evelyn Douek:
Excellent. Well, I hope you recover from your jet lag and look forward to chatting next week, but not seeing each other because that is not what happens in the audio format.
This has been your Moderated Content Weekly Update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't have been possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman.