Moderated Content

MC Weekly Update 2/7: Requiem for the Bots

Episode Summary

Twitter announced it is shutting down free API access. Oh, but wait, it's okay: bots distributing "good content" will still be allowed. Alex and Evelyn discuss that, and: NYT reporting on Twitter's progress dealing with child sexual abuse material; India's new Grievance Appellate Committees; Pakistan's temporary block of Wikipedia; Meta's denials of bias in moderating content related to the Ukraine war; some terrible ideas from lawmakers; and the upcoming hearing about jawboning at Twitter that will almost certainly include jawboning.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments.

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Alex Stamos:

Do I sound okay? Yeah, you should be. It's in RØDECaster stereo.

Evelyn Douek:

Yeah, you sound...

Alex Stamos:

Like I normally sound?

Evelyn Douek:

Yeah, you sound like you... you will not be defeated. You will not be defeated by this sound deck. If you think you have a fix...

Alex Stamos:

I do not have a fix. Let's just do it.

Evelyn Douek:

All right. Next time.

Alex Stamos:

At least that works. As long as that works.

Evelyn Douek:

That's all we need. It is the crux of this show. This show rises and falls on that sound effect.

Welcome to Moderated Content's weekly news update from the world of trust and safety with myself, Evelyn Douek and Alex Stamos.

Alex, I thought we should start today with a moment of solemnity and reflection for all the bots, both past and present, that have made our experience of social media what it is. I would like to remember @AltTextReminder, which kindly told me off every time I forgot to add alt text to my pictures on Twitter, and @ThreadReaderApp, which made people's bloviating slightly easier to consume. And @McEscherBot, which gave me inspirational artwork in my feed. And finally, of course, @MagicRealismBot, whose recent hits include, "A pianist is murdered. The killer turns out to be the philosophy of Immanuel Kant." And, "A chimney whispers to a duchess, 'I wish I was a croissant.'" Thank you, bots, for your service. We will miss you.

Alex Stamos:

May their memory live on into eternity, especially the New York Times "first said" bot, without whom we would not know that on March 28th, 2019, the New York Times first used the word deadass.

Evelyn Douek:

It's a sad day, it's a sad day, but the show must go on, Alex, the show must go on, and we must bring our listeners the trust and safety update, which takes us to our Twitter corner. So the reason we are mourning the bots, the sad news about the bots, is that Musk announced this week that he's going to shut down free access to Twitter's API, and all the bots, or a lot of bots, started tweeting out that this was going to cause them to stop functioning. For the non-technical listeners, Alex, could you please explain what free API access is and why this would cause our favorite bots to go silent?

Alex Stamos:

Yeah. So an API is an application programming interface. In this case, it is a mechanism by which you can interact with Twitter with code you've written instead of using their web interface or their mobile app. It's the way that third-party apps have operated; those got cut off from API access a couple of weeks ago. It is the way that bots post, at least the official ones. It turns out this is not the only way that you can post onto Twitter: you can automate interaction with the undocumented APIs that power Twitter's mobile apps, or with the webpage. And so this will not get rid of all the bad bots, but good bots that are trying to follow the rules would use the API.

And it's the way that academic researchers such as ourselves have kept abreast of what is going on on Twitter and done our research on disinformation, child safety, self-harm, and all those kinds of other things. So APIs are a very important part of interacting with a platform like Twitter, and Musk has said he's getting rid of the free access, which means that for people who have built things for fun like bots, people who have made tools like TweetDelete that don't make them any money but are provided for free as a service to folks, there's really no economic model that works for them unless the pricing turns out to be effectively free.
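To make the "official" path concrete, here is a minimal sketch of what posting through Twitter's documented v2 API looks like from code. It assumes you have already registered a developer app and obtained OAuth 1.0a user-context credentials; the placeholder strings below are illustrative, not real keys.

```python
# Minimal sketch of an "official" bot post via Twitter's documented v2 API.
# Assumes a registered developer app and OAuth 1.0a user-context credentials;
# the placeholder strings are illustrative, not real values.
from requests_oauthlib import OAuth1Session

oauth = OAuth1Session(
    "CONSUMER_KEY",
    client_secret="CONSUMER_SECRET",
    resource_owner_key="ACCESS_TOKEN",
    resource_owner_secret="ACCESS_TOKEN_SECRET",
)

# POST /2/tweets is the documented endpoint for creating a tweet.
response = oauth.post(
    "https://api.twitter.com/2/tweets",
    json={"text": "A chimney whispers to a duchess..."},
)
response.raise_for_status()
print(response.json())
```

This is the path that gets cut off when free API access ends; scraping the web interface sidesteps it entirely, which is why the change hits rule-following bots and researchers hardest.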

Evelyn Douek:

The good news is that "bots providing 'good content' that is free will be allowed to continue to use the API," from our tech overlord, Sir Elon Musk, who responded to feedback over the weekend. He's always listening so carefully to what his people want, and he said that in order to make sure that good bots can continue, that access will still be enabled. I don't know who is going to judge what good content is. I fear, and I think we know, that it will be bestowed from on high. And Elon's conception of good content is not my conception of good content. But anyway, we'll wait to see what comes of that.

Alex Stamos:

Yeah, it's amazing. He came up with this innovation that all you have to do is say, "We're only going to allow the good things." I think if Mark Zuckerberg could go back in time and say, with Graph API v1.0, "We're only going to allow the good users of this API," that would've gotten rid of Cambridge Analytica and all of the problems that he dealt with there. Gosh, I just wish that we had Elon with us in the 2013 to 2014 timeframe. He could have fixed a lot of problems. Just add the word "good" in front of it.

Evelyn Douek:

He's such a visionary. It's so true. I can't believe no one thought to differentiate the good from the bad. Just, what have we been doing? This is not just the end of the bots, as sad as that is. This also has massive public interest and research implications. As you were saying, Alex, all of the academics and outside researchers use this, and the Coalition for Independent Technology Research released a group letter this morning, an open letter calling on Twitter to ensure that the APIs for studying public content on the platform remain easily accessible, and calling on regulators to also step up their game in response to this. It has been signed by dozens of organizations and over 200 individuals, just highlighting how important this is. This is the same Elon Musk who has repeatedly said that he wants Twitter to be much more transparent about what's going on on the platform and the decisions it's making. And this is exactly what we feared when this takeover first happened. I think we talked about how Twitter has really led in terms of transparency, and now it's falling behind, and it is really sad.

Alex Stamos:

It is, yeah. Twitter has been the leader here, and it is through their example that we have any kind of transparency from Facebook, and YouTube, and TikTok, and such. Once again, Mark Zuckerberg is the happiest man in the world thanks to Elon Musk; Musk has now given him a really strong argument to close CrowdTangle. And if a couple of companies decide to follow Twitter here, we will end up in a situation where only bad guys will be able to get access. Anybody who wants to follow the rules, anybody who wants to follow the terms of service, anybody who has to deal with IRBs and legal departments at universities will be at a huge disadvantage, versus the bad guys who will still scrape with Selenium headless browsers to grab all of this data to run bots, to run influence campaigns and stuff. So unfortunately, getting rid of the API does very little to get rid of the bad behavior that he complains about and does a lot to get rid of the people who have helped fight against that bad behavior.

Evelyn Douek:

In other Twitter news this morning, the New York Times had a story, a good story on an investigation that it's been doing into Twitter's handling of child sexual abuse material or CSAM since Musk's takeover. Alex, you're quoted in the article, so can you give us a brief rundown on what they found?

Alex Stamos:

Yeah, so what Michael Keller and Kate Conger did was analyze the claims from Twitter that Twitter didn't used to care about CSAM, but now all of a sudden they do under Musk. Musk has some not very well credentialed child safety advocates, who haven't actually done any work in this space, who have endorsed his plan and such, but he had never really gotten the official organizations that have done this work to endorse it. And so the New York Times both did their own testing and talked to those folks. In the Times testing, what they were able to do is they searched for certain hashtags that they thought might be used by pedophiles to trade content, or to sell content, on Twitter. And they did so in a headless browser, where they took the results and then ran them through PhotoDNA, via the PhotoDNA service provided by Microsoft.

And then they worked with the Canadian Centre for Child Protection to make sure and to double-check what they did. So this way the Times reporters did not have to look at any of this stuff. They reported it to the Canadian Centre, which under Canadian law is allowed to go classify it and such, and then to verify their findings. And what they found is a bunch of content, including one video of a prepubescent boy being sexually assaulted. This is what we'd call A1 in Tech Coalition terminology, in the classification standard for child sexual abuse material. It is the worst kind, and it had been viewed 120,000 times and had been up for quite a while on Twitter. And what was surprising to the Times and to myself is that this is something that hit on PhotoDNA.

So Twitter is part of the Technology Coalition, which is the group of companies that work together to stop child sexual abuse online. They have access to the exact same hash banks provided by NCMEC and the Tech Coalition. And so there really isn't a good reason for content to be up on Twitter that can be caught by this Microsoft API. This is not new content. The real challenge in CSAM is the new stuff, and that's where, if you're Facebook or Google, you can have a bunch of people working on AI; Twitter, I can understand, is not so great a place to build ML and AI in that space. But for known stuff that's already in the hash banks, it's really not acceptable for it to be up there. So I think that's a real regression, and that's something Twitter needs to investigate: how is stuff that's in the hash banks ending up back up there?
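The "hash bank" check being described is conceptually simple, which is part of why a regression here is surprising. PhotoDNA itself is a proprietary perceptual hash exposed through Microsoft's service, so as a stand-in, here is a rough sketch of the general lookup pattern using an ordinary cryptographic hash; the file path and function names are made up for illustration.

```python
# Rough sketch of known-content matching against a hash bank.
# PhotoDNA is a proprietary perceptual hash (robust to resizing/re-encoding)
# served via Microsoft's API; SHA-256 here is only a stand-in to show the
# lookup pattern, and the path/function names are illustrative.
import hashlib

def load_hash_bank(path: str) -> set[str]:
    """Load one hex digest per line into a set for O(1) membership checks."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_known_content(image_bytes: bytes, hash_bank: set[str]) -> bool:
    """Return True if the uploaded image's digest appears in the bank."""
    return hashlib.sha256(image_bytes).hexdigest() in hash_bank

# At upload time, a platform would hash the file and block/report on a match.
```

The point of the sketch is that matching content already in the shared banks is a lookup, not a hard machine learning problem, which is why previously known material resurfacing suggests the pipeline itself broke.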

The other things that they had were less quantitative: they spoke to both Thorn and NCMEC. Thorn is a nonprofit; NCMEC is a nonprofit that is chartered by the US government and sponsored by the US government to do this kind of work. Twitter used to help pay Thorn, as other tech companies have, for their child sexual abuse material detection technology. They no longer pay Thorn for that. They no longer participate in that. And then NCMEC effectively said that they've gotten way fewer reports from Twitter, and the numbers of reports they have gotten do not match up with Twitter's public claims.

So Musk has made broad public claims about the number of accounts taken down, but those do not show up in the NCMEC reports, which, if Musk's claims are true, is a violation of the law, because as an electronic service provider, Twitter has an absolute legal obligation to report all that stuff to NCMEC. So yeah, just backing up what a number of people have been talking about in the trust and safety world, which is that Twitter nuked a big chunk of their child safety team. The engineers and investigators are gone, and this is what's going to happen. Systems are going to break. This might just be another red light on a microservice status board inside of Twitter, but the impact of this is, instead of recommendations not working that great and such, you end up with child sexual abuse material being found on the site.

Evelyn Douek:

Yeah, the story was pretty shocking, in how this was the lowest-hanging fruit in terms of being a responsible platform owner, and the stuff that they found was being promoted by the recommendation algorithm. The Canadian organization was reporting this stuff and having to report it multiple times in order for CSAM to be taken down. And "just astonishing amounts of it" is what they're quoted as saying. And the Australian regulator also has a cameo, saying that she had been unable to communicate with local representatives of the company because all of the agency's contacts in Australia had quit or been fired since Musk took over. And this is some of the best technology there is in terms of identifying infringing content, but you still need people to do some of the stuff that results in action being taken.

So like you said, one of the big flashing red lights on the platform. Meanwhile, Ella Irwin, the new head of trust and safety, is developing into a big part of this story. Bloomberg had a good profile of her this week about how basically she's Musk's right-hand man on a lot of this stuff. She had a tweet thread this week explaining some of Twitter's new approaches to suspending accounts, I guess because they were getting flak for suspending people. Wait, what's going on here? I thought this was a free speech platform. And responding in part as well, obviously, to reports around the BBC documentary being taken down. Hilariously, the thread is extremely poorly threaded and broken, which is just absolutely perfect.

But she talks about, "How we've had to suspend accounts, remove documentaries when the proper reporting process was followed. People often disagree with our actions and don't have the full context, but we still have to take action. We do allow satire and we try to reject legal demands for removal of satirical accounts, or content. Sometimes we are successful, not always, not everyone has a sense of humor." So this is showing exactly how stiff their spine is in terms of resisting legal demands for content to be removed.

Alex Stamos:

She really gave it to Narendra Modi there by implying he doesn't have a sense of humor.

Evelyn Douek:

Yeah, exactly.

Alex Stamos:

There's a lot of tears in Delhi tonight.

Evelyn Douek:

That's right. One of the things that was interesting in the thread, I thought if it comes to fruition, is she says, "We don't suspend users for posting reported content unless it's clear that user knew the content was illegal. We do ask for it to be removed before the account use is resumed. Soon we'll display the specific removal reason publicly," which actually would be extremely interesting and extremely valuable. If we remember some of the stuff that went down with the BBC documentary was that we didn't actually know why some of this content was removed, whether it was due to copyright claims from the BBC, or due to demands from the Indian government. So if they actually come through on that, that could actually be a meaningful step forward in terms of transparency. But I won't be holding my breath.

Alex Stamos:

Yeah, I think that'd be huge. The disabled codes, as we call them in the industry, of why was something disabled, why was it taken down? I think that's a fantastic thing. Boy, would that be a wonderful thing to be able to access via an API, so you could tell whether or not a platform is being biased in decisions, or something like that. It's less useful when it has to be seen by a human being page by page. But despite the fact that Musk called me a propagandist for proposing that exact idea months ago, maybe I've actually broken through, so I'm going to take credit for this one. Maybe "propagandist," for him, someone who runs a propaganda platform, is actually a compliment. And what he was saying is, "You run a propaganda platform and that means I'm going to take your advice seriously and send it to our head of trust and safety." So that's just my headcanon on this one.

Evelyn Douek:

Yeah. Well, I don't know, Alex, haven't you been following? Propaganda is so extremely successful. After you'd said that, as a propagandist, Musk had no choice; he just blindly found himself following that advice because it was so compelling.

Alex Stamos:

Yes. I think it's exactly what Josh Tucker said, right?

Evelyn Douek:

Exactly, in our episode a couple of weeks ago. Okay, speaking of questionable legal orders from governments: India has set up its first government-headed Grievance Appellate Committees. We talked about this a couple of weeks ago. They're bodies that are set up under an amendment to the new information technology rules, which effectively allow users who are not satisfied with a platform's decision on whether to remove or moderate content to file an appeal to the government, and the grounds for removal can be extremely broad. We've talked about this before as well, that India requires platforms to remove content that threatens the unity, integrity, defense, security or sovereignty of India, or friendly relations with foreign states, or public order. So we now have three of these Grievance Appellate Committees. What are we going to be calling them? GACs? GACs, I don't know. We'll come up with an industry consensus on that one.

And the chairpeople, or chairmen, actually, is accurate, are government officers, and the members are retired government officials and senior executives. And so we've got a retired police service officer, a retired Navy Commodore, and one interesting one, a former traffic service officer of the Indian Railways, who I'm sure was absolutely chosen for their social media expertise and not for any previous or current ties to the BJP in any way. So this is basically, to put it in some sort of familiar terms, like the Meta Oversight Board, except run by the government. It's about as Orwellian, or as much of an affront to free speech, as you can get, really. It'll be curious to see how they handle this, whether they can handle the influx of appeals that I'm sure they'll get, and whether there's going to be any transparency at all around what happens and what platforms do in response. So something to watch.

Alex Stamos:

Yeah, obviously the ability for this tool to be used to manipulate these platforms is a huge deal, but there's always these practical issues. People have discussed having these backup councils. But as I've said a couple of times, I'm pretty sure Facebook makes more content moderation decisions in an hour than the US Supreme Court has made in its entire history since 1789, probably by an order of magnitude or two. And the idea that you can have real judicial proceedings on individual tweets or whatnot is a real problem. I just don't see practically how something like this could operate. So it will be really interesting to see not just what kind of manipulation the government tries to push through here, but just practically how they think something like this could operate.

Evelyn Douek:

Yeah, I actually was doing these numbers this morning for the podcast that I'm releasing with the Knight First Amendment Institute at Columbia University on platforms and the First Amendment. And in a single minute in the third quarter of 2022, Facebook took down 23,339 pieces of content and YouTube took down 5,653 channels, comments and videos. So yes, I'd say orders of magnitude.

Alex Stamos:

And so how many Supreme Court decisions are there? Is it a thousand, maybe?

Evelyn Douek:

It's 170, I think. Ah, what is it? 275? It's in the hundreds. In the low hundreds.

Alex Stamos:

Okay. So we're down into seconds. The entire history of the Supreme Court represents-

Evelyn Douek:

First Amendment decisions, sorry. To be clear, that's First Amendment speech decisions. Yeah, you probably can... [inaudible 00:17:01] GPT could probably tell us in an instant, inaccurately, how many Supreme Court decisions-
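For scale, a quick back-of-envelope check of that comparison, using the per-minute figure Evelyn cites; the decision count is a rounded assumption in the low hundreds.

```python
# Back-of-envelope: how long does Facebook's moderation pipeline take to make
# as many removal decisions as the Supreme Court's entire body of First
# Amendment speech decisions? The per-minute figure is from the conversation
# above; the 300 is a rounded assumption.
facebook_removals_per_minute = 23_339   # Q3 2022 figure cited in the episode
scotus_speech_decisions = 300           # "low hundreds", rounded up

seconds = scotus_speech_decisions / facebook_removals_per_minute * 60
print(f"~{seconds:.1f} seconds")        # roughly 0.8 seconds
```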

Alex Stamos:

It would give us an answer. It'd be very confident. It'd write an answer like, "Really? [inaudible 00:17:09] and I." And we'd be like, "That sounds wrong, but the computer says it."

Evelyn Douek:

And no dispute, no dispute. Definitely a much harder task. We'll see how the traffic service officer enjoys being a glorified content moderator in his new job. Just over the border, Wikipedia was blocked this week in Pakistan for blasphemous content. Actually, just before we started recording, it was unblocked, after having been blocked for three days, after intervention by the Prime Minister, who directed the unblocking order, saying that censoring the entirety of Wikipedia is not necessarily a suitable measure to restrict access to some objectionable or sacrilegious matter, but he has constituted a cabinet committee to explore and recommend alternative technical measures. As far as I could tell, I couldn't find any exact citations to the content that Pakistan was trying to get Wikipedia to take down. The Wikimedia Foundation released a response at the time urging the government to rescind its decision, saying that it receives more than 50 million page views per month in Pakistan, just showing the sort of damage there would be to its citizens from taking this kind of blunt measure. But this is what's happening; these orders are real.

Alex Stamos:

And Pakistan has been a constant problem. Blasphemy laws there are so incredibly broad that they're used both for actual religious blasphemy, but also, it seems, for political purposes too.

Evelyn Douek:

Yeah, shocked. And in Meta news this week, Meta published a blog post headed "Why Claims of Bias in Our Content Moderation Review Processes Are Wrong," refuting "a number of recent public reports that have claimed that our content moderation related to the war in Ukraine is biased." This blog post is raising a lot of questions already answered by this blog post. I hadn't actually heard the allegations of bias until Meta published its blog post, but yeah, what's your take on this one?

Alex Stamos:

So I am not sure what the allegations are either. So obviously there's something going on. I expect it is mostly in the Ukrainian or Russian language media, and that's why we've missed it. I will say though, there is a history here during the Russian invasion of Ukraine in 2014, which is before my time at Facebook, but I saw the output of some of the investigations on this. There were a bunch of decisions that were being made by individual content moderators that ended up being biased. And this comes down to just a real practical issue for the companies that if you hire people that speak these languages, they almost certainly have a dog in the fight. So if you need to build a content moderation team who speaks Russian and Ukrainian, those are going to be Russians and Ukrainians. Or they're going to be Ukrainian partisans of both sides who speak both Russian and Ukrainian.

And inside of Facebook in 2014, there were a bunch of moderators who were overturning each other's decisions, who were making really broad decisions and such. And a bunch of oversight had to be put in place, with people who were a little more neutral and had the ability to follow the guidelines, and a bunch of people were fired and such. So I would not be shocked if there are individual decisions that turn out to be somewhat biased in one way or another, because in the end, human beings are making these decisions. And unless you do really, really aggressive oversight in those languages when there's a war like this, then you're going to end up with individuals putting their thumb on the scale.

Evelyn Douek:

And in the blog post Meta says, "Our regular audits of sites that review Ukrainian and Russian language content, as well as other languages across Central and Eastern Europe have shown consistently high levels of accuracy and are in line with our results across other review sites and other languages." To which I say, that's fantastic. Why don't you release the data and show us the results? I actually think that releasing audit results across different languages would be an incredibly valuable thing, because one of the-

Alex Stamos:

Extremely valuable.

Evelyn Douek:

Extremely valuable. One of the biggest problems that we have seen with these platforms is that they are absentee landlords in large parts of the world; they're in some of the most populous regions, with many different languages and dialects, and they don't do proper content moderation in languages other than English. And so if they have data disproving that, or showing that they've gotten better, it would be wonderful to see those results. And I think that would be a great thing to see from a lot of other platforms as well.

Alex Stamos:

In the end, probably better for them. This is an exact example of where transparency is in the benefit of the company. Most people don't understand that there's these massive quality assurance teams within the content moderation teams at these companies that do multiple levels of audits and investigations and all that kind of stuff. And so just demonstrating, "Hey, we know that this is a problem and we care about it and we're doing this work," I think is a better place than the kind of assumptions people will automatically make.

Evelyn Douek:

This week in our "TikTok, and don't forget about the app stores as gatekeepers" section, the escalating efforts to ban TikTok have now turned to the app stores, with Apple's App Store and Google Play Store coming into focus. Senator Michael Bennet, from Colorado, was calling on Apple and Google to bar the Chinese-owned platform, TikTok, from their app stores. Honestly, I'm surprised that it took so long for someone to remember the app stores. Remember when Parler was booted off? I'm surprised that the Republicans had forgotten about that and that it's this Democrat realizing that it could be a point of leverage. And we've talked about this on multiple levels before, that we don't think an absolute ban is constitutional, or the right way to go, and we don't think app stores are a great place to be putting that kind of pressure. Honestly, I was just surprised that it took this long, but here it is.

Alex Stamos:

And I'm also surprised it's Michael Bennet; he's actually been a pretty reasonable voice on this kind of stuff. You expect this from Tom [inaudible 00:22:25], but you don't expect it from him. And just to be on the record, I think it's totally the wrong thing. You and I have talked about the legitimate concerns people have about TikTok. We've talked about the limitations of Project Texas in ameliorating those concerns for American citizens, but we don't fight fire with fire here. This is the mechanism the People's Republic of China uses: they tell Apple, "Pull WhatsApp out of the store." They tell Apple, "Pull the New York Times out of the App Store." We should not turn around the same way and utilize the oligopoly these two companies have for censorship. It's just a totally inappropriate thing for the United States to do, and even to consider. And I think it's an inappropriate thing for a US Senator to [inaudible 00:23:02].

Evelyn Douek:

In other inappropriate things for US congresspeople to bring up, and authoritarian-esque policy floated by lawmakers: Republican representative Chris Stewart introduced the Social Media Child Protection Act into Congress this week, which would make it unlawful for social media platforms to provide access to children under the age of 16. The First Amendment says no on this one, unfortunately, Representative Stewart. We've been through this before. It turns out children actually do have First Amendment rights and free speech rights as well. And we went through this with video games, when California tried to ban children from buying violent video games, and the Supreme Court said no, that minors are entitled to a significant measure of First Amendment protection as well.

There's no sign that this is anywhere near passing, but it's worth pointing out just the dangerous, escalating path that we're on in the rhetoric here around TikTok and social media more generally. This is not just about TikTok, this is about social media, and absolutely there are legitimate concerns about children, young children, being on social media, but complete bans like this are not only almost certainly unconstitutional, but also not the right way to go about it.

Alex Stamos:

Yeah, it is shocking how quickly US senators and members of Congress have forgotten their... I'm pretty sure they just took an oath to the US Constitution like weeks ago. It should still be burning in their palm, that feeling of having their hand in the Bible and then swearing to protect the US Constitution and how quickly they forget that in America we don't do things like this. So yeah, it's shocking.

Evelyn Douek:

You mean they don't wake up in the morning and read it through once over breakfast? Is that not how Americans are about their constitution?

Alex Stamos:

I guess all these senators that pull the pocket constitution out of their pocket to wave it around during hearings aren't actually reading that. Yeah, yeah. Or definitely the case law that is attached to it.

Evelyn Douek:

It's actually a notebook with blank pages in it. It looks good. It looks good to wave around. Speaking of performative congressional work, the hearings, let the hearings begin. On Wednesday this week, the House Committee on Oversight and Accountability will hold a full committee hearing titled "Protecting Speech from Government Interference and Social Media Bias, Part 1: Twitter's Role in Suppressing the Biden Laptop Story." The "Part 1" is especially groan-inducing here, and a good indicator of what's to come this period. The witnesses at this hearing: Vijaya Gadde, the former chief legal officer of Twitter, or in some circles known as the Chief Censorship Officer, I believe; James Baker, the former Deputy General Counsel of Twitter; and Yoel Roth, who we've talked about many times on this podcast as well, the former global head of trust and safety at Twitter, including in the early Musk days. I don't even know what to say about this, Alex. A drinking game or a bingo game to get us through this one? Will you be putting time aside to watch it? Is anything productive going to come out of this?

Alex Stamos:

Well, I'll definitely be watching it. I think it's important for us to see the direction this is going. And we have both talked about how there definitely should be limits to government jawboning, but in the end, Twitter made a decision, I think a First Amendment protected decision, as to what content it was going to carry and what counter-speech it was going to post on content that it disagreed with. And members of the House of Representatives are part of the government, I think they forget this. For them to use taxpayer dollars to subpoena these people and to effectively try to punish them by yelling at them on television for making a First Amendment protected decision, I think it's totally inappropriate.

If they're going to bring them up there and only ask about what interaction they had with the government, that's fine. If they're going to ask about First Amendment protected speech decisions, I think that's inappropriate. And I think, I hope, that they say that. I hope Vijaya, especially as a lawyer, points out that what they're doing here is exactly the kind of jawboning that has been criticized by conservatives in the past, and that Congress is in no way immune from the strictures of the Constitution and their responsibility, again, to protect the Constitution, including from themselves.

Evelyn Douek:

We should count how many times the committee jawbones in its hearing about how bad jawboning is. I'm guessing double digits is my prediction on this one.

Alex Stamos:

Well, again, if you're going to have a hearing on what did the FBI say to Twitter, I think that's a totally appropriate thing. It's an appropriate thing for the Judiciary Committee, which is to say, there's already a committee that oversees the FBI, as opposed to, I mean, the special committee on Deep State groomer-speech Silicon Valley people. But if they want to ask those questions, that's great, but then the questions you ask should be about the government interaction. It shouldn't be trying to attack them and trying to embarrass them, or trying to draw... What these members of Congress know is that by pulling them up there, there's going to be clips of them on Fox News, on Newsmax. There's going to be clips of them on Glenn Beck, and [inaudible 00:27:41], and on Substacks, and on Telegram channels, and on Truth Social. And there's going to be more death threats to these people. There's going to be more attacks against them. There's going to be more personal punishment of them. And that is part of the goal here: that personal destruction of individuals where they disagree, again, with their First Amendment protected decision.

Evelyn Douek:

Congressional oversight over some of the most important technology companies and speech platforms in history is important and necessary and good. But it's just unfortunate that, from the history of watching these hearings, this skepticism, this sort of dread that we feel about this hearing, is not coming from nowhere. It's coming from the experience of having watched many an unproductive hearing over the last few years. Anything else for our listeners before we get our bingo cards ready for that one on Wednesday?

Alex Stamos:

Well, you plugged another podcast, so I feel I should be able to too. This week I'm on a podcast called Dune Pod, D-U-N-E. So I don't know how to explain this. Jason Goldman was in the White House. He was President Obama's tech advisor. He has a million Twitter followers, and he's a big fan of Frank Herbert's Dune series and started a podcast that was all about the Dune movie and the books and stuff. And now they just do '80s and '90s pop culture, and he and his co-host H and I recorded an episode about Hackers, the seminal 1995 movie that, as Jason said, "changed a generation."

And anyway, it's a horrible movie, but it's a funny podcast. We talk for a couple of hours and rip the whole thing apart and talk about Jonny Lee Miller's accent and young Angelina Jolie and such. So if you're looking for lighter fare than what we cover here on Moderated Content, because, I mean, we've talked about the Constitution and jawboning and a lot of child sexual abuse material. So if you're looking for something that is a little more fun than that, then Dune Pod is a good listen all the time. And you can hear a little bit more of me with Jason and H this week.

Evelyn Douek:

That's right, after you've eaten your vegetables by listening to this highly educational but probably depressing content, you can go and hear Alex's fun side, which we never see here on Moderated Content. And that has been your episode for this week. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst extraordinaire at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks to Alyssa Ashdown, Justin Fu, and Rob Harton. See you next week.