Moderated Content

MC Weekly Update 1/23: A Dramatic Escalation in India v. Platforms

Episode Summary

Evelyn and Alex discuss what should be a massive story about India's orders to platforms to take down content related to a BBC documentary that is critical of Prime Minister Modi. They also discuss the UK's Online Safety Bill; ChatGPT content moderation; the Republican battle against Gmail's spam filters; Trump's pending return to mainstream platforms; TikTok "heating" videos; and an update from Courtroom Corner. Thanks to our new sponsor, Guttr.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn:

We have some huge stories for you this week. It's been a busy week, but first a word from our sponsors.

Alex:

Do you live in Northern California? Are you still draining out your backyard lake? Then why don't you consider Guttr, G-U-T-T-R. Guttr, started by two Stanford GSB graduates, sells only the finest in machine-learning-optimized storm drains. Guttr: we drain away your pain, as long as your pain is in the form of too much water. Go to Guttr.com/moderatedcontent for 20% off your first storm drain purchase.

Evelyn:

As a new owner of Guttr AI-enhanced drainage, I have to say it really has made such a difference to my daily experience.

Alex:

Excellent.

Evelyn:

Hello and welcome to Moderated Content's weekly news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. We have some huge stories for you this week. It's been a busy week, but first: I actually haven't seen this story get as much coverage as it should, because it is a huge story and it goes back to an old theme of this podcast, which is that India is by far the most important jurisdiction people should be watching in terms of the future of online speech. Over the weekend, the Indian government ordered YouTube and Twitter to take down videos and over 50 tweets about a BBC documentary that is critical of Prime Minister Narendra Modi. The series addresses the 2002 riots in the western Indian state of Gujarat, where Modi was chief minister at the time; nearly 800 Muslims and over 250 Hindus died in the riots. The documentary is highly critical of Modi's governance during that period and of what he did or didn't do to stop the violence.

This has not been subtle. India used its powers under the new 2021 IT Rules, which give the ministry the authority to take down posts it deems to threaten the unity, integrity, defense, security or sovereignty of India, friendly relations with foreign states, or public order. You can see how many extremely broad terms are in there; pick your favorite. And again, the ministry was not subtle about this: an advisor to the ministry was tweeting about it, saying very openly that the documentary was taken down because it is hostile propaganda and anti-India garbage that "lacks objectivity and reflects the BBC's colonial mindset." Both YouTube and Twitter have complied with the directions, so this is scary stuff. Alex, what's your headline reaction to this?

Alex:

Yeah, this is a huge deal. Governments have been trying to censor social media for as long as social media has existed, and we have a long history of governments sending orders for companies to take content down. This whole podcast is about that conflict, and about the fact that it's been going on for years but has really intensified over the last two or three years. Often when companies have complied, they've tried to comply in a way that is minimally impactful. The traditional way of doing this is to block content on an IP basis, only in the jurisdiction the order comes from, and to resist until you're absolutely forced to. I remember when I was at Facebook, we were facing a lot of issues with the Thai junta using the law to try to take down content that in their eyes insulted the king, but that was really just content calling for democracy.

The way Facebook handled it was to resist, resist, resist: to fight it in court until there was a final court order and we were on the verge of staff in Thailand being arrested, at which point the content was geoblocked in Thailand only. In the end, governments have sovereignty. They have power. They have the ability to send people with guns to arrest your employees. But what you can do is make it really hard for them, and I don't see any evidence of either YouTube or Twitter making it hard for India in this case. Now, some of the tweets are still up, to their credit, so it's not totally clear how Twitter is complying; Twitter might be doing geoblocking. YouTube has just completely pulled the video, and this is a big deal. It's a big deal because there's no cover here. Often governments try to manufacture some cover and it goes sideways on them.
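To make the geoblocking mechanics concrete, here is a minimal sketch of jurisdiction-scoped blocking of the kind Alex describes. The `lookup_country` stub, the block-list shape, and the content ID are all illustrative assumptions, not any platform's actual implementation.

```python
# Minimal sketch of jurisdiction-scoped geoblocking (illustrative only).
# A real system would use a GeoIP database for the country lookup and a
# legal/compliance workflow to populate the block list.

# Hypothetical content ID -> ISO country codes where a takedown order applies.
BLOCKED_IN = {
    "bbc-modi-documentary": {"IN"},
}

def lookup_country(ip_address: str) -> str:
    """Stub GeoIP lookup; a toy heuristic standing in for a real database."""
    return "IN" if ip_address.startswith("103.") else "US"

def can_serve(content_id: str, viewer_ip: str) -> bool:
    """Serve content unless a takedown order applies in the viewer's country."""
    blocked = BLOCKED_IN.get(content_id, set())
    return lookup_country(viewer_ip) not in blocked

# The same video stays visible outside the ordering jurisdiction:
print(can_serve("bbc-modi-documentary", "8.8.8.8"))      # True  (US viewer)
print(can_serve("bbc-modi-documentary", "103.27.8.44"))  # False (Indian viewer)
```

The point of scoping the block this narrowly is that the content remains available everywhere the order does not legally reach, which is the "minimally impactful" compliance Alex contrasts with a global takedown.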

Here they're just straight up saying: this thing is critical of our prime minister, we do not like it, we want it gone. And it's gone. I really want to see comment from YouTube, because I think this is a very cowardly act by Google especially. And it goes right to the exact challenge Musk is going to face with Twitter. India has currently banned sales of Teslas, and Elon has been trying really hard to open up the Indian market. That conversation with Modi is definitely going to be intertwined with the conversation of how well Twitter is complying with these kinds of censorship requests. Back to the Twitter Files: if you're concerned about government censorship, this is absolutely something you should be a hundred percent concerned about, because this is not subtle, and it's not jawboning. This is a straight-up legal order from a democracy to take down a BBC documentary.

This is not some kind of crazy person writing this, some stalker who's stalking Modi or doxing Modi. The BBC did a documentary on Modi and the government got it censored. To me this is a huge sea change. This is a big, big deal, and I think we're totally right to bring it up, and I hope it gets more discussion, because to me this is as big as NetzDG in Germany, where the Germans broke the seal on getting Facebook and other companies to do what they want via a hate speech law. India just being able to censor YouTube: that is a huge deal.

Evelyn:

Right. We've been forecasting the battle between India and the platforms for a long time, and this is it. This is the quintessential fact set you would want. This is a BBC documentary, as you said: highly researched, incredibly reputable, about matters of political and historical fact. This is as core to free speech and as important to democracy as it gets, the order is as unambiguous as it gets, and the compliance has been as unambiguous as it gets. This should be a huge story, and I hope there's lots of conversation about it. I hope, as you say, this is Musk's moment to talk about his free speech credentials and where he draws the line, and I also hope we see some comment from governments here: the UK government, the US government. This is not something that should just be left to the platforms by themselves. This is a geopolitical crisis, so it's something they should be talking about.

Alex:

Although, as I think we're about to discuss, the UK government doesn't have much of a leg to stand on here when it comes to internet censorship.

Evelyn:

Very true and we will definitely...

Alex:

That's what we call a soft pitch here in the industry, Evelyn.

Evelyn:

Yes, let's jump then into the Online Safety Bill that the UK is pushing through Parliament at the moment. We're not going to do a deep dive on it right now, because it has been moving so fast. There have been so many twists and turns: every time I have tried to get up to date on the bill, they cut a massive part out of it or add another massive part into it. But I do think it's something everyone should have on their radar, and there have been some big developments in the last week or so.

This is a bill that has been in the hopper for years, and it was unclear whether it would move forward under the new government; it's clear now that it is. In the last week, criminal liability for tech executives has been added to the bill: the threat of a two-year jail sentence if they persistently ignore enforcement notices from Ofcom, the media regulator, telling them they have breached their duty of care to children. Even more chillingly, one of the offenses added to the list of illegal content that platforms must proactively prevent, repeat, proactively prevent, from reaching users is video footage that shows people crossing the Channel in small boats in a positive light. The fear, says the culture secretary, is that posting positive videos of crossings could be aiding and abetting immigration offenses. I mean, this is the country of Orwell really doing him proud with this one.

Alex:

Yeah, lest you thought the Online Safety Bill was about the safety of children online: here we have, again, just straight-up political censorship. You have a Conservative government that does not like immigrants crossing the Channel. You can argue back and forth on that; the safety of migrants, how you set immigration policy, the relationship of the UK to Europe, these are all incredibly complicated issues that certainly need discussion. And then you just have Ofcom come in from the corner and say there can't be any YouTube videos that are at all positive about migrants on boats. It's completely, totally ridiculous.

I've been somewhat disappointed in a bunch of American advocates who have been really cheering on the Brits, because you continuously have this situation where people just want to stick it to the tech companies, and anything that's bad for big American tech companies must be good overall. They don't stop to think that maybe these countries, including democracies, including allies of the United States, have ulterior motives that are not completely pure, and here's a great example of that. Now that the door is open for the UK to censor arbitrary content, and they have the tools in place to do so, there's nothing that keeps them from just adding stuff, and adding stuff, and adding stuff to that list. This is something I think you and I should dive into; we probably should get a UK speech lawyer on.

It would be interesting to hear how much of this Parliament has to do and how much Ofcom can just decide on its own. Is this the kind of thing where, say there's a terrorist attack, Ofcom can just decide that anything that takes the terrorists' side in any way, or anything that talks about the government's failure to stop the attack, has to come down? That's where you get to Modi-level censorship, and the UK is definitely on the way.

Evelyn:

Yeah, we will definitely be following this much more closely as it goes forward. As I understand it, the bill is now going to go on a lengthy journey through the House of Lords, so this isn't over yet and we will keep track of it. But yes, Ofcom is going to be enormously empowered, and Ofcom feels like the perfect name for an agency with such power over speech.

Alex:

Talking about Orwell: Ofcom could definitely have been in 1984.

Evelyn:

It's great, it's catchy.

Alex:

The House of Lords, that famous body of libertarians and rabble-rousers. There's actually one Lord who I think is going to be fighting this, but he might be very, very lonely: Richard Allan, who I used to work with at Facebook, Lord Allan of Hallam. I hope he's going to be able to fight this a little bit.

Evelyn:

Oh, we have all of the ingredients for a great Hollywood movie at some point, though: Sir Richard standing up for online speech, alone in the House of Lords. Okay, in other breaking news: AI is not magic. There was a pretty good story in Time this week about OpenAI, the creator of ChatGPT, the AI text generator that has taken the world by storm; we've talked about it at length on this podcast before. The story dove into the human content moderators that OpenAI employs in Africa, and one of the reasons I think it's an interesting story to cover is that the reaction has been a little amazing to me. People are kind of surprised. ChatGPT has pleasantly surprised people by not being as racist or as terrible as it was often feared to be, and as many of these tools often end up being.

It turns out the reason is not that AI has developed a heart and a sense of morality in the last year or so, but that the company has been really proactive in investing in trust and safety from the start. That almost always means humans, because AI cannot yet do trust and safety work by itself, so of course ChatGPT relies on human moderators. Now, there are all sorts of trade-offs here that we're going to talk about, and I wish people talked about this more, because these conversations often happen in isolation from each other. On the one hand: AI is really terrible at content moderation, it doesn't understand context, we need more humans in the loop. On the other hand: wow, this is really horrific, scarring, psychologically damaging work, and we are asking people to stare at the worst of the internet all day, every day.

Now, another important part of this story is that these workers often work in terrible conditions. They're paid $2 an hour, and they aren't offered the psychological support they would need. Fixing that is really important low-hanging fruit, but the underlying trade-off between trust and safety work and the human moderators who have to do it is kind of intractable. Curious for your thoughts, Alex.

Alex:

OpenAI, like we talked about last week, has been, I think, the most proactive of these companies in trying to think about how their platforms can be abused and then coming up with protections. In the end, just as with content moderation, the people who might want to abuse machine learning models are incredibly creative, and we've seen that with ChatGPT, where you can say things like "pretend you're a racist" or "write a simulation of what people would say if they thought these things." OpenAI has been very quick to respond to those things, and now we know one of the reasons why: they have this content moderation center of people who are looking at all this stuff. I expect the model has the ability, when it's given something it doesn't detect as clearly out of bounds but that is somewhat sketchy, to put that in a queue, and then a human being goes and makes the decision for the AI of whether or not it's policy-violating.
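To make this concrete, here is a minimal sketch of the kind of threshold-based routing to a human review queue that Alex is speculating about. The classifier score, the thresholds, and the queue are all assumptions for illustration, not OpenAI's actual pipeline.

```python
# Illustrative sketch of routing borderline model inputs/outputs to a human
# review queue. Scores and thresholds are made up for the example.
from collections import deque

review_queue = deque()

def route(text: str, violation_score: float) -> str:
    """Auto-decide clear cases; queue the ambiguous middle band for humans."""
    if violation_score >= 0.9:   # confidently policy-violating: auto-block
        return "block"
    if violation_score <= 0.2:   # confidently benign: auto-allow
        return "allow"
    review_queue.append(text)    # sketchy but not clear-cut: a human decides
    return "pending_human_review"

# Human decisions on the queued items can then be fed back as labeled
# training data, which is where moderators like those in the Time story
# come into the picture.
```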

In the end, you're going to have to have a bunch of people look at both the requests and the output and then decide whether or not they're inside OpenAI's policies. This comes at an interesting time, because the other thing we've seen over the last couple of months is the creation of a culture war around AI ethics. It took a while for content moderation to reach the culture war stage, where somebody's position on bigger political issues determines whether they think content moderation should exist at all. For a long time people argued about content moderation, but the basic assumption was that it was something you had to do. We've really short-circuited that process on AI ethics.

You've had [inaudible 00:13:37], who's a prominent investor and used to work at Facebook, and a number of other people talking about how AI ethics is effectively unethical itself. They're arguing that if you hold back the development of AI and machine learning because of your concerns over racism or other kinds of impacts, you're doing something unethical. So we're entering this cycle much more quickly than we did when talking about speech moderation on large platforms.

Evelyn:

Content moderation will come to every technology. This will be a recurring cycle; my research agenda is secure. There is no escaping content moderation debates. It also comes at an interesting time because a lot of these contractors are pulling out, including the contractor in this case, Sama, in Africa. Time also reported last week that Sama is shutting down as Meta's third-party contractor and moving away from policing harmful content because it's so fraught, and it's now a defendant in a lawsuit over the way it treats its employees. Sama may get out of this work, but we are not getting out of it; this problem is not going away, so it's something to watch. Now, a quick update on a story we talked about a couple of weeks ago: the RNC's complaints about Google's spam filters. The Republican National Committee alleges that Google's spam filter is biased against it because more Republican emails go to spam.

The Federal Election Commission this week tossed out the RNC's complaint, saying that Google has credibly supported its claim that its spam filter is in place for commercial reasons and thus does not constitute a contribution within the meaning of the Federal Election Campaign Act; in other words, it's not an illegal in-kind contribution. It's clearly the right decision. This is a weird situation: the RNC is still suing Google over this, a lawsuit we've talked about before, and Google has set up a separate program for registering political emails that the RNC has not signed up for. So this is really culture war stuff. The FEC clearly came out the right way here, but I don't think we're going to see the end of these complaints.

Alex:

Yeah. I hope Google holds fast here. There has been discussion of them effectively settling, coming up with a deal where Democratic and Republican party operatives can get their spam delivered, so anybody who's ever given five bucks to a candidate gets notified every time somebody runs for dog catcher in Podunk County, Arizona and wants to raise money. I do hope Google continues to take a pro-consumer line here and stands up against what is effectively political blackmail by very powerful people who want to bypass spam filters and who have the ability to regulate Google.

Evelyn:

Speaking of spam from political candidates, Donald Trump has been in the news again this week over his social media accounts. Rolling Stone has reported that he and his campaign are laying the groundwork for a return to the major social media platforms. It's been peaceful and quiet over there recently, and one reason may have been an exclusivity term in Trump's deal with his platform, Truth Social, that required him to post content to Truth Social first before posting it to other platforms. That deal is up in June, and Rolling Stone reports that he has absolutely no interest in renewing it and is accepting ideas for the big return and his first tweet back on Twitter. Alex, any ideas for how he should make his glorious comeback?

Alex:

Yeah, I mean, the opportunity he has boggles the mind, and it's actually an interesting question what he's optimizing for here. Clearly he no longer sees Truth Social as a way of making a lot of money. A lot of the questions about Trump these days come down to: is he actually seriously running for president again, or is it just a big grift? Because what you would do in these situations is actually quite different depending on your motivation. In this case, I think the big question for him is whether he's going to be let back onto Facebook. Now we've created a deadline by which, if Facebook does not make a decision, they have effectively made a decision. It's also probably a good reason for them to punt until after June, because if he comes back and posts on Twitter before he's fully back on Facebook, that will give them a good indication of the kind of risk they're taking if they turn his account back on.

Evelyn:

Yeah, this is like the world's highest-stakes marshmallow test. If Trump can just hold on until June and not tweet, he can return to social media without being sued for breaching his exclusivity term, and his chances of getting reinstated on Meta might also be better, because he won't have tweeted something so obviously terrible that it would scare them off from making that decision.

Alex:

Yeah, if I think of somebody who would pass the marshmallow test, it would be Donald J. Trump.

Evelyn:

Exactly. I'm sure there is footage of him as a toddler just ignoring that marshmallow and waiting for the greater rewards in his future. Good story from Forbes about TikTok this week. Forbes reported that there's a secret button at TikTok that allows staff to handpick specific videos and supercharge their distribution. The practice is known internally as "heating," and it pushes videos out until they hit a certain number of views. One of the big differences between this and what happens on other platforms is that it's completely unlabeled. It doesn't say "promoted post" or "you're seeing this because X, Y, Z liked it," as you might see on many tweets these days, especially with the For You feed; the video just turns up more in your feed.

Now, one of the things TikTok has often advertised is how good its algorithm is at working out exactly what you want, and how sort of democratic it is: anyone can go viral. It turns out that, for at least some portion of what people are viewing, there is this handpicked way of pushing out videos. It is of course open to abuse. The big concern your mind jumps to is that the CCP could use this for political purposes. Forbes didn't have any evidence of that, but there were employees using it to heat their own or their spouse's accounts to try to get more views. So, happy anniversary honey, I got you 3 million views. This is not a good look.
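As a thought experiment, here is a toy sketch of how an unlabeled "heating" override might sit on top of an organic ranking score, pushing a handpicked video until it reaches a target view count. Forbes did not describe the mechanics at this level, so everything here, the IDs, the boost value, and the target logic, is an assumption.

```python
# Toy illustration of a "heating" override on top of organic ranking.
# Purely hypothetical mechanics; not TikTok's actual implementation.

heated_targets = {"video-123": 3_000_000}  # hypothetical video ID -> target views
view_counts = {"video-123": 0}

def ranked_score(video_id: str, organic_score: float) -> float:
    """Inflate a heated video's score until its view target is met.

    Nothing in the returned score marks the boost, mirroring the unlabeled
    nature of the practice Forbes reported.
    """
    target = heated_targets.get(video_id)
    if target is not None and view_counts.get(video_id, 0) < target:
        return organic_score + 100.0  # arbitrary large boost for the example
    return organic_score
```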

Alex:

No. This is another huge scoop for Emily Baker-White, who has some incredible sources inside TikTok. For those of you who missed it, she's the Forbes reporter that TikTok turned another secret capability against: their internal audit department tried to track her by IP address to figure out who her sources were, in case she was physically meeting with them on the same network. It turns out she was not discouraged by the idea that Chinese employees were stalking her using PII; she is right back at it. The other funny thing about TikTok is that with all this stuff, it just feels like they read all of the coverage of tech companies from 2017 to 2022, and every time there was a kind of overblown claim of an American tech company doing something obviously evil, instead of saying "oh, we should avoid that," they put it on the list of product features they should build, right?

People have always thought that Facebook or Google or whoever is arbitrarily uprating people's content because they like it, or they like the person, or the person paid them off. There have always been these crazy conspiracy theories: the person I hate on YouTube is somehow paying off Google to get their ranking up and mine down. And TikTok just decides, oh, that's a fantastic idea, let's build that as a product feature, and then apparently has no auditing of it, so people could just use it on their spouses' TikTok dances and all of a sudden make them go viral. Yeah, not a good look for TikTok, and like you said, what would make it much more serious would be a demonstrated political case in which this got used. Right now there's no evidence of that, but I also don't know how we'd be able to tell at this point.

Evelyn:

Yeah, it really has been a bad run for TikTok. Oh no, no, no, we would never use your personal data to do bad things, except to track reporters. And oh no, no, no, we would never use our algorithms to promote anything, except for when we really, really want to. So they're having a rough time, and they've been promising more transparency about all of this, so let's see if that comes to fruition. All right, on to our Courtroom Corner segment. I haven't obtained a sound effect for this yet. I don't know, do you want to pick a random one, Alex? Let's see what we've got.

All right. The breaking news this morning is that the Supreme Court has punted on the NetChoice cases. These are the cases we've talked about at length on this podcast, including in a longer episode with Daphne Keller and Genevieve Lakier. They are blockbuster cases arising out of Texas and Florida, where social media laws were passed that would place significant constraints on platforms' ability to do content moderation; the cases are First Amendment challenges to those laws. The TL;DR is that they could be the most important First Amendment cases in a generation, given the way they could transform how we think about free speech and the government's power to regulate companies like social media platforms. The Supreme Court is widely expected to take the cases because two courts of appeals reached different conclusions on the Texas and Florida laws.

This morning, the court invited the solicitor general of the Biden administration to file a brief with the views of the federal government. As Steve Vladeck, a professor at the University of Texas School of Law, put it on Twitter: sometimes the court calls for the views of the solicitor general because it's genuinely interested in what the DOJ has to say, and sometimes it does so just to hit the pause button on cases it knows it's going to grant cert in. This is the latter. This is the kid trying to stay up just a little bit longer before bedtime, asking questions to put off the inevitable decision.

This case will most likely be decided by the Supreme Court, but now it looks like we have a few months before they take it up. One reason could be that the court wants to get Gonzalez and Taamneh, the two Section 230 cases, out of the way first, but who knows why the court is doing this. It is, however, good news for Evelyn, who has been struggling to keep up with all of the legal news, and I know that's what all of our listeners were most concerned about. Evelyn is very happy about what this morning's decision means for her work-life balance.

Alex:

I guess this means we lose the opportunity for the most hilarious outcome possible, which would be for them to say both: you are strictly liable as a platform for all the speech on it, and you're required never to take it down. That would be both ridiculous and would effectively destroy the entire internet industry in the United States.

Evelyn:

Alex, that is absolutely still possible, ye of little faith. The court can definitely still do that; it will just do it over a slightly longer timeframe rather than making both decisions on the same day. Gonzalez v. Google could still come out saying platforms are liable for amplifying content, and the NetChoice cases could still come out six months later saying, oh, and also you have to keep up a whole bunch of content. That world is still in our future. Good luck, platform lawyers. In Gonzalez updates: nearly 50 amicus briefs were filed in the Supreme Court last week in support of Section 230. These are friend-of-the-court briefs where third parties can file arguments giving their point of view. Now, I did not spend my entire weekend reading all 50 amicus briefs, I apologize, listeners, but I did flick through some of them.

Some of my favorites were from Reddit moderators: one from the subreddit r/equestrian, which is focused on horse people, horse lovers, and fans of equestrian sports, and another, r/Halaku, focused on a specific field of computer science and a particular rock band. These are moderators who are very concerned about the effects the ruling might have on everyday internet users and everyday mods on Reddit, for whom recommendations are an important tool. Yelp's brief is basically: our entire business model depends on being able to make recommendations; if you do this, we're destroyed. Senator Ron Wyden and former Representative Chris Cox, who drafted Section 230, also filed a brief saying: we were there.

We know why Congress enacted Section 230, and targeted recommendations were absolutely supposed to be covered by the law. Meta also weighed in, saying: you know what, we use algorithms to help enforce against terrorism, not just to promote it, and amplifying the content you see is actually a really important function. There's this lovely passage where the brief talks about how countless people have met their spouses, tracked down lost relatives, secured new jobs, taken up new hobbies, donated to new causes, started businesses, found solace with others, and even established new religious or spiritual movements owing to services like Facebook. New religious and spiritual movements sometimes include QAnon, but good for them, putting this positive spin on it.

Alex:

The fact that Facebook is the home of QAnon might actually be a positive for this Supreme Court. It is interesting to see that a lot of these amici are extremely explicit in appealing to the conservative supermajority, specifically the arguments that if you do this, we won't be able to take down terrorist content, and if you give us this liability, we will effectively have to censor conservatives. They said everything but: if you like Glenn Beck, you cannot vote for this, because we are not going to be responsible for Glenn Beck's crazy rantings. These are definitely different briefs than you would have seen five years ago, right?

Evelyn:

Yeah. Arguing before the Supreme Court right now is very much a matter of strategy and knowing your audience, and in some of these cases it's actually tricky. If you think, as some people do, that you've already lost two conservatives, potentially Justices Thomas and Alito, then you still need to pick up some liberals, so you're having to walk a tightrope and play to both sides.

Alex:

Who are the swing votes on this? Is it Roberts and Gorsuch, do you think?

Evelyn:

I mean, one of-

Alex:

Maybe Kavanaugh?

Evelyn:

Yeah, one of the weird things about the politics around these issues right now is that it's totally unclear. We don't know, for example, what Justice Barrett thinks about any of this; we have absolutely no idea. That is positive in the sense that this may not split exactly down party lines, but also terrifying, because anything could happen in these cases. Finally, some quick follow-ups for our everything-is-content-moderation corner. We talked about Leo Messi's win not only of the World Cup title but also of the title of most-liked post on Instagram, and it turns out this was an instance of coordinated authentic behavior. It was not a totally natural, organic circumstance: a fan had encouraged others to like Messi's post and, crucially, in order to claim the title, to unlike the picture of the egg that had previously held the title of most-liked Instagram picture. Sad day for the egg, but a beautiful moment for Messi fans.

Alex:

Except Messi's fans are now claiming the egg is flopping, throwing itself on the field and saying it got fouled. So, yellow card for the egg on this one.

Evelyn:

Yeah, it's really got yolk on its face there. Thank you.

Alex:

We'll be here all week folks.

Evelyn:

Exactly. Lucky you. The final story is something totally unsurprising: the Taliban are spending $8 to get blue badges on their Twitter accounts now that the verified marks are open for purchase. Totally unsurprising, but then the Daily Beast reported that Twitter is apparently quietly removing the marks from Taliban representatives. I don't even know what to say. I don't know what blue check marks mean anymore. They mean some version of: we paid $8 and we're verified, but also Twitter doesn't dislike us enough, or isn't getting enough bad headlines, to remove our blue check mark.

Alex:

I think there's only one thing we could say about this.

Once again, Musk's theory here is that content moderation is bad until he has to do it, and he is recreating all of the decisions companies have made before, but doing so publicly and with a lot of vacillating back and forth. In this case, like you said, the only thing a blue check mark proves is that you have eight bucks. We have examples of spammers buying them, and you've had a bunch of fake accounts pretending to be very important people buying blue check marks. It has been exactly the disaster everybody told him it was going to be. In this case, they're actually acting. It's the Taliban.

I mean, I think an interesting question here is how the blue check mark interacts with the sanctioned individuals and organizations lists that the US government maintains. This has been a super controversial issue: can you provide free internet services to the Islamic Revolutionary Guard Corps, or to a sanctioned individual in Russia? You almost certainly can't take eight bucks and then give them something of value, so I wonder if part of what's feeding in here is the variety of sanctions the Taliban still find themselves under. Because there's actually money changing hands, that's one of the complicating factors.

Evelyn:

Yeah, we're having these massive Supreme Court arguments about whether slightly, unintentionally amplifying videos is a recommendation that falls outside Section 230, and meanwhile Twitter is just handing out blue check marks to verify Taliban leaders-

Alex:

For money.

Evelyn:

... yeah, for money.

Alex:

Right.

Evelyn:

Exactly.

Alex:

Please take your blood money, convert it into Bitcoin, and send it to this address. Hey, it's also kind of funny, because if CatTurd2 were a fan of the Taliban, there would already be a big back and forth with Musk saying "that's concerning, I'll look into it." But because it's a group that doesn't have a lot of supporters in the United States, that's not happening. I think this will get much more fascinating when you have the QAnon folks and such getting blue check marks under pseudonyms, because the kind of alt-right people Musk has been taking his cues from are going to defend them, even when the trust and safety team wants to take them down for being related to a lot of real-world harm off the platform.

Evelyn:

Excellent. That's a little teaser for something we will no doubt be discussing in future episodes. We joked last week about doing a weather segment in place of a sports segment to close out the episode, but now that the rain has passed, I don't know if it's really fair to subject everyone else around the country to a reminder that it is sunny, temperate, and blue skies here in the Bay Area.

Alex:

It is still beautiful in the Bay Area, but that's no reason not to buy from Guttr. Guttr: we drain away your pain. Guttr.

Evelyn:

So long as that pain is too much water.

Alex:

Guttr.com/moderatedcontent.

Evelyn:

This product does not promise to drain away any pain that is not related specifically to too much water on your roof. Okay, with that, this has been your episode of Moderated Content for the week. Actually, we have one more episode coming later this week with the trust and safety team and policy people from the platform Zoom, so look out for that in your feeds on Thursday. This show is available in all the usual places, including Apple Podcasts and Spotify, and show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst at the Stanford Internet Observatory, and it is produced by Brian Pelletier. Special thanks also to Alyssa Ashdown, Justin Fu and Rob Huffman. See you next week.