Moderated Content

News Update 2/16: The Boy Who Cried Deepfake?

Episode Summary

Alex and Evelyn talk about the latest news cycle—is the long-anticipated Deepfakeapocalypse finally here? They look at what else will be different this election year, including changes in platform coordination and transparency. And an update from the legal corner.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Let’s Get Meta

X/Twitter Corner

In Full Transparency

Legal Corner

Join the conversation and connect with Evelyn and Alex on your favorite social media platform that doesn’t start with “X.”

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn: Talking about Taylor Swift is even scarier than talking about politics, given the chance of offending people.

Alex: I have no opinions except positive ones about Taylor Swift. Please do not send mail.

Evelyn: That's right. We have enough enemies. We do not need to make the most powerful enemy of all with Taylor Swift fans. Very happy for her and her fairytale story.

Alex: Yes, I'm sure everything will work out perfectly and she will not do a platinum double album on the coming break-up.

Evelyn: See, but that's too risky, Alex, you're getting very close.

Alex: Those are some great YouTube and TikTok videos.

Evelyn: Yeah.

Alex: Oh God, I did it.

Evelyn: That sounds suspiciously like you have negative thoughts.

Alex: Just because I'm looking forward to 'You Fumbled My Heart' as her hit single.

Evelyn: It's poetic.

Alex: Taylor, give me a call. I'll help you, I'll help you do some songwriting.

Evelyn: That's so generous of you, Alex. She definitely loves these patronizing offers for men to help her professionally. It's great. That's going to work out very well for you.

Alex: Thanks, appreciate that. Thank you.

Evelyn: Welcome to Moderated Content's stochastically released, slightly random, and not at all comprehensive news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. This is our first news update episode in a while actually, Alex, I don't think we've done one since pre-holidays.

Alex: No, every episode has been a very special episode of Moderated Content.

Evelyn: That's right.

Alex: Yes, it's been...

Evelyn: I mean, good news is there hasn't been much going on in this space.

Alex: Yeah, it's been a really quiet January and half February.

Evelyn: Exactly. So thank goodness there's finally some content moderation and trust and safety news for us to talk about because gosh, it's been hard to find things lately, but hopefully we'll have more of a regular cadence from now on. The place I want to start, I mean, the topic that has been dominating conversations lately over the past few months really has been like deepfakes, deepfakes, deepfakes, deepfakes. We were saying before, I want the quintessential damsel in distress scream sound effect here because there's sort of this idea that the deepfake apocalypse is finally here this year.

Alex: If a listener wants to send in a damsel in distress moral panic scream, we would love to have that donated. I tried to get Evelyn to do it, she will not do the scream.

Evelyn: Yeah, can't imagine why I declined that one. But yeah, I mean I feel like I've been hearing about the deepfake apocalypse and its impending appearance pretty much every year for the last six or seven years. Since the 2018 midterms it's been the deepfakes are coming, the deepfakes are going to completely distort our reality, for nearly a decade now, not quite, but over half a decade. And now we're in this whole new news cycle about it. Now, of course, in one way that's not surprising. We've obviously seen this significant rise in the capabilities of generative AI over the last year in particular.

And so we may genuinely be on the verge of a complete paradigm shift, and this may actually finally cause the problem that people have been writing and talking about for the last five years. But I also feel a little jaded or exhausted by the constant need to be worked up about this. There have been, of course, a number of stories lately that have really catapulted this and put this center stage. So of course in the lead-up to the New Hampshire primary, there were reports of a robocall impersonating President Joe Biden that was discouraging people from voting, or more specifically telling them to save their vote for November rather than voting in the primary.

Alex: Is that how votes work as a law professor? If I use it in March then I'm not allowed to use it, like I get one vote, like an actual...

Evelyn: One vote.

Alex: Yeah, like a piece of pottery that the ancient Greeks gave people. That's a... Yeah.

Evelyn: Exactly. Spend it wisely, folks. It's written into the constitution, one person, one vote. It was meant extremely literally. I mean, even that's not in the constitution, but anyway, the phrase one person, one vote, what did you think it meant? Anyway, yes, so obviously this was intended to discourage people from voting. It's unclear how many people received the calls. Estimates are like 5,000 to 20,000. It's also not clear if this influenced anyone not to vote who would've otherwise voted. The state AG has announced a criminal investigation and has located the company in Texas that is alleged to be behind this. So that has been moving forward, and I think it created significant alarm around the possibility of deepfake audio and its capacity to influence elections.

And then of course, I think the other big story, which is why we were talking about Taylor Swift in the B-roll, is that at the end of January these pornographic deepfakes of Taylor Swift went viral on X. I think one of the most prominent examples had something like 45 million views and was up for nearly a day before the post was removed. And that obviously created significant concern and generated a lot of attention as well. Now, out of these two abuses of AI, I know which one I'm more concerned about in general, but I'm curious for your thoughts, Alex, on whether this genuinely represents a new moment.

Alex: Certainly to me, the nude deepfakes are absolutely the abuse I'm more worried about. The robocall, that's disturbing, but we have had fake calls to discourage voting for years. Some people kind of famously are going to jail for trying that in 2020. And as you and I have discussed, if we're going to have deepfakes that are related to politics, you're going to have a massive all-of-society response to it. In this case, the FBI's investigating, they're going to find these guys, they're going to put them in jail probably, or they're going to turn out to be foreign and they might get indicted and not be able to travel except to the beaches of Crimea. But if they're domestic, they're going to get found, and they're going to get arrested. Whereas the Taylor Swift example, Taylor Swift has a massive PR group behind her. She has her lawyers who are famously litigious.

I don't think we're going to talk about that so much, but they did a little Streisand effect where they threatened the student who tracks her private jet. And she's so famous that Twitter banned searches for the entire term 'Taylor Swift' because she had deepfakes going around. It's a great example of, if she was Taylor Smith, if she was just Taylor, a 17-year-old girl in Cincinnati, Ohio at Grover Cleveland High School, she would have none of this backup. And that is the abuse that is really causing harm right now. We've gotten to the place where, for the deepfake nudes, Stable Diffusion 1.5 is being modified by these groups to create nudes. We've talked about this before. David Thiel, our colleague, wrote a great piece about the creation of CSAM, and one of the steps in creating CSAM with AI is retraining Stable Diffusion to know what naked people look like, using lots of pornography to retrain it.

That's something that people have been able to do if they could code, if they had their own GPU. And what's happened is a number of people have taken those open source projects and turned them into commercial websites where you can pay a couple of bucks via Bitcoin or PayPal and generate nudes based upon the images you upload. And so it has put it in the realm of every teenager who wants to create an embarrassing photo of somebody at their school, and that's what's going on on a daily basis right now, and that's causing a lot of harm. Because in the end, the people who did the Biden thing, maybe it had some effect, but it probably had very little impact on crossover votes for Nikki Haley in New Hampshire, which was the target there, and we're having this massive societal reaction. Whereas there have been only a couple of cases where there have been investigations of the local abuse of mostly young women with the use of these tools. And I expect that represents 1% of the abuse that's happening.

Evelyn: Yeah, the sign of a well-functioning trust and safety system on a platform is, of course, that you ban the entire search term Taylor Swift, which I'm sure was not being used for anything other than these malicious uses and wasn't driving significant traffic and attention on your platform. Let's just ban what is probably one of the most popular legitimate searches on the platform.

Alex: Right, which in the end probably had very little effect, because the people who were looking for the nudes would pretty quickly figure out that you just have to change the search a little bit to find what you're looking for. It's the people doing innocent searches who are not going to understand the context there.

Evelyn: Right, but to be fair, not to let Twitter or X take all the heat here, 404 Media tracked down where this particular deepfake came from and found it in a Telegram group. And it's just worth noting that it was a Telegram group, and most likely, I think they suggested, a Microsoft text-to-image AI generator was the original source of this image. I might be misremembering whether it was Microsoft or another company.

Alex: It was through Microsoft, so it was Microsoft's interface to OpenAI. So it's OpenAI's model, but running at Microsoft or via Microsoft's instance.

Evelyn: Great. So we're talking about a whole bunch of different tech companies along the route to this harm occurring, and no one really necessarily taking the appropriate steps. Certainly, I mean, we should note Telegram here. No one talks about Telegram's role in fostering these communities, and it doesn't really do a whole bunch about them. So just worth pointing that out.

Alex: Yeah, Telegram, and then I mean the other company is Discord. I mean, that's where Unstable Diffusion is, where thousands and thousands of people are working on generating nudes. And when you look at that community, and again, David wrote a good piece about it, there's a bunch of people who don't want it to be used for abusive purposes, but it is an inevitability, if you're building an AI model that's been trained on porn, that people are going to use it either for CSAM or to create content about real women.

Evelyn: Right. So again, I agree with you, I'm much more concerned about this harm, predominantly to women, that happens as a result of the abuse of these tools. In the political context, you have some harm, but there's going to be much more conversation about it, there's going to be much more capacity to correct it. We also have laws around voter suppression, around misleading advertising, misleading and deceptive conduct and misrepresentations in political campaigns, and I wonder, we were having this conversation earlier, how much of this is really a new thing, in the sense that the technology creates another way in which we see these old harms manifesting. I don't know how worried to get that this is such a big paradigm shift that it's a whole new set of harms.

Alex: Yeah. So to make an argument for it being a big deal on the political side, I think we haven't seen this yet, but my concern would be, we've always had misleading ads, but now let's say you have a super PAC that does a misleading ad about a candidate that has the candidate saying things they've never said. Even if it says this is a parody at the bottom, which maybe is enough to get past First Amendment scrutiny and to protect that speech, is it going to stick in people's heads that this candidate said or did this thing?

As humans, do we have the ability to see something that looks photorealistic, that sounds very much like the person, that passes all of our brain's tests, and then to override it because it says at the bottom, this is just a parody? And that's my concern: if that turns out to be effective, it just becomes the standard going forward that you're always using AI to generate this stuff. And again, it's not going to be the totally illegal things here, it's going to be the legal, First Amendment-protected ads that have hundreds of millions of super PAC dollars behind them.

Evelyn: And just to be really clear, I find it hard to imagine a world where any of this could be banned under the First Amendment, especially with requirements to have disclaimers. I mean, again, we have laws and you can ban certain kinds of deception in politics and voter suppression and things like that, but generally fake images and things like that are not going to be able to be banned and I think for very good reason, and as long as there's a disclosure saying this is parody, you could understand why courts don't want to be getting into the business of saying what is reasonable parody, what is not reasonable parody and what kind of speech should we protect and what kind of stuff is just too misleading or not sufficiently clear?

But I guess again, I see what you're saying and I guess I do worry that there is this kind of harm that is created by an association, a mental association with an image that is way more powerful than any other kind of media. At the same time, I think we have to hope that there'll be a society-wide response. Maybe people will update and learn not to be so trusting of their own eyes, which I guess might have other bad ramifications where they don't trust anything that they see. And then just this society-wide response where we try and say, well, this is beyond the pale and this is not the kind of political discourse that we want to engage in, which has always definitely worked out so well. We're policing...

Alex: The American people have definitely punished those who have lied to them in the last several years. I have total confidence in that.

Evelyn: As I say that, I see the problem: the guardrails of political debate have not been holding up so well. They've been getting a battering over the past few years. At the same time, again, we've been told for a long time now that this is coming, that this is going to be the end of all possible reasonable debate in politics, and I guess I will have to wait until I see it and believe my eyes, whether it's deepfaked or not. But in the meantime, the political response and the company response is really focusing on this.

As we head into this election year, we are seeing that the legislative responses are really focusing on deceptive uses of AI, and then we're also seeing a lot of company blog posts and announcements that are focusing on this harm, because it is something that people are getting very worried about again. So in the last few weeks we've seen a number of announcements about this. Meta, I think last week, announced it's going to start labeling AI-generated images on its platform and is going to be working with industry to create common technical standards for identifying AI-generated images. And I'm curious for your take, Alex, about how likely it is that that's going to be successful.

Alex: Yeah, so this is interesting in that Meta has proposed kind of a standard watermarking framework that a number of other companies have now signed up for. So it's exactly the kind of self-regulatory thing that I've wanted to see. They're about two years too late; it would've been nice for this to exist a couple of years ago. The proposal that they have includes invisible watermarks, but the way that they're done does not seem like it's going to be completely stable against adversarial removal. So it will make it harder to use AI to generate stuff that you could then repost over and over again.

But it does seem quite possible with the current standard to manipulate an image or a video in a way that is mostly invisible to human eyes but does remove the watermark. That being said, now that they have kind of a working group on this, there's been some interesting research in using cryptographic watermarking standards, where effectively you have a secret key tied to the watermark that makes it very, very difficult to either test for the watermark or to remove it without having that key.

And it will be interesting to see if they continue down that direction. As of right now, anybody has the ability to detect this watermark, which is kind of the hallmark of an open standard, but any situation where you make a watermark detectable also makes it generally removable, right? Because now you have an oracle against which you can build a loop, testing your removal techniques over and over again until your own detection, as a bad guy, no longer finds the mark. That being said, what we're seeing is that the AI world is split between open source and stuff that's hosted, and what you're going to see is that the hosted commercial products are going to take a bunch of steps to, maybe not outrun the bear, but outrun the other camper. They're going to make it hard enough that you're not going to want to use the commercially hosted products, and you're just going to fall back to open source. And I see that as one of the things that might be happening here.
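[Transcript note: for readers who want the oracle problem Alex describes made concrete, here is a minimal toy sketch in Python. It is not Meta's watermarking scheme or any real standard; the least-significant-bit embedding and the function names embed, detect, and strip_with_oracle are hypothetical simplifications. The point is only that a detector anyone can query doubles as a test harness for removal, while a keyed detector does not.]

```python
# Toy illustration only: a "watermark" in the least-significant bits of an
# image array, a detector anyone can call, and the removal loop that a
# public detector enables. Real schemes are far more robust than this.
import numpy as np

rng = np.random.default_rng(0)
POSITIONS = rng.choice(512 * 512, size=256, replace=False)  # watermark locations

def embed(img: np.ndarray) -> np.ndarray:
    out = img.copy().ravel()
    out[POSITIONS] |= 1                    # force LSB = 1 at marked pixels
    return out.reshape(img.shape)

def detect(img: np.ndarray) -> bool:
    # Publicly callable detector -- and therefore usable as an oracle.
    return (img.ravel()[POSITIONS] & 1).mean() > 0.9

def strip_with_oracle(img: np.ndarray) -> np.ndarray:
    # Attacker's loop: perturb slightly, re-test, repeat until the mark is gone.
    out = img.astype(np.int16)
    while detect(out.clip(0, 255).astype(np.uint8)):
        out += rng.integers(-1, 2, size=out.shape)  # near-invisible noise
    return out.clip(0, 255).astype(np.uint8)

img = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
marked = embed(img)
print(detect(marked))                      # True: watermark present
print(detect(strip_with_oracle(marked)))   # False: removed with tiny changes
# A keyed (cryptographic) scheme would derive POSITIONS from a secret key,
# so outsiders could neither run detect() nor build this removal loop.
```

In the keyed variant Alex mentions, the watermark locations or payload would be derived from a secret key, so an attacker could neither confirm the mark is present nor know when their perturbations had destroyed it.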

Evelyn: Right, yeah, absolutely. I think you've talked about this before as well: it is good to make it harder. You can't let the perfect be the enemy of the good; just because we can't stop everyone all the time doesn't mean that making it much harder and getting rid of the low-hanging fruit, so that not everyone can do this at any time, isn't very beneficial. You also wanted to talk about an announcement from Microsoft in this space, I think just a couple of days ago or yesterday.

Alex: Yeah, so related to this, of the big companies making it harder for bad guys to abuse their platforms, is that Microsoft and OpenAI published a blog post together about cyber threat actors utilizing OpenAI's products for their attacks. Now, it's not like you can tell OpenAI, go hack something, right? That's not something they'll do for you. But they have some good examples of real threat actors, and in this case they attribute them to China, Russia, North Korea, and Iran. Those are the big four countries when we talk about state-sponsored hacking against the West, and they're doing things like using LLMs effectively as super search engines for vulnerability research and for generating content for social engineering.

So this is something that we're actually seeing a lot on the cyber side: actors who generally don't have very good language skills in the language of their victim are now all of a sudden writing fantastic ransom notes that don't have all the broken English or the broken Italian or broken Spanish. They're being much more effective because they're able to use LLMs to generate their interactions with victims for ransomware, and then for doing reconnaissance and such. So it's a fascinating paper, and it's a great example of OpenAI clearly staffing up their security team to do the kind of work that Facebook, Google, Amazon, and Microsoft do; those companies have threat intelligence teams that watch the use of their cloud products for abusive purposes, publicly post those findings, and then probably refer them, at least to Western law enforcement, in situations where they believe it's appropriate.

And this whole paper is an elbow thrown by OpenAI at these threat actors, of like, don't dare use our products. So what I expect will happen, 'cause these are commercial products, the report is talking about activity throughout 2023, and OpenAI has all these logs going back over the last several years that they're able to mine for this kind of activity, is that they're probably going to see a sharp drop-off, and you're going to see these people move to open source, because the things you can do in open source now with the current models are pretty much equivalent to what you could only do with GPT-4 last year.

Evelyn: Right. So turning back to more core content moderation stuff. We talked about the difference with deepfakes, but one of the things that also seems different heading into this year's election is that it just seems like a very different ecosystem across the board in many, many ways. Obviously we're talking about different platforms than we were four years ago, but the other thing is how these platforms think about political content and their role in the ecosystem. Even four years ago there was still this sense that platforms did want to be the center of ethical debate, or at least were still this place that you went to for conversation about what's going on, and that they understood and wanted to be the new town square, the new public forum, the new center of all of these conversations.

And obviously we saw how that worked out for them, and we've seen turns away from that. The big, big sign, and we've been talking about this as Threads has been rolling out over the past months, is that Threads made the announcement, which was not at all surprising given what they've said previously, that they're turning away from recommending political content on Threads, Meta's X competitor, and making sure that political content from accounts that you don't follow won't appear and won't be amplified in your feed. A small detail is that they haven't defined what they mean by political. The good thing is that that's such a nice and easy thing to ascertain. When someone is talking about their own personal experience as LGBTQ, is that political or is that personal? I don't know.

Alex: The personal is political, isn't that a slogan?

Evelyn: Yeah, it's a shame no one has ever run into these problems before in defining political. But you can understand it. The fact of the matter is they don't need to provide a lot of transparency around this. They're not amplifying, and so we're not going to know a lot about what they're choosing not to amplify, what they're choosing to amplify, and how they're defining political, because all of it's going to be algorithmic and not transparent, and we're not necessarily going to see the results. It's different from when they remove a piece of content, where you actually then have a conversation about what's happened.

We're not necessarily going to see that, but this is one of those things of, you've got to be careful what you ask for, because one of the reasons why I still am not an active Threads user is that one of the main reasons I use social media is to follow political content and to try to stay up with political news. And this is a big loss for me personally, but it's not just about me personally. I also think that this is a big loss for public debate and public discourse, where it's not necessarily clear to me that having no politics on social media, even if it were possible, is a better world.

Alex: Yeah. For the question of how they're defining political, the best comparison we have is to roll back to the rollout of the political ad archives starting with the 2018 and 2020 elections. A number of other platforms only had transparency into ads that were being run by official campaigns and PACs and such, and Facebook instead took a much broader approach, saying, okay, we're going to include ads that have anything to do with politics, anything to do with the national conversation. And when you looked at that ad archive, there were a bunch of complaints from people whose ads showed up because, just like you said, they were kind of general human rights campaign ads, or there was a lot of environmental stuff. People were complaining, well, why is my ad in here if I'm just saying we want to save the earth?

And it's because, by definition, if you're going to capture political issues, you end up capturing a huge chunk of everything that people talk about on social media, and Facebook in that situation decided to over-include. They can't use a definition that broad in this case because it would cover a huge percentage of the conversation on Threads. I think it's a mistake, because they keep vacillating between, is Threads a Twitter replacement or is it text Instagram, and I don't think text Instagram makes sense. Instagram makes sense as a nonpolitical place because of what people want to see; they're trying to recreate that experience. If you go on Instagram, it's positive. People are like, here I am on vacation, here are my kids doing something, here's the food I'm eating, here I am in a bikini, more people in bikinis, lots of bikinis. Bikinis really sell. For a company that has such aggressive nudity standards, the amount of bathing suit shots and people's abs is pretty intense on Instagram.

There's no equivalent; what's the equivalent of the bikini shot on Threads? It doesn't make that much sense. What people want to talk about in text is what's important to their lives, and what's important to people's lives is considered politics. It is your personal identity. It is who you're voting for. It is the environment. It's what's going on in local schools. I'm really interested in the big fight in California over math standards, and that affects my kids. That's politics, right? And I think it's a mistake. I understand why they're doing it.

If you've been criticized that you've destroyed all of politics with the algorithm, and there are all these BS books, and the kind of New York Times consensus is that you are at fault for everything, you're like, okay, great, well, I'm just not going to get involved. But you're not going to really be able to displace Twitter unless people are able to talk about the things that are important to them. And I think they need to displace Twitter; that's the whole point of Threads. It's like Twitter, but it should not have Chinese bots with paid check marks and it should not have people giving you death threats. That seems like a good product to me. I would use it. I'd pay for it.

Evelyn: Right. Yeah, I mean the political ad archive is a great example, because I don't personally have such a great problem with them having a very broad definition of political in that context, where the action taken is transparency and inclusion, and people get to see what ads are being run as political. But if you take that same broad definition and the action being taken is demoting content or not amplifying it to people, that's a much more problematic outcome to my mind. In fact, the broader the better when you're talking about transparency, but it's completely the opposite when we're talking about the kinds of conversations and topics that we're removing or making sure that people don't see as much. And there's one question of what makes Threads more likely to be a successful platform, and then there's also this question of, normatively, what do we want for society?

And even if Threads is able to replace Twitter, take on Twitter, or become a massive platform while deamplifying political content; I mean, quite frankly, I don't care, I don't have any particular investment in Threads' success or lack of success. That's fine by me if they don't succeed. So it's not so much that I think that they need politics to succeed, although I think you're probably right that they do. It's that I don't know that it's a great world to live in where we have these massively successful platforms that aren't amplifying or providing spaces for people to see and discuss the important topics of the day.

Alex: Right. And if people don't want to see politics, what I'd rather Threads do, if they're going to have an algorithmic-first feed, is make it really easy to say, I don't want to see more of this. Which is quite hard right now. I'm looking at my For You feed, my algorithmic feed in Threads, and if I get something from somebody I don't follow, my options here are hide, mute, unfollow, right? What does that mean? Really what I need is a button that says, I don't really like this, or a mode where I can do thumbs up, thumbs down. What I'd love is a mode where it just takes my Threads feed and I say thumbs up, thumbs down, thumbs up, thumbs down, and I can train the algorithm to do that correctly. And then the people who want Instagram Threads, which again, I don't know what that means, Instagram Threads, 'cause what's the equivalent of showing off that you have 6% body fat on Threads?

I guess we'll find out, right? But at least people would be able to curate that experience for themselves. To make that curation decision on behalf of the hundreds of millions of people who are using it, I don't think makes sense. Although, again, we have to think about the international context here. When they're downgrading politics, in the US we see that as downgrading discussions of elections and such. When we're talking about Southeast Asia, you're talking about downgrading arguments that have traditionally ended in violence. So that's the other context here: Facebook has been burned multiple times by the idea of this being a platform for people to discuss whatever they want. That's very different in a developed country with a fortunate lack of history of political violence versus places where there's significant physical strife between ethnic groups and such.

Evelyn: To be fair, so that we don't get too much feedback, we should note that Threads has said that there will be an option to opt back in to political recommendations; it's unclear when that's going to roll out. But it's all about defaults, right? I personally probably will take advantage of this option if and when it rolls out, but I am not your average Threads user, and the defaults matter.

Alex: Definitely, the tyranny of the defaults is important. It's interesting because Mark Zuckerberg proposed years and years ago that he really wanted the feed to be something where you could basically drag sliders and say, how much nudity do I want or not? How much adult content do I want? How much political content do I want? How edgy can it be? And this is a great opportunity for them to do something like that. It's really weird to me if you go into settings and it's just politics, on, off. Is that the only on-off switch there's going to be in the algorithm? It's bizarre that that's the one thing, instead of building an overall model where you can really train the algorithm to show you what you want to see.

Evelyn: Right. Politics on, off, but also we won't tell you what politics means, so good luck. Just flick this switch and find out.

Alex: Yeah, post more ab shots. Yeah, or more, yeah, you're crushing it on vacation, yeah.

Evelyn: I'm glad you mentioned the international context though, 'cause it is always important to pay attention to the international politics, and one story that's just such a sign of the discourse shift around this, I think, is that Meta in the last week removed the Facebook and Instagram accounts of Iran's supreme leader, Ayatollah Khamenei, and this barely broke a news cycle. They didn't have to explain much; they told the press the accounts were removed under their dangerous organizations and individuals policy and for repeated breaches of that policy, but it wasn't clear what posts exactly led the Ayatollah, at this particular moment, to finally breach that policy after the many years that he's been on those platforms.

But it's also just amazing that we are now in this moment. Even a couple of years ago this would've been a whole news cycle; there would've been conversations about these platforms and the power that they wield, and there are many difficult conversations to be had about what platforms should be doing in this context. At the same time, it barely broke through. And so I do wonder how that's going to impact the decisions that these platforms make when they're under much less scrutiny and much less obligation to explain what they're doing.

Alex: Yeah. Well, what's really interesting to me here is, is this a reinterpretation by Facebook, or have they gotten a letter or something from DOJ on the material support for terrorism issue? Should the Ayatollah have an Instagram account is literally a meeting I was in when I was at Facebook, and there were the moral and ethical issues, but what it really came down to was, there's an American law: are you materially supporting the Ayatollah if you allow him to have a free account? And some of it was pretty much obvious, like, okay, we can't let them advertise, we can't let them give us money. The Islamic Revolutionary Guard is not allowed to run ads on Facebook.

You can't sell them anything, but is having a free account okay? And at the time, the interpretation was a free account's okay. So I think that would be a really interesting thing to hear from Facebook: is this that the Ayatollah posted something about the Hamas-Israel conflict that crossed the line, which is important but is a really specific content-based decision, or is this, we are reinterpreting this law? Because if they're reinterpreting the material support for terrorism law, then that is going to have huge impacts, because the set of organizations sanctioned by the Treasury right now goes well beyond just the leaders of Iran and Hamas; it includes Russian organizations, all these kinds of folks who have fallen afoul of US geopolitics, and that's a really big deal.

Evelyn: Yeah, totally. I mean, I would actually be shocked if there was such a letter from the DOJ, particularly in this moment when the platforms and the government are under such scrutiny about their relationships, with the jawboning case coming up to the Supreme Court, and there's been reporting about how the government has really stepped back its communications with platforms across a whole range of areas. And I would honestly be very concerned if there was such a letter as well, because I think that's an extremely aggressive interpretation of the material support statute when it comes to free accounts, one that I think would run into some First Amendment issues if that's what the government is telling companies they need to do.

I mean, companies already take an extremely aggressive interpretation of the material support statute a lot of the time, and we've seen that to dramatic, and in my view often pernicious, effect in some of these areas, where you see, for example, Arabic-language content getting much more heavily censored just because the companies are risk-averse and don't have particularly good incentives to make sure they're not taking an overbroad interpretation of the statute.

So that would be extremely troubling. That's when it comes to free accounts, where these people are just posting on the platform. Now, a much easier question is when some of these people who are sanctioned, or members of these various sanctioned groups, are paying you for a particular service, paying you for an account to get extra amplification and a whole bunch of added features. That one I think is a lot easier to answer under the material support laws, but I can't think of any platform that would be so irresponsible as to be providing these paid services to sanctioned foreign terrorist organizations. Can you, Alex?

Alex: Gosh, is there anybody who would just take a credit card from Hamas, right? And take Hamas' American Express number and then give them some kind of amplification of their message. I wonder, is there a company out there that believes in free speech enough that they would do that?

Evelyn: I mean, particularly in this moment, it's hard to imagine a company exposing itself to this kind of political, legal, and reputational risk and blowback. Of course, our listeners at least won't be surprised to find that there was a report out this week that X, the platform formerly known as Twitter, was doing exactly that. The Tech Transparency Project released a report showing that X had been providing its paid premium services to accounts that include Hezbollah leaders, Houthi groups, and state-run media outlets from Iran and Russia, giving them a blue check mark and promoting them into other people's feeds. Apparently X did remove the check marks once this report came out. So this doesn't appear to be a conscious stance in the name of free expression so much as complete and unsurprising negligence. But yes, there you go.

Alex: Yeah. And so there were the individual blue check marks, including for Hassan Nasrallah, who's the Secretary General of Hezbollah. So we're not talking about some random dude with a little bit of Hezbollah ties, you're talking about the actual leader of Hezbollah, and the account was marked as ID verified, so somebody uploaded his passport and said, yes, I'm really the head of Hezbollah. And then the organizational check marks, those are a thousand dollars per month, and PressTV from Iran and a sanctioned bank from Russia both got gold check marks, and that thousand dollars also gets you a thousand dollars in ad credit. So one of the interesting questions was whether PressTV, which is the state news outlet of the Islamic Republic, was running ads on Twitter using those credits; that's not actually in the report, but it's quite possible. But that's a thousand bucks a month, and that is straight up verifying that the organization exists. So some human being looked at that at Twitter and said, yeah, no problem. Sounds good.

Evelyn: Yeah, yeah. I mean, this legitimately could be a source of legal liability. I would certainly be very worried, for many reasons, if I was a lawyer at X right now, but this would be right up there with some of my biggest headaches, I've got to say, because these are severe, severe penalties. Now, there is a knowledge [inaudible 00:34:43] standard, so you not only have to knowingly provide the services, but also have to know that they're a listed or sanctioned group. So I don't know what kind of evidence there will be on that particular question, but yes, I would not be sleeping particularly well.

Alex: Breaking news that the government of the United States of America has a problem with the Islamic Republic of Iran. That's only been a conflict since 1979, right? So it's cool.

Evelyn: Yeah, I mean, who knows who knew what? Let's just leave it there, but it's not best practices if you're running a trust and safety team. Every other platform takes an overly aggressive interpretation of these laws, in my opinion, and shuts down way too much speech, 'cause they're so freaked out about finding that they might have liability, whereas here you've got X just verifying IDs and providing these services. So, good for them.

Alex: So, beyond the direct OFAC sanctions exposure, possibly punishment of Twitter by DOJ, the other thing this brings up is that there's been story after story of crazy personal issues, including potential drug use, involving Elon Musk, who is not the CEO of Twitter, he's the CTO. Not in charge, not the guy who goes to the Senate, clearly, but a man who apparently carries a TS/SCI clearance. For anybody who's done that, it is a spectacularly difficult process to go through, in which you have to talk about every kind of foreign entanglement you have. He does that because SpaceX is a huge federal contractor, including a division that launches classified payloads.

And this is just another thing: if I was his personal lawyer, I would be really, really worried about the next time his clearance gets adjudicated, when you've got things like a business that has been providing services directly to sanctioned Russian and Iranian entities, on top of all of the personal stories that are now coming out. I'm guessing his ketamine use has not been documented in the modern equivalent of the SF-86, which is filed through the e-QIP system. Yeah, this is going to be problematic for him, and in the long run that will be problematic for his ability to have a role at SpaceX, because almost all of SpaceX's revenue comes from the US government.

Evelyn: Yeah. Given that we are in our Twitter corner, we should give the listeners obviously what they want, which is the sound effect to denote this segment.

Alex: Yeah, so here, this is going out to Elon Musk's clearance attorneys.

Evelyn: Yeah, they feel that deep in their soul. And to be clear, this is again totally unsurprising and symptomatic of just a general lack of due process or due diligence at the platform. You were talking about another story, just another story today, again totally unsurprising, about Chinese influence operations happening on the platform, which was written up in the Washington Post earlier this morning.

Alex: Yeah, so the Washington Post had a good writeup of what a number of people had been talking about, but finally somebody wrote about it. They quoted our colleague, Renee DiResta, who has also done a bunch of research into these groups. Effectively, post-2016 there was a coalition built between tech companies, which I was part of building, where if you spot foreign interference, you let the other tech companies know and you all take it down together. So, oh look, this is the GRU, or this is the Ministry of State Security, or this is a Macedonian troll farm that's financially motivated, and you share those indicators with other companies and then they can make a determination themselves. Generally, companies will look and see the exact same thing, like, oh, all these IPs are in China, or these are all sketchy VPNs and the credit cards are coming from the UAE but they're pretending to be from Wyoming. Companies don't always agree as to exactly what they take down, but at least they would work together.

Well, that's completely broken down, because Twitter has fired all of the people who used to go to those meetings. And so now you have Meta coming out with these reports of, we took down this Russian campaign, this Chinese campaign, this Iranian campaign, and that stuff is still up and running on Twitter, often with blue check marks. So there's a good Washington Post story on effectively how these pretty much proven foreign influence campaigns are still effective on X, and how they're able to use verified accounts to continue to manipulate the public because Twitter doesn't have anybody looking. So yeah, it's not shocking, but they've just basically given up. And if you're on Twitter, part of that is understanding that a significant amount of the conversation you're seeing might be manipulated, that you might be being manipulated by state-sponsored actors who have the money to create all these accounts, and that there's nobody at Twitter minding the store anymore.

Evelyn: Yeah, so that's another big change going into this election. One of the concerns I had in past elections was these opaque relationships where platforms worked very closely together to police this content. Now we don't have that kind of industry-wide cooperation even where it could be beneficial and necessary in order to be effective. So we've seen the breakdown there. I think another big change has been around transparency. Even four years ago there wasn't enough transparency, but there was generally this view that platforms should be talking about transparency and trying to offer more and more of it, and I think we're seeing the turn back against that. X is obviously, again, the real leader and pioneer in that space. But one story I wanted to highlight from the last few weeks that I think is worth picking up is TikTok also turning its back on some transparency measures.

Now, this is not a complete reversal. They are also rolling out election centers and the like in the EU with certain kinds of limited transparency. But, as we talked about on this podcast, there were stories about the relative popularity of pro-Palestinian versus pro-Israeli hashtags on the platform that generated a bunch of headlines and a bunch of reports about how the debate was slanted on TikTok. Those used, well, we talked about it, very poor scientific method and data analysis: they just sort of picked a bunch of random hashtags, talked about how popular they were, and cast aspersions that this was due to some sort of manipulation, rather than doing any kind of comprehensive analysis or noting that, for example, these hashtags might be more popular because the user base is younger and those kinds of views are generally more shared by that demographic.

Regardless, TikTok obviously didn't like these stories and these headlines, and so as a result, they've shut down the tools that were available to see the popularity of various hashtags on the site. They shut down the search functionality in its Creative Center and the ability to see the number of views on various hashtags, to prevent people from writing stories like this in the future. And, good work everyone: as a result of some pretty shoddy stories that were ultimately corrected, I think there were follow-up stories saying, hey, we need to be more careful about this, we've now lost access to what limited information we did have, which I think is a big loss for everyone.

Alex: Yeah, I really disliked those TikTok stories, as we discussed, I agreed with you. But that's the reality of running a platform: you're going to get criticized in smart ways and dumb ways. And I think, unfortunately, TikTok's learning the wrong lessons. They're looking at what happened to Facebook and they're learning, one, we should not hire people internally who might leak documents. And so they've had leaks, but they're also clearly very aggressive about controlling the internal conversation and what people are allowed to talk about.

And second, we should not have transparency, because people use that transparency to criticize us. I think this is really unfortunate, and it looks like it's going to work. I mean, TikTok obviously gets criticized plenty, but doing this got very little coverage. And it makes it less likely that people are going to point out what I think is actually organic behavior. A lot of the content I saw on TikTok I did not agree with, but that's because those are young people who happen not to agree with my political positions, not because TikTok was pushing it, and there's no evidence TikTok was pushing those things.

But it's important. It's important for us to be able to say, this is what young people are saying online. And it's important for TikTok that independent research has the evidence to say, we see no evidence of this being pushed, we see no manipulation, this is just the legitimate speech of what young people believe, reflected in a bunch of different ways, including online. So yeah, I think it's a mistake in the long run, but probably in the short term it's a smart move by them, which is unfortunate.

Evelyn: Right, yeah. Are they learning the wrong lesson, or are they learning exactly the right lesson by taking the YouTube approach: the less you say and the less information you give, the less you get criticized. I agree with you that long term this is going to undermine trust and legitimacy, and certainly it's the wrong lesson to learn if you care about the importance of providing insight into these platforms. I couldn't agree with you more that whatever you think on these issues, it's good to know what people are saying. It's good to have transparency. It's good to actually understand the contours of political debate. If you believe that the best remedy for speech is counter-speech, then you should want to know what the speech is. And TikTok's a famously difficult platform to get that kind of insight into anyway, and now it's just become even harder. So a big loss, and like you said, I barely saw any coverage of this at all, so it's worth pointing out.

Alex: Yeah, it'll be interesting to see what the DSA does about this. Just like with all these transparency discussions, the DSA is where the game is at.

Evelyn: Right, yeah. Well, it's a good reminder. Happy DSA Day to all who celebrate because this weekend it comes into force for all the small platforms as well. It's been in force for the larger platforms for a while, but now all the little platforms also have to start complying. And as far as I can understand, no one really knows what this means or what's going to happen next, so it's going to be fun.

Alex: Yay, EU, yay.

Evelyn: Yeah.

Alex: Nobody knows what's going on. It's cool.

Evelyn: That's right. All right, and we'll end with a few quick updates from our legal corner. Thank you. So, obviously it's not the busiest time of year in terms of legal news, and yet there are still a bunch of stories here. We saw in the last week that Ohio is the latest state to have its parental consent law enjoined. We're just seeing state after state learning this lesson the hard way: it turns out the First Amendment still says what it says, and Ohio joins Arkansas and California in having its social media law enjoined. And the latest addition to the NetChoice restatement of the law is that Ohio's requirement of parental consent for children under 16 to get an account has been blocked by the judge, who said in that case that foreclosing minors under 16 from accessing all content on websites, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media's harm to children.

Now, do I expect that lawmakers will actually hear this and learn the lesson that they need to start thinking in a much more nuanced way about how to deal with the harms to children that are occurring through social media? I won't hold my breath, but yes, that's the latest addition to that chapter. And in the meantime, the breaking news this week has been the Kids Online Safety Act, which has had a bunch of amendments and now a whole bunch of new co-sponsors. So we are now up to 62 senators, including Chuck Schumer and Ted Cruz, and it looks like it may be on its way to passing the Senate, although there's still no companion bill in the House. And so I struggle to understand exactly how imminent this bill is, especially in an election year.

The latest amendments do address some of the biggest concerns that people had about this bill. They removed, for example, state AG enforcement. The primary concern was that state AGs had the power to enforce this law, and given what we know about state AGs and politics, and what their interpretation of material harmful to children might cover, there were obvious risks there. So that has been removed, which is good, but it takes it from being a breathtakingly scary bill to just a regular kind of bad bill: it still has this duty of care for platforms to take reasonable care to protect minors from certain kinds of content, which will have the obvious effect of making platforms overly aggressive in removing that kind of content, which to my mind is just an obvious First Amendment issue. I don't really understand how the lawmakers think it'll get around the First Amendment issues here. But anyway, that's the latest update, and we'll see what happens with it.

Alex: Right. I mean, there are the direct First Amendment issues, and there's also the whole legislative record of these people saying, yay, I am looking forward to using this law to violate the First Amendment. My T-shirt that says I want to suppress protected speech is raising a lot of questions about whether I'm going to suppress protected speech. The biggest question for me is, for all this duty of care stuff, how are the platforms supposed to know that you're a kid? We're always going to come back to this problem of age verification. And as an American, I really do not want to end up in a situation where I have to show ID to create online accounts.

And so I would love to see the companies do some self-regulatory work, building some frameworks here to communicate between the operating systems and the platforms, as I've talked about multiple times, because Congress does not seem likely to stop here. Like you said, I don't know if it's going to pass. The REPORT Act, as we've discussed, had a hundred votes in the Senate, passed by unanimous consent, and has not been taken up in the House. So it doesn't seem clear that KOSA is going to move anywhere, although it seems like it could if something happens that simultaneously ticks off both the right and the left.

Evelyn: Yeah. Well, the lawmakers have done at least enough First Amendment reading to know that they can't, or shouldn't, require age verification in the bill. And so all of the talking points around this are about how the bill doesn't strictly require age verification. But of course, if your bill is centered around a responsibility to exercise care with respect to minors, what is the platform going to do? It's going to think that part of meeting that responsibility is verifying who the minors on its platform are.

So at least it doesn't say it explicitly, but I don't know how much of a saving grace that will be when the inevitable result is that companies will need to do some sort of age verification. But anyway, we will see. It has been incredible to watch the momentum around this won't-someone-think-of-the-children issue, which, as we've talked about many times, involves legitimate issues and legitimate harms, and it's great that we're talking about them, but I wish we weren't talking about them like this. And that's our big news update episode for the week. Any sports news that we should get out?

Alex: There was a small football game that Taylor Swift won, as was preordained by the Pentagon. So if you had taken my advice and bet your life savings on the deep state, you would've more than doubled your money. So listen here first...

Evelyn: That's right, this is exactly why...

Alex: to deep state moderated content.

Evelyn: You listen to this podcast for betting tips like that, about how to make money off the deep state. Yes, I saw that. Great game. I also, I guess, given that we are doing sports, I don't know anything about sports, but I am very excited for Caitlin Clark breaking the record for most points scored in NCAA women's basketball. She seemed very cool, and it was an amazing shot. And so there you go, my contribution to the sports update segment, finally.

Alex: It's very impressive.

Evelyn: Thank you, I think I got all of those words right, and mostly in the correct order, so we're good.

Alex: Those were all real words and those were all correct. Yes, Caitlin Clark is amazing. I wonder, is she going to play, I don't think Stanford's playing Iowa, so we're not going to get a chance, but I bet she's selling out opposing stadiums. She's become the Michael Jordan of women's basketball.

Evelyn: Yeah, I mean, I would absolutely go, but no time for follow-up questions about this segment at all, 'cause that would go in bad directions for me. And so with that, this is me.

Alex: Let's go to a Stanford game, and then we could do a report. Maybe they'll let us bring our podcasting gear and we could do a live play-by-play. And you'd be like, that tall woman passed the ball to another tall woman.

Evelyn: They're all so tall.

Alex: And she seems [inaudible 00:51:11] they're all so tall.

Evelyn: Exactly. I'm sure there will be content moderation issues, always content moderation issues no matter where you go. So I'm sure we can find some trust and safety stuff to talk about for sure. Sounds like a plan. And with that, this has been your Moderated Content weekly update. The show is available in all the usual places, including Apple Podcasts and Spotify. Show notes and transcripts are available at law.stanford.edu/moderatedcontent. I always forget to ask, but I should ask this time, please give us a rating and review wherever you listen. It would be great to reach some new listeners. I know we have millions and millions of you, but we want to make sure that we get those poor, ignorant souls out there that don't somehow tune in every week. This episode was produced by the wonderful Brian Pelletier and special thanks also to Justin Foo and Rob Huffman. Talk to you next week.