Alex and Evelyn talk about OpenAI's first threat intel report, California's flurry of AI regulations, the latest on the TikTok ban bill, and a Down Under special segment.
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:
TikTok Tick-Tock
Down Under
Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.
Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.
Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Evelyn Douek: So Alex, almost there, the finish line is in sight. What are your summer plans?
Alex Stamos: It is, yeah. Stanford ends next week with a big poster session for our class, and my kids are already done today, so. Yeah, unfortunately my summer plans are mostly supporting the things my kids want to do. That's what happens when you have teenagers, is you're a driver/chaperone on airplanes. Taking one kid to a camp here, another kid to a camp there, and then taking our eldest to visit like 15 colleges in two weeks.
Evelyn Douek: Oh, I see. Oh, it's that summer for you?
Alex Stamos: It is that summer.
Evelyn Douek: Oh, exciting.
Alex Stamos: The summer of the same jokes from the tour guides, making jokes about parents and like, "Oh, no drugs happen here in this dorm, haha!" Yeah, it's really hilarious to go through as a parent.
Evelyn Douek: You'll have to bring them back to the pod and recount them for our listeners over the next few weeks.
Alex Stamos: Yeah, absolutely, absolutely. If you want to hear about what are the jokes at American University when you tour the dorms. Yes, I will know that.
Evelyn Douek: Yeah, branching out from college sports into college dorms. That's our market here.
Alex Stamos: Right. And we'll get back into college sports in the fall, I promise.
Evelyn Douek: Right, exactly. It's our summer programming.
Welcome to Moderated Content, the stochastically released, slightly random, and not at all comprehensive news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. It's been a couple of weeks since we checked in. I think the end of the quarter had us both under the pump, but we are excited to-
Alex Stamos: How many students did you have this quarter?
Evelyn Douek: Ouch.
Alex Stamos: No, I'm just wondering, how many students did you teach?
Evelyn Douek: 21, but you know...
Alex Stamos: 21?
Evelyn Douek: It's not so many.
Alex Stamos: Right, but that's 21 hungry little mouths. They're not okay with you just sending an email or posting a thing, right?
Evelyn Douek: Right.
Alex Stamos: That's a lot of office hours.
Evelyn Douek: Yeah, and it turns out the First Amendment platform regulation space is on fire, so it continues to be interesting. There's a lot going on and not even enough time to talk about it, but we're back and ready to have a hot pod summer, as they say.
Alex Stamos: Do they say that Evelyn?
Evelyn Douek: No one says that, no one. No one says that, but it's-
Alex Stamos: Let's not make that happen. Let's let this go. This could be the last use of hot pod summer.
Evelyn Douek: That's fair. I'm not attached to it. That's totally fine. So, kicking off this totally normal summer, the first item we want to talk about today is that OpenAI has come out with its first threat intel report. It was an interesting step, I hadn't been expecting this, I don't think, but it totally makes sense. So this is a thing that's standard across the industry with these tech platforms. In the content moderation space we're very used to seeing it, with a lot of these major tech companies releasing these quarterly or biannual reports about the various influence operations that they've disrupted across their services, and so it makes sense that OpenAI would come into this space and do this transparency report as well. It's a totally voluntary measure, so it's welcome to see that they are taking that step.
The headline announcement is that for the past three months, they've disrupted five covert influence operations that sought to use their models. And essentially the thing that they really wanted to underline at several places in this report is that they're not seeing these operations be particularly effective. So this is a great result for OpenAI to come out with in their first report, get ahead of the game and say, "We are looking for malicious uses of our services to do influence operations, information operations, but so far we're not seeing any massive leverage for bad actors using it." But I'm curious, Alex, what your big takeaways were?
Alex Stamos: Yeah, so a couple of interesting points here. So a meta issue here is, haha, this was released at the same time as the Meta report. So OpenAI and Meta are clearly aligning here. That also used to be a standard. The two companies that used to work together on this all the time were Twitter and Facebook, and then Meta, where they would clearly work together over the quarter and then align the report. So that if they talked about a campaign, there would not be that kind of awkward situation where journalists are looking up, "Oh, you found these Russian accounts on Facebook. Let's look up the exact names and pieces of content on Twitter, oh, and there they are." Right? So the companies would work together to do the take-downs together, they would share data, and then they'd publish reports together. And that would allow groups like ours at SIO, but then also journalists, to write up, "Here's the big picture."
And here it was aligned. It is interesting to note that we have not seen a Twitter report in over a year. This stuff is all still on Twitter. And so that is a fascinating thing that I think we need to... The stuff that OpenAI and Facebook are talking about here, you can find all over Twitter. They do give you enough descriptions to look for the same kind of stuff. And so the fact that Twitter is not disrupting these things should be noted. Because we have pretty strong attribution here, especially from the Meta side, I think a little more aggressive attribution than OpenAI, of state actors from Iran and China and Russia, for example. So, it's interesting to see those being aligned, and they talked about some of the same campaigns-
Evelyn Douek: And when-
Alex Stamos: Sure, go ahead.
Evelyn Douek: And when you say aligned, I mean really aligned, I think they might even be using the same font in these two reports. They look very, very similar.
Alex Stamos: It's true.
Evelyn Douek: And I noticed something I hadn't realized: Ben Nimmo, who was at Meta and before that at Graphika, is now leading this team at OpenAI doing this work.
Alex Stamos: Yes.
Evelyn Douek: And so it's not a coincidence that all of this is looking very industry standard.
Alex Stamos: Yeah, it's clear OpenAI said, "Hey, we're going to cut the corner here." Facebook had to go through years of hell, which you can read about in these books, in which I get yelled at in conference rooms. It's really gripping stuff. I'm really sad that HBO has apparently killed the miniseries that was going to be all Alex getting his ass handed to him in corporate meetings.
Evelyn Douek: But you're over it, it's fine. You're fine. You're over it, you're fine. You're good.
Alex Stamos: Well, I don't think we've ever talked about this in the podcast, but there was supposed to be an HBO miniseries.
Evelyn Douek: Oh you weren't kidding? That was a real joke?
Alex Stamos: I'm not kidding.
Evelyn Douek: Oh wow.
Alex Stamos: It was a real thing. It was from the New York Times book, Cecilia Kang and Sheera Frenkel's book. They optioned it to HBO and they actually cast Claire Foy as-
Evelyn Douek: As you? That's amazing casting.
Alex Stamos: Right, talk about gender and racial blind casting. Claire Foy plays Alex Stamos. No, Claire Foy was going to play Sheryl Sandberg. I had not been cast yet, according to IMDb Pro, but I was definitely expecting to be like Zach Galifianakis, right? So he's got like Cheetos in his beard while incredibly beautiful Claire Foy tells him that he's wrong about everything. That would've been fascinating. That would've been a wonderful thing for my grandchildren and great-grandchildren.
Evelyn Douek: I mean I would watch it for sure.
Alex Stamos: That would be awesome.
Evelyn Douek: Let's be honest.
Alex Stamos: So unfortunately the miniseries has been killed, so you can't see what would've been the completely accurate retelling of exactly how Facebook got to the point where these quarterly reports were happening. Clearly, OpenAI looked at all the pain Facebook went through and decided, "Hey, we're not going to do that. Let's just cut the corner." And so they went and they picked up Ben, and Ben is building a team, and like you said, it is becoming somewhat of a standard. So long story short, one, it's interesting that they're aligning; two, they're talking about the same campaigns. So, you get to see, from OpenAI's perspective, their tools being used by these actors, and then, from Facebook's perspective, the downstream catching of the content. So that is fascinating, I think, for those of us who have been watching and waiting for GenAI to be used.
Both OpenAI and Facebook basically agree that GenAI is not a big deal here. Now, that makes sense coming from OpenAI, but Facebook basically clarified, "Hey, GenAI has not changed these things significantly." They specifically say, "So far we've not seen novel GenAI-driven tactics that would impede our ability to disrupt the adversarial networks behind them." They do see photo and image creation, AI-generated video and newsreaders, and text generation, but for the most part not photorealistic. And as for the use of GenAI-generated profiles and such, they basically hint that, "Hey, we've figured out how to detect those." And it has been pretty clear that Facebook knows how to do some of that detection, and that's probably a pretty good signal that they're using.
As for the actual operations, there are a couple that I think are really interesting here. One is they both give an update on Doppelganger. Doppelganger is the most aggressive of the Russian campaigns at this moment. It looks like this is a group of private companies that are working directly for the Russian presidential administration. So no longer via Yevgeny Prigozhin, God rest his soul. The poor man died in a tragic, totally random private jet accident after challenging Vladimir Putin. Incredibly sad. We should probably take a moment for everybody, every oligarch, who has fallen out of a window or had their private jet go down in Russia. Okay, that was enough.
Anyway, Yevgeny Prigozhin, totally cool dude. So obviously he's not running the IRA anymore. So, the theory is that effectively they have consolidated a bunch of the propaganda campaigns to not be so indirect, and to be under direct Kremlin control. And so Doppelganger has been very aggressive, very large. They've been doing lots of fake accounts, lots of fake news outlets. So they create fake news outlets in different places. And one of the interesting things Meta said is that they're seeing that Doppelganger, which has been operating since 2022, is much less active on Meta than before, but they're still active overall. And this is basically a wink, wink, nod, nod to, "Hey, why don't you go look at Twitter?"
So the focus for Russia is really trying to drive division in NATO and trying to eliminate support for Ukraine in the United States. And if you go... This is not a scientific position, and we have criticized people who have made... It's very hard to come up with a scientific position on what's happening on Twitter now, because they've cut off API access and they're suing anybody who tries to subvert the lack of API access. But if you go right now and you talk about Ukraine, or you talk about Taiwan, you will be completely inundated and flooded with everything from people gently pushing back on you all the way to people sending you death threats. And I expect that on the Ukraine side, a bunch of that is Doppelganger, and the Taiwan stuff is Chinese groups doing the same kind of thing. So anyway, they're saying Doppelganger is still going, and that was interesting.
The other interesting thing that I'm surprised didn't get more play, because it's so politically salient, is that both OpenAI and Facebook caught an Israeli firm. In OpenAI's case, they actually named the company, a company called STOIC, which they then, for fun, named Operation Zero Zeno. They're like, "Oh, look, we've taken philosophy classes. Look at us." But it is an Israeli company. It's not clear who's paying them, but they're obviously acting in what they believe to be the best interest of Israel in its conflict with Hamas, and they're spreading a bunch of propaganda about Hamas, about Palestinians and such, and were doing so using OpenAI to generate some of the content. And there have been influence operations coming out of Israel before, but this is the first one that we've really seen tied directly to the post-October 7th conflict, and that is being called out as very explicitly using OpenAI.
Evelyn Douek: Yeah. It's interesting that you say it didn't get a lot of press coverage, and it really is a sign of the times. I remember even a couple of years ago, every single time these companies released these reports, there was the stock-standard Washington Post or New York Times article about what they'd found and what they'd taken down. There was just so much coverage of influence operations at the time, and I think that's just not as salient, or it doesn't get covered as much anymore. There was a little bit of press coverage of OpenAI's report, but you're right, it didn't pick up on this particular issue as much.
But I'm sure that'll change in the lead-up to the election over the course of the coming year. And yeah, clever of OpenAI to get out ahead. Obviously not totally surprising that their first report says, "Hey, no need to freak out. Everything's fine. We haven't seen anything very scary." And there's not a lot of transparency, there aren't a lot of ways to verify what they're missing, or how hard they're looking, and all of those sorts of things, and that is a perpetual problem with this industry and with these reports. But yes, a welcome step from them.
Alex Stamos: Well, and I think it's smart, because OpenAI is in a very different position than Facebook. Facebook, as the successful and largest social media company, has to do this because they have the audience. So if you're an influence campaign, you can go to Twitter, you can go to TikTok, but you're eventually going to come back to Facebook, you're going to come back to a Meta product. For OpenAI, they don't have a monopoly on this, right? They are a tool that you use to create content, which you then use on Meta's properties. And so I think it's actually very smart from OpenAI, because they don't have to actually defeat these guys completely. If they throw an elbow like this and they're like, "Hey, if you use our products, it is quite possible that we will publicly name and shame you," then it makes it much more likely that those... You can just go down the street to Anthropic, or any other commercial tool, or more likely, as we've discussed multiple times, to open source.
And there's really nothing I'm seeing here that these folks did on OpenAI that they could not do if they invested a little bit in applying open source. Which then puts Meta in a funny position, because the most likely open source models to be used to manipulate Facebook are actually shipped by Facebook. That is kind of a funny place to be. But with OpenAI here, we might see future reports be a lot thinner, because there's no reason to keep on coming back. If you're Doppelganger and you're like, "Oh, crap, they caught us. And this is a big downside, because if they catch us, they're going to share this content with the FBI, they're going to share it with the IC, they're going to share it with the platforms, and get it banned off all the platforms," then you're going to move to Stable Diffusion, to Llama, to everything that you can run locally.
Evelyn Douek: Yeah. The other thing I noticed reading this report was that I remembered, a long time ago, you said one of the biggest capabilities generative AI would bring to influence operations was different languages. It's not necessarily that the content is going to get more convincing, and there's a cap at which extra volume is really impactful, but the idea that these operations would be able to use these tools to generate content in different languages, to target different regions, was something that you picked up on. And that's exactly what we see here: these operations are using these tools to generate content in lots of different languages.
Alex Stamos: Yeah, it's very helpful for Bangladesh, for example.
Evelyn Douek: Right.
Alex Stamos: Right. If you're not Russia, it's very hard to staff a ton of people who are great English speakers, but generative AI makes it a lot easier.
Evelyn Douek: So with all of this new technology, something that you've been keeping an eye on, that has been a little bit below my radar recently, is the flurry of regulations for artificial intelligence, in particular in California over the last couple of months. So, what are you seeing there?
Alex Stamos: Yeah, so by one count there have been over 50 bills discussed in the California legislature. There are five that seem actually pretty important and that have moved forward or currently look like they're going to move forward. A couple of these I think are not that controversial. As we've discussed, I think the best way to regulate AI is to regulate its use, and they have some great ones. Already, I think we have laws that cover most of these things, right? Housing discrimination, racial discrimination in hiring, and such. A couple of these laws are just making it more clear that if you use an automated system, you are responsible for it. You cannot use that as a defense, like, "Oh, the computer was racist in hiring people, not me." So, I think that's smart, and specifically around employment, totally fine.
There's an interesting one about digital replicas in the entertainment industry, which is key for California. With California having such a huge entertainment industry in LA, there's the question of the balance of power between the unions and the producers and studios, and what you're going to allow. We've had all of these strikes already in California, specifically around the use of AI. I don't have the background here, but I do think it is very interesting that these bills look pretty pro-worker, trying to create more rights, even without union action, for people who work in the entertainment industry to not have their likenesses and such used by AI.
The one that is the most controversial, and I think is the scariest, is one that's actually passed out of the state senate. It's called SB-1047, and it's entitled "Requiring Safety Standards for Large AI Models." This is along the lines of what we've already seen from both Europe and its AI regulations, and the Biden administration's EO, except it goes a lot further, and it's only on a state level, which I think creates a lot of issues. So, basically what it does is it creates a new agency under the California Department of Technology to help come up with standards, and it specifically talks about them working with NIST. I think this is already pretty problematic, the idea that California can create tech standards that are as influential as the U.S. National Institute of Standards and Technology's. NIST is already not doing a super great job here, and they've got incredible resources. The idea that every state can do NIST-like work is just foolish. It's going to be almost impossible to staff that team appropriately.
And then it basically creates a responsibility if you create a model that's of a certain size, and it's taken the standard from the Biden EO, which was... The Biden EO, as a reminder, was very nat-sec-focused. It was basically like, "Hey, if you're making these big models, do not ship them to China." It doesn't say that, but that's effectively what they're hinting at. And so they're using 10 to the 26 as the standard, the number of operations that have to be done in training, which is a reasonably futuristic standard that I think some of the biggest models have already hit. I'll have to double-check that. People can write in and correct me if they want on my exponential notation checks.
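[Editor's note: as a rough way to sanity-check that 10 to the 26 figure, a widely used approximation is that training a dense model takes about 6 FLOPs per parameter per training token (the 6·N·D rule of thumb). The sketch below applies that heuristic; the parameter and token counts in it are hypothetical illustrations, not figures for any actual model.]

```python
# Rough sanity check of the 10^26-operation threshold using the common
# 6 * N * D training-compute rule of thumb (~6 FLOPs per parameter per
# training token). The numbers below are hypothetical, for illustration only.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense model."""
    return 6.0 * n_params * n_tokens

THRESHOLD = 1e26  # the compute threshold discussed above

# Hypothetical frontier-scale run: 1e12 parameters on 3e13 training tokens.
flops = training_flops(1e12, 3e13)
print(f"estimated compute: {flops:.1e} ops; over threshold? {flops >= THRESHOLD}")
# -> estimated compute: 1.8e+26 ops; over threshold? True
```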
But it is a standard, and it is certainly a number that, if it is not commonplace now, we'll be hitting pretty soon for the foundational models. And it says, "If you have a covered model, then you have to effectively do proactive work to prevent hazardous capabilities." And it's got a couple of specific hazardous capabilities here: "The creation or use of a chemical, biological, radiological, or nuclear weapon." Okay, so if you're building a model-
Evelyn Douek: In a manner, not just any, but in a manner that results in mass casualties, so it's-
Alex Stamos: Right.
Evelyn Douek: The really bad ones.
Alex Stamos: Yeah.
Evelyn Douek: If your nuclear weapon only results in a few casualties, you're fine, but the mass casualty events are the ones you want to watch out for, yeah.
Alex Stamos: Yeah. So you use AI to generate a biological weapon that only kills 10 people, like if it's super specific.
Evelyn Douek: Right, go for it.
Alex Stamos: But if it's a general biological weapon, you're in trouble. But then, it's got big ones like, "If you use this model to cause at least $500 million in damages through cyber attacks on infrastructure," or "$500 million in damages if a model autonomously engages in conduct that would be criminal if done by a human." Which is everything. Effectively, any crime that is done online, you could theoretically use a model for. And then, four, of course, because this wasn't broad enough, "Other comparably severe threats to public safety." So instead of the specific ones where they're saying, "Hey, if you use a model to discriminate, that's a problem," this is: if you create a model that has any possible use in any of these incredibly broad categories, it's now regulated.
Evelyn Douek: Yeah, and then there's one step further, even, in paragraph two of this definition, which is it's not just that. It also includes any capability, even if that capability would not manifest except for a third party further fine-tuning or making post-training modifications to the tool. So, even if your tool can't do it yet, but some third party could fine-tune it, or train it further, to make it have these hazardous capabilities, then you're still liable, which is another step further of liability as well.
Alex Stamos: Right. And so if your model has any possibility, and effectively any foundational model is going to fit this category, because you can come up with a theory, a theoretical situation, in which it can be used. Clearly what they're trying to do is catch, "Hey, if you're building Skynet, you've got to be careful." But this is way beyond Skynet. Anything that generates language can be used in crimes, and probably, realistically, will be. And so it creates all of these requirements in which you have to have "administrative, technical and physical security protections to prevent unauthorized access to, misuse of, or unsafe modification of the covered model." Which basically makes open source impossible. You have to have it under controls. And really, an open source model that can't be modified is not something that we know how to create right now. It's not computationally feasible. That is beyond the current computer science literature.
Requirement after requirement, a bunch of things that are effectively audits and paperwork, which is along the lines of what the Europeans are doing, but they're asking for a bunch of things that are impossible. And one thing that is possible, but that clearly shows what people are thinking about here: you have to have a kill switch. I understand where these people are coming from, but it's clear this is the effective altruists, the total AI doomers. This is not about the realistic risk that we just talked about, that OpenAI and Meta are talking about. This is not about cars running over people, or racial gerrymandering, or negative uses; it's not about job displacement. This is about the idea that if you make a smart AI, it's going to kill all humans. And one, I don't think it actually solves that problem, other than the kill switch thing, which is just kind of like...
Evelyn Douek: It sounds cool.
Alex Stamos: Okay. It sounds cool.
Evelyn Douek: Does it have to be big and red?
Alex Stamos: Right. Yeah, they don't say what kind of sign has to be over it. Yeah, yeah, exactly. But it's not focused on realistic... the risk people are actually facing from AI. And it's so overbearing that I think it would be incredibly damaging to California's economy, in that California is the leading location in the world. The United States is way ahead on AI, and California has three of the biggest players. OpenAI, Google and Meta are three of the four or five biggest players. And we have Anthropic here too. So really the only one we're missing is Microsoft, and half of Microsoft's work is being done by OpenAI. Apple's going to announce a bunch of stuff in the next couple of weeks; it looks like it's all coming from OpenAI. So, we are the center of the world on this, and this bill could totally invert that.
Because if I were Sam Altman and I looked at this, I'd be like, "Okay, I'm just moving. It's time to go and pay fewer taxes somewhere else. I understand the weather will be suckier." Because the requirements here, some of them are just not possible to live up to. And instead of dealing with the regulatory overhead, I think that companies will just leave. How will Facebook handle it? It'd be almost impossible for Facebook to leave California, it's a much larger company and has a ton of people here, but it is quite possible that they would move their... A lot of their AI stuff already happens in Paris, funny enough, and so they might use the less regulated European area to do a lot of their AI work.
Evelyn Douek: Famously below-regulated Europe, yeah.
Alex Stamos: Yeah, and then do things like, Llama cannot be downloaded by Californians, for example, which would be, I think, unfortunate. I understand where these people are coming from, but I think this bill is misguided, and I think it's going to cause real damage to California's economy, and probably not actually fix anything. It's also clearly the kind of thing you should do at the federal level. If you're going to have these rules, they should be at the federal level and informed by multinational... When you're talking about existential risks, then what people are doing in China really matters. And so unless we have some kind of global standards here on the use of AI for weaponry, on hooking up AI to nuclear power plants, those kinds of things, then it doesn't matter really if California leads the way.
Evelyn Douek: So like I said, I haven't been following this closely. Do you know if there's a decent chance... Does this realistically have any chance of passing?
Alex Stamos: Well, it's passed out of the Senate already; it has to go through the Assembly. The saving grace here is that yesterday Gavin Newsom actually said, "We've got to be really careful with AI regulations." I think he's hearing the exact same message quietly from people up and down the tech stack. So, he's basically hinting that he will veto any bill that's overly onerous. And so I expect that what we'll see is four or five of these bills will come out, and he will sign the "you can't steal people's faces, you can't do bad hiring" kind of stuff, and might veto this bill.
Evelyn Douek: Yeah, it'll be interesting to see.
Alex Stamos: If not, then there could be a very quick movement out and then you could see the governor trying to undo it very quickly. Although there's no retroactive veto. As a Californian, as a California taxpayer, as a native Californian, I'd rather we not lose that entire industry.
Evelyn Douek: Okay, well, something to watch. Speaking of over-broad and poorly thought-out bills, last time we did the podcast, we talked at length about the TikTok ban bill, or [inaudible 00:24:00] as, again, no one likes to call it, but I will continue to do so. And so there's been a few...
Alex Stamos: Is that like a cuss word in Australia?
Evelyn Douek: Yeah [inaudible 00:24:09]. Yeah, it's very rude.
Alex Stamos: Hey!
Evelyn Douek: My people are very offended.
Alex Stamos: Oy, [inaudible 00:24:15].
Evelyn Douek: It's cultural appropriation, Alex. Come on, take a step back.
Alex Stamos: Sorry Sheila.
Evelyn Douek: Yeah right, thanks. So there have been a couple of developments since we last talked about it, which is that, in completely unsurprising news, both the company and a group of creators have challenged the bill as unconstitutional. Since then, the parties have agreed on, and the court has approved, an extremely expedited timeline for the case. So there are going to be arguments in the DC Circuit in this case in September, which is crazy fast. And so there go a whole bunch of people's summer holiday plans, I think. So rest in peace, that trip to Greece, for some of the lawyers and associates out there. But it'll be an exciting few months, and the common assumption-
Alex Stamos: I feel like, in the Zoom era, they could figure out a way to bill $2000 an hour from Mykonos. I think that's totally possible.
Evelyn Douek: I thought you were going to say in the Zoom era, they could figure out a way to enjoy Greece from their desk. But you're right, your way is better. And the common assumption is this is on the fast track to the Supreme Court, so by maybe the end of this year or early next year, this is likely to end up at SCOTUS. There's nothing all that-
Alex Stamos: So what does the court have to do to make it reviewable? Is it going to have to be... I mean, because you're not going to have the full trial or whatever, it's going to be some preliminary injunction or something like that?
Evelyn Douek: Yeah, well, this is one of the most interesting aspects of the case. You're absolutely right, this is going straight to the DC Circuit. There's not going to be a trial court decision, so they're going to have to decide on the constitutionality of this bill. They're going to have to engage in some sort of evidence or fact-finding process, but it's a little unclear what that's going to look like, given you have all sorts of challenges; it's not a trial court. And you also have the challenge that some of the information here... you would imagine some of the evidence is going to be classified, or that the government might want to introduce classified information if it has something to substantiate its very broad claims that this app is a national security threat. So one of the big unknowns is how they're going to deal with that.
But there are a lot of really fact-intensive questions here that need to be sorted out before it gets to the Supreme Court, including, obviously, the national security claims: what is the risk? But also, why Project Texas was abandoned is a massive factual dispute in this case. So this is a scheme that TikTok and CFIUS, and the US government, were engaged in to have a whole bunch of the data hosted in Texas and supervised in various ways, and it included a provision, TikTok said in its petition, of "a shutdown option if it was found that they were breaching the terms of this arrangement," which is pretty extreme. And TikTok says, "We think that this would've satisfied all of the concerns, and we don't know why the government stopped pursuing this." And so the question of whether Project Texas would have addressed the national security concerns is going to be a huge factual issue. I imagine the government's going to say, "Look, we tried. It wouldn't address all of our concerns adequately."
And the court's going to, I think, potentially, come to some sort of view on that, unless they decide that it doesn't matter, that it's facially unconstitutional regardless. But there's that question. And then there's also the question of whether the app can be sold at all. So TikTok in its petition argues that "there's no commercial, technological, or legal way" to have ByteDance divest. And that's, again, a factual issue. And so it says this is effectively a ban, because there is no divestment option, and that, again, changes the constitutional analysis. So there are all of these factual issues that the DC Circuit will have to come to a view on, potentially, unless they decide that on its face, regardless of the facts, this is an unconstitutional bill. And it's unclear how they're going to go about that.
Alex Stamos: And that's crazy. It's not like the DC Circuit pulls in experts and has witnesses sit for testimony. How does this even work?
Evelyn Douek: So I've been speaking to some of the lawyers on this, and they're not sure. They may appoint a special master. It's not exactly clear how they're going to handle this. They may just accept evidence under seal for the classified information. They haven't set down the exact dates or the exact procedures for how this is going to proceed. So that's one of the big unknowns here.
Alex Stamos: It's fascinating.
Evelyn Douek: And it's all, again, going to be happening on a lightning-fast timeline. These are really difficult issues, really intense factual disputes, and it's all going to have to be decided pretty quickly. Meanwhile, TikTok argues the law is content-based, viewpoint-based, speaker-based, and an unlawful prior restraint, based on mere speculative harms, both over- and under-inclusive, which is known in sophisticated legal terms as throwing everything but the First Amendment kitchen sink at the case. It's just saying, "Bite me."
Alex Stamos: Is this, in fact, people's religion? Does it establish a religion?
Evelyn Douek: Right, exactly.
Alex Stamos: Is that there?
Evelyn Douek: Yeah, probably, that'll be in their amended petition when they file it next month. So yes, that's TikTok's claim. My other favorite thing, in the creator complaint, is the plaintiffs section, where you have all of your model litigants. So they've gone and found eight upstanding TikTok creators, including a first-generation rancher from Texas whose success on TikTok indirectly paid for the adoption process of his son. You've got a BookTok influencer, a college football coach, a sexual assault survivor, a single mother whose-
Alex Stamos: Oh my god.
Evelyn Douek: Success on TikTok allowed her to fulfill her lifelong dream of operating a cookie company, which leads to the compelling legal argument: why does Congress hate cookies? I mean, come on. It's just cookies for Americans.
Alex Stamos: And ranchers.
Evelyn Douek: Yeah, and ranchers. And then one of my favorites is Christopher Townsend, who served in the US Air Force for six years as a language analyst and is now a well-known hip-hop artist who founded an organization dedicated to promoting biblical literacy. So that's, yeah-
Alex Stamos: Wow.
Evelyn Douek: You can tell who that plaintiff is targeting on the court.
Alex Stamos: Hey, ChatGPT, can you generate one plaintiff?
Evelyn Douek: Exactly Alex, this was my thought reading it. So I noticed there's only eight plaintiffs, and this is just a rookie error, because clearly you need nine, one for each justice.
Alex Stamos: Right, you have one who's like a liberal centrist woman who was the president of a major public university. You have somebody from Puerto Rico who is very proud to be the first person. Yes, you've got the religious, the deeply Catholic person who's using it. How do they not have a nun? I feel like, really, if you want to lock up the Alito vote, you've got to have a nun or a priest.
Evelyn Douek: Just rookie lawyering here. I can't believe it. They're just throwing away this case.
Alex Stamos: It's a fascinating question, how they data mined this. The lawyers working with TikTok were clearly data mining their content, looking through... They've probably been doing it for months. The question is, did the lawyers just click through TikTok, or did they go into the back end and actually use the algorithm to try to, like, "Okay, great. We need somebody who reads conservative, who loves Bibles, uses the word Jesus a lot in their TikToks." And they're like, "Okay, great, we've narrowed it down to 10,000 people." And then you have an associate review those 10,000 videos, and then you have to do a background check of effectively all of them. You have to watch all of their content to make sure it's not like, "Oh no, they're a milkshake duck. It turns out they have this one Nazi video from two years ago."
Evelyn Douek: Or have said anything ever about the Supreme Court, for example, which is something that Americans do from time to time, so yes. I think I read somewhere that a lot of these people are pretty... They have lots of followers. They're very prominent on TikTok, and some of them have talked about how the ban affects them in their TikTok content. So I think that that's largely how they came about. But yes, I really hope that the cookie company doesn't go under as a result of this litigation, but we'll have to wait and see. The briefs will be filed over the summer, and it's set down some time in September for oral argument.
Alex Stamos: And I remember going through this at Facebook when we were banned by the People's Republic of China: the years of litigation and due process for American platforms operating there and trying to bring freedom of speech to the Chinese people. So it is great to see the same system operate here.
Evelyn Douek: You're right. It will be a true sign of America's freedom and American greatness when this bill is struck down as unconstitutional, because it infringes on First Amendment freedoms, and that's exactly why this country is so great. So it's an excellent point.
Meanwhile, the political pressure on TikTok continues. So listeners of this show will be well aware of the trade association NetChoice, which is the body that represents a bunch of these tech companies in litigation around the country, including the two big First Amendment claims at the Supreme Court this term about the Texas and Florida legislation. And this month the group kicked out TikTok, which had been a member since 2019. Politico had some excellent reporting on this: the reason they did so was that Republican House Majority Leader Steve Scalise basically put tons of pressure on them to do so, and told NetChoice that the House Select Committee on China would be investigating groups associated with TikTok if they didn't kick them out.
Now, that is blatant jawboning. It is the use of government power to infringe the associational rights of TikTok and NetChoice, and it's a pretty appalling thing to see, I think, pretty McCarthy-esque, Red Scare stuff. And it's a good reminder that while NetChoice is often the protagonist in the stories we're telling here, fighting back against unconstitutional legislation that's harming users' rights and standing up for the First Amendment, at the end of the day, they're a lobbying organization that wants to please politicians, and their interests are not users' interests directly. And that's a kind of wild fact to keep in mind when we think that this is the organization responsible for litigating many of the cases at the front lines of the First Amendment today. So, just to highlight that.
Alex Stamos: For all the discussion of the weaponization of government and of jawboning, the biggest jawboner in the world is the United States Congress. It is completely bipartisan, and they continuously use their powers to punish people for their speech, including myself and my students. They threaten you if you say things that they don't like; they will use their powers to cost you millions of dollars in legal fees and try to punish you by going through all of your email and leaking the stuff that's out of context to Substackers. It's just ridiculous to me, in all this discussion of jawboning, that we don't do something about Congress. There really needs to be punishment for members of Congress who abuse their power in this way.
Evelyn Douek: Right, I completely agree, it's completely outrageous. After yesterday, TikTok and other victims of jawboning may have a better legal claim. So we are still waiting on all of the big platform cases to come out of the court this term, but yesterday the court did hand down another jawboning case, which is NRA v. Vullo, which upheld a First Amendment... or suggested that the NRA had plausibly pled that Vullo had infringed the NRA's constitutional rights by going after insurance companies and telling them to cut off their relationships and business associations with the NRA, on the basis that these New York politicians didn't like the NRA's political position and political speech. A lot of the case hinges on the fact that Commissioner Vullo did have direct regulatory authority over these insurance companies. And so I don't know how that maps on exactly to people in Congress, who don't have that kind of one-on-one direct regulatory authority and enforcement capacity.
But the other really interesting thing to note is that Murthy v. Missouri, the platform case, and Vullo were argued on the same day, and Vullo came out yesterday and Murthy didn't. So we're all sort of reading the tea leaves, trying to work out what this suggests about Murthy. Now, Vullo suggests quite a broad reading of jawboning claims and suggests that there is some capacity for these kinds of First Amendment claims to be successful. On the other hand, I think it suggests that there's a big fight going on in the court over Murthy v. Missouri, and there are still maybe some long and angry dissents being written, and it's hard to work out where it's going to come out. So, very interesting, and we are still on tenterhooks for the platform cases this term.
Alex Stamos: Yeah. So, Vullo being unanimous, it seemed to be just a straight application of the Bantam Books standard. It kind of amazed me that this case made it to the Supreme Court, or that they took it, because the decision is very short. They're like, "Here's the standard. You can't threaten to do this and then use your actual power." And in this case, there was a significant... Unlike the other case, Murthy, where the problem for the Murthy v. Missouri side is that their fact pattern sucks. As Sotomayor pointed out, they're totally wrong on a bunch of things; they made a bunch of factual misstatements in their filings. In this case, it's super clear. Nobody... The New York side does not claim, "Oh, we never made this threat." It's pretty explicit. And then the other part of the fact pattern is that Lloyd's of London had all of this email of, "We got specifically threatened. They said they would use this power. So we are making this decision." And it's clear-cut in a way that Murthy is not.
Evelyn Douek: Yeah. So this was on a preliminary injunction, so it's now going to go back down and there's going to have to be a whole bunch of fact-finding in terms of how significant the threats were. But a lot of these were public statements, and the court sort of had to accept, given the posture of the case, the NRA's version of the facts. And so, obviously, I think the New York politicians are going to contest that at trial. But you're right, this is, I think, one of the easier cases.
The question was, has the law changed? Bantam Books is 60 years old now and has this sweeping, very functionalist, realistic understanding of how government power works, one that doesn't just look for specific legal threats but looks behind the form to the substance of what the government actor was doing. And that isn't necessarily how the court has always thought about these things in later years, and so it was great to see the court reaffirm that it's going to engage in that kind of multifactorial analysis, to really look at what's going on, what the government actor is doing, and how the reasonable person would have interpreted what the government actor was doing, in terms of whether it was a threat.
Alex Stamos: Okay. Well, we are all waiting.
Evelyn Douek: Yeah, that's it.
Alex Stamos: When's the term end? How much longer are we-
Evelyn Douek: June. So it should all come out in the next month. Every week in the last few weeks I've been promising my students it'll come out so that we can talk about it next week, and it never came out. So it's obviously going to come out now that my course is over and I can't talk about it with my students anymore. So they were just waiting to troll me.
Alex Stamos: They'll have to take your special seminar, things that happened over the summer that you can teach in the fall.
Evelyn Douek: If I can't have Hot Pod Summer, I'll have Hot Seminar Summer, I don't know.
Alex Stamos: Hot Seminar Fall.
Evelyn Douek: We'll workshop it. We'll workshop it.
Alex Stamos: Please, listeners, write in to Evelyn and give her ideas for what she can call her hot seminar on First Amendment law that happened in June.
Evelyn Douek: Right.
Alex Stamos: Just in June.
Evelyn Douek: Like any good doctoral subject, it's going to be what happened to the First Amendment in June 2024, which legitimately could be many doctoral dissertations' worth, I am sure. Okay, so one quick Down Under special for the week. My ties to Australia weaken, along with my accent, with every passing day. But I do feel like-
Alex Stamos: Evelyn really just needs to strengthen it to a place where I can't even understand you. You need to turn it up as much as possible.
Evelyn Douek: Can I still do that? I don't even know. So when Australia's in the news and I feel like it's squarely in our wheelhouse, I feel like we've got an obligation to cover it. So mate, what's been going on Down Under? Well, there's been an ongoing legal battle. I'm going to switch back just in case literally no one can understand me from this point on.
Alex Stamos: No, that was very understandable.
Evelyn Douek: Okay, great.
Alex Stamos: What's the Australian version of Cockney when you guys are making fun of yourselves?
Evelyn Douek: Ocker.
Alex Stamos: Is there... Ocker?
Evelyn Douek: Yeah, there's a real ocker accent. Yeah, it's onomatopoeia, I think; when someone says ocker, you understand what they mean.
Alex Stamos: Right. It's like Texas for us. When people make fun of Americans, they're like, "Howdy, partner." They put their fingers in their belt loops and their-
Evelyn Douek: "Good day mate, pop another shrimp on the barbie," is what people will say to me when they hear I'm from Australia.
Alex Stamos: [inaudible 00:40:37].
Evelyn Douek: Never gets old. Yeah, just never gets old.
Alex Stamos: That's not a knife. This is a knife.
Evelyn Douek: I've lost him. Let's mute him. Let's mute the mic, because he's gone. He's just going to be making these jokes for the rest of the episode.
All right, so what has been going on in Australia? There's been a legal battle in Australia over whether the eSafety Commissioner could force X to take down a video globally. So this has been happening over the last couple of months. There was a horrific stabbing of a bishop while the bishop was doing a livestreamed service in Sydney in April, and X was ordered by the eSafety Commissioner to remove 65 tweets containing the video of the stabbing attack, and not only to hide them from Australian viewers, but to take them down globally. And X ignored this order. They did remove the tweets for Australian viewers but didn't remove them for global audiences. And then ultimately they went to court, and the Federal Court rejected eSafety's bid to uphold the global injunction.
So, two things to say about this. First, 99 out of a hundred times I will choose Australia over Elon Musk. But sadly, I do think that he had the better argument here, which is that when countries start to enforce their takedown orders globally, we start a bad race to the bottom in terms of freedom of expression. And you can see where this leads, where countries are just trying to extraterritorially enforce their own standards on freedom of expression. That is not a place that I want to go, especially when... Australia is not the worst when it comes to free speech, but there are other countries whose standards I would not want to see globally enforced.
Alex Stamos: The last country that went this far was Austria.
Evelyn Douek: So Austria did try to do that for a defamation action. And Canada has also done the same. And this all happened... There was a bunch of litigation, a flurry of litigation about this, a number of years ago, and then it all kind of went silent. I think that countries just sort of stopped trying to extraterritorially enforce as much, and then I guess Australia thought, "Why don't we have another shot at it?" So there you go.
Alex Stamos: Yeah, it's interesting because... We're seeing it now because it's free countries, it's democracies, that are doing it, and so it's all in the court system. Autocracies have done this for decades. So again, I agree with you. I have to agree with Musk here. The thing that pisses me off about Musk is that in all these situations, he thinks this is the first time it's ever happened, that he's the guy who invented taking a stand.
Evelyn Douek: Right.
Alex Stamos: No, all of the companies, including Twitter, have fought against global injunctions for 20 years, because it is a complete disaster. Google is probably the first one, maybe even Yahoo back in the indexing days; those were the first ones to get global injunctions from countries requiring them to take down links to content. And there are a bunch of, I think, German cases around Google search and Nazi memorabilia and that kind of stuff. eBay had a bunch of these. So this is nothing new. He has the same exact position. He's turning himself into this martyr.
But just to remind people, Elon Musk is not a free speech advocate. He is trying to use this example to say he's protecting free speech, and he is in this case, but he's doing nothing outside the realm of what every other member of NetChoice has honestly done traditionally for decades, while he still does all of these other things to suppress the speech of individuals, including using the Computer Fraud and Abuse Act to try to suppress and punish the speech of people who are critical of him, and taking down arbitrary stuff, banning people who use the word cis on Twitter, just stupid little silly things on top of really important things like abusing the CFAA to punish critics.
This thing's gotten all this play because Musk is turning himself into a martyr, but if you go over to Meta's legal team, they're like, "Yeah, this is a Tuesday. Oh look, we've got a global injunction. Now we have to fight it. We will fight it as far as possible within the legal system of this democracy before we just say no." And then it gets to the punishment phase, and there's almost never any punishment. The great example of this was Brazil, which had some equivalent stuff, and they actually ended up jailing the head of... I personally had a trip, I was due to go to Brazil when I was at Facebook, and it got canceled because we were in conflict with them, and I'm very glad it did, because they ended up jailing the quote-unquote "head of the Brazil office," who was effectively a sales guy, an ad salesperson, because of an injunction that they had. So yeah, this happens all the time.
Evelyn Douek: Yeah, but it was good to see the Federal Court affirm this principle of comity and respect for sovereignty. I also love how realistic the judge was about Australia's global influence. One of the comments that the judge made was that with this order, [inaudible 00:45:15], most likely "the notice would be ignored or disparaged in other countries," which is, I think, accurate. So it's like, "Aha, how cute. Australia's trying to enforce its orders globally." So it would've been futile and an affront to free speech, and so it was a good outcome. And basically that's it for our little wander Down Under, and that's pretty much it for the week. Any sports updates that we should be aware of, Alex, or is that on holiday until the fall?
Alex Stamos: So this graduating class of women who were playing in the NCAA, and who made it a big interesting year, have now entered the WNBA, and it's getting spicy, in that there's a lot of trash talking, but there's also a lot of physical violence on the floor, and men are eating it up. So if you wanted to know what would make men watch women's sports, it is them treating each other terribly, verbally and physically.
Evelyn Douek: Not skillful, gracious play.
Alex Stamos: There's great basketball happening too, I'm not saying there isn't. The quality of the play is slowly increasing. It is not a massive step change; it's not like the WNBA has been completely revolutionized from a play perspective. And if you look at these players, like Caitlin Clark, she's doing well. She's doing very well for a rookie, but she's not mathematically out of bounds of any of the other stars in the league. But her persona coming in, and the fact that you have these veterans five, six, seven years out of school who are like, "Who are you?", has made it a little bit interesting. So anyway, you're seeing a lot more WNBA highlights on SportsCenter. They're generally kind of lowlights, unfortunately.
Evelyn Douek: Well, that's parity.
Alex Stamos: [inaudible 00:46:54] says that women cannot be as terrible as men. Let that be a lesson to you.
Evelyn Douek: Excellent. Well, with that, this has been your Moderated Content weekly update. This show is available in all the usual places, and show notes and transcripts are available at law.stanford.edu/moderatedcontent. Please give us a rating, and preferably a good one. We like it when you do that, so thank you very much. And this episode wouldn't be possible without the research and editorial assistance of John Perrino, policy analyst extraordinaire at the Stanford Internet Observatory, and it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. Talk to you sometime soon.