Moderated Content

MC Weekly Update 4/17: TikTok Boom!

Episode Summary

Alex and Evelyn are joined by John Perrino to discuss Montana's TikTok ban bill, passed this week, and other things to watch on the TikTok front. Then they discuss why people's ideas to content moderate away the Discord leaks problem won't work; Substack's ill-thought-out thoughts on content moderation; a blog post from Twitter that seems to have slipped through a wormhole from the past; an important case the Supreme Court is hearing this week that could have big ramifications for protections against online stalking; and much more.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

TikTok Corner

Discord had a Week with the Leak

Substack’s (lack of) Content Moderation Plans 

Twitter Corner

Bot or Not

Arkansas’ Unusual Definition of Social Media

Legal Corner

Sports Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn Douek:

Alex, we have the best fans. They take our show extremely seriously. Obviously, in light of my embarrassing failure to be able to say the German law that shall not be named last week, we had someone write in.

Alex Stamos:

Yeah, here it is.

Timo:

Hi Evelyn. Hi Alex. This is Timo from Berlin. I even have an [inaudible 00:00:19] in my last name. So I am absolutely qualified to help you with the pronunciation of the NetzDG. The German pronunciation for it is [foreign language 00:00:33]. I hope you can have fun with that. Bye.

Alex Stamos:

So we want to thank you, Timo, for sending that in, and this will become a standard part of the soundboard whenever we talk about European law in the future.

Evelyn Douek:

It's perfect. It saves everyone from having to hear me try and butcher the German language, and saves me from making too many German enemies. Excellent.

Welcome to Moderated Content's weekly news update from the world of trust and safety with myself, Evelyn Douek, and Alex Stamos. And we are heading today straight to our TikTok Corner. To help us talk through this, we've got someone whose name you've heard in the credits, and now it's time to meet him as a disembodied voice: John Perrino, who is Stanford Internet Observatory's own policy analyst. Thank you very much for joining us, John.

John Perrino:

Thanks for having me on. It's nice to go from show notes to the podcast.

Evelyn Douek:

Yeah, you made it. Welcome.

Alex Stamos:

It's an upgrade and if you keep on doing a good job, we'll make you edit this thing.

Evelyn Douek:

The reward for good work is always more work.

John Perrino:

Whatever I've got to do to keep the job.

Evelyn Douek:

So tell us, I mean the first thing I want to know is, is keeping track of all the various bills and laws to ban TikTok basically your full-time job at this point, apart from occasional podcaster?

John Perrino:

It could be a full-time job. I mean, we don't know where the TikTok ban is going to go next to different state capitals, but my goodness, keeping track of state policy, 50 different states, hundreds and hundreds of policy makers. Yeah, it could be a full-time job.

Evelyn Douek:

Great. Okay. So tell us about the latest one then, the one that was making all of the waves and headlines in the last few days. Let's go to Montana.

John Perrino:

Sure. Yeah. So in Helena on Friday afternoon, the Montana State House passed a TikTok ban. Essentially what the bill does is it bans app stores from making TikTok available for download in the state. And it would go into effect at the beginning of next year, but it's probably going to run up against a whole bunch of different legal challenges. It doesn't ban users. It bans app stores and would fine them $10,000 per violation for making the app available. And what's really interesting is that a previous version would've banned internet service providers from allowing anyone to access it on their phone or their laptop. That was actually stripped out. But the bill still does not allow TikTok to operate in the state. What that means, I don't really know. So you basically couldn't download it from the Google Play or Apple app stores. It hasn't been signed into law yet, but more than likely will be. Montana has a Republican governor, Greg Gianforte, who has previously banned TikTok on state government devices.

Alex Stamos:

Is he running for president? Like half the Republican governors in the country?

John Perrino:

We shall see, who knows, that's a...

Evelyn Douek:

With this popular policy, he'll have the youth vote shored up.

John Perrino:

But basically this is a Republican-backed bill that passed on state party-line votes in both the House and the Senate. And it was a quick-moving bill as well. It was drafted in January, formally introduced about two months ago on February 20th, and now it's already passed. That's really quick. That's the difficult thing about following all these state bills.

Evelyn Douek:

Great. And yeah, it's due to come into force January 1st, 2024, I believe. Highly unlikely, I think, that it ever will. One of the questions we were talking about before we started taping was that the app stores have argued that this is technically impossible to comply with, the idea that they can stop a single app from being downloaded in a single state. Alex, curious for your thoughts on that.

Alex Stamos:

Yeah, so it's certainly not something that can be prevented right now, but I think this is probably a losing argument for the platforms, for Google and Apple specifically, because certainly with time you'd be able to figure out a way to do it. I think their challenge is a couple-fold. One, the app stores already treat locations differently and offer you different apps based upon location. And that, I think, is mostly done by geo-IP, by what your IP address is. Most famously, Apple blocks a ton of stuff in the People's Republic of China that is legal to download from the App Store everywhere else. But there are other examples of that, and it's even something that app writers themselves can control: when they upload their app, they can uncheck boxes for, I don't want my app to be available in these locations, for a variety of legal reasons and such.

That geo-IP-based system is pretty accurate at country borders. So if you're on one carrier in Europe, say you're on Orange or Deutsche Telekom, and you go from Germany to France, you will probably change IP addresses, just based upon how these networks work with subcontractors and all that kind of stuff. That is not true in the United States. And when you're talking about mobile devices, you're often talking about mobile networks. Telling whether an AT&T address is located in Montana versus Idaho versus wherever is probably not going to be very accurate. Now, your phone does have fine GPS location, but for the most part that is not used by the app stores. So if they wanted to do this, they'd probably have to go and violate people's privacy, grab their GPS location, figure out exactly where they are, and then geo-block only within Montana. So I think it's possible, but it's not possible today. It would require a bunch of code to be written, and it would require them to reduce the privacy of their users.
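To make the tradeoff concrete, here is a minimal sketch of the gating logic described above. The lookup functions are illustrative stand-ins, not real App Store or Play Store APIs, and the blocked-app table is hypothetical:

```python
# Sketch of state-level app gating. region_from_ip and region_from_gps are
# assumed stand-ins for a geo-IP database and a reverse geocoder.
from typing import Optional, Tuple

# Hypothetical block table: Montana -> TikTok's Android package name.
BLOCKED_BY_REGION = {("US", "MT"): {"com.zhiliaoapp.musically"}}

def region_from_ip(ip: str) -> Tuple[str, Optional[str]]:
    # Geo-IP is usually reliable at country borders, but on US mobile
    # carrier networks the state is often unknowable -- the problem above.
    return ("US", None)

def region_from_gps(lat: float, lon: float) -> Tuple[str, Optional[str]]:
    # The accurate fallback -- and the privacy-eroding one.
    return ("US", "MT")

def can_offer(app_id: str, ip: str,
              gps: Optional[Tuple[float, float]] = None) -> bool:
    country, state = region_from_ip(ip)
    if state is None and gps is not None:
        country, state = region_from_gps(*gps)
    return app_id not in BLOCKED_BY_REGION.get((country, state), set())

print(can_offer("com.zhiliaoapp.musically", "203.0.113.7"))                # True: state unknown
print(can_offer("com.zhiliaoapp.musically", "203.0.113.7", (46.6, -112)))  # False: GPS says Montana
```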

Evelyn Douek:

Yeah, I got to say though, I think that this is probably going to be the trend. We are seeing so many state based bills and laws coming along, not just these kinds at the app store layer, but we've seen all of these content moderation bills, the Texas and Florida laws and things like that. And if some of those are upheld as constitutional, we are going to have a situation where we're going to have conflicting state laws about what these companies can do. And that hasn't been a problem yet, but it certainly could be going forward.

Alex Stamos:

I mean, it's sad to see that with all these laws, the privacy laws, the content laws, we're throwing away one of the great competitive advantages of the United States when it comes to internet services, which is that we used to be a unified market. You could build one product that operated in 50 states, and you had several hundred million people that you could service, in a way that is not true really anywhere else except, say, India. In Europe, one of the reasons it is so hard to start a tech company is that while they do have EU-wide laws, there's a ton of local responsibilities for any employer there, even for companies that operate from a content perspective. To end up in this world where, if you want to start a company in the United States, you have to have a lawyer as hire number eight or ten because you have so many state-specific content moderation and other kinds of laws is just going to be a huge drag on the US economy.

Evelyn Douek:

So John, many of these bills are "won't you think of the children" bills. My question is, won't you think of the First Amendment? Has the First Amendment been discussed at all in the conversation in Montana? Please tell me that the answer is somewhat yes.

John Perrino:

Well, I don't think you're going to like the answer, because it is yes. But essentially what the Montana attorney general has said is: yes, and we're going to challenge it and try to create a new interpretation of the First Amendment. That seems kind of bonkers to me, but I'm not the lawyer, so I'm curious what you think about that, Evelyn.

Evelyn Douek:

Yeah, I mean, we've talked a lot about why I think TikTok bans are unconstitutional, flagrantly so. And this bill is not helping itself. This bill is just insane. So if you go and read it, and we'll include a link to it, in the findings...

Alex Stamos:

It's got to be the first Montana bill that talks about NyQuil.

Evelyn Douek:

Yeah, it's so good. So in the findings, the complaint against TikTok is that TikTok fails to remove, and may even promote, dangerous content that directs minors to engage in dangerous activities, including but not limited to. And then it lists a bunch of nutty things, including throwing objects at moving automobiles, lighting a mirror on fire and then attempting to extinguish it using only one's body parts, smearing human feces on toddlers, licking doorknobs and toilet seats to place oneself at risk of contracting coronavirus, attempting to climb stacks of milk crates, and stealing utilities from public places. And I hate to break it to the Montana lawmakers, but unfortunately content depicting people attempting to climb stacks of milk crates is definitely protected speech. I'm not going to say it's a significant contribution to democratic discourse, but it is something that we're allowed to view without interference by the government.

Alex Stamos:

It seems like there's two big differences between this and the Restrict Act. One is the Restrict Act is specifically about national security. It is about privacy. It has been written by people who understood that there was going to be a First Amendment challenge, and so they explicitly kept out a lot of the content-based stuff. And here, the word content is right there in the first sentence.

Evelyn Douek:

Dangerous content, that well-known unprotected category of speech, dangerous content.

Alex Stamos:

Yeah, it must just be bad stuff, right?

Evelyn Douek:

Yeah.

Alex Stamos:

But the second is, because it's a state law, is there anything constitutionally that makes it less likely that this will pass muster than, say, the Restrict Act or something passed at the federal level?

Evelyn Douek:

I mean, there are going to be the dormant commerce clause issues we were sort of talking about before: the fact that if it's not technically feasible to just ban it in a single state, and it causes too much of an interference with other states' jurisdictions and speech in other states, then that could raise a dormant commerce clause challenge to this as well. But otherwise I think the First Amendment issues are basically the same, and basically as grim for this bill as the others. But your mention of the Restrict Act reminds me, John, I wanted to ask you a bit about what's happening on the Hill at the moment. We haven't talked about it for a while, but the Restrict Act was barreling along the last time we did. So what's going on now?

John Perrino:

Well, the last thing to really happen on the Restrict Act actually happened in the Wall Street Journal, with an op-ed that the lead sponsors of the bill put out, basically arguing that, yes, our legislation is okay under the First Amendment. But the fact that they had to publish something in the Wall Street Journal arguing, hey, our bill is legally okay, kind of goes to say something. Because what we saw is this huge upswing in free speech and libertarian opposition to any move against TikTok, either an outright ban or some kind of a national security review. So in Washington right now it's kind of wishy-washy. There was a lot of momentum going into the TikTok hearing, and now all of a sudden it's kind of fizzled out. So I think the big question is: is the Montana state legislation going to spur renewed calls in Washington? And I think it's anyone's guess at this point.

Evelyn Douek:

Okay. So anything else we should be watching or that you'll be watching for us on this front?

John Perrino:

Yeah, I mean I'm going to be watching to see what happens at the state level across the country. Like you said, I mean that's a full-time job. So there are lots of other groups that are looking out there and trying to find out what's happening. And these bills move really quickly as was previously said. And I think that...

Alex Stamos:

The laboratories of democracy, although it doesn't mean that there's not a mad scientist cooking something up in that laboratory.

John Perrino:

And I think the other thing to look at is the federal level... Actually, there's two things I think to look at on the federal level. One is the dichotomy of bills like the Restrict Act, which don't outright ban TikTok. They take a wider approach, looking at any kind of technology with ties to foreign adversaries: China, Russia, Iran, North Korea, et cetera. And then there are the explicitly anti-TikTok [inaudible 00:12:04] with names like No TikTok on US Devices and ANTI-SOCIAL CCP. And I don't really see those moving, even though the Republicans have control of the lower chamber.

Where I see this going is the big question of: is the US going to try and force TikTok, or ByteDance, their parent company with Chinese ties, to sell? And is that even possible? And I think one small piece in the Montana bill that's really important is that if TikTok were sold off to a US company, or really, I think the way the bill words it is any non-foreign adversary, then everything in the bill would be negated, right? So is that possible? I don't know, but I think that's really the big question and the focal point of the debate going forward.

Evelyn Douek:

Awesome. Well thank you for keeping tabs on that for us and bringing us the update.

John Perrino:

Thank you both.

Evelyn Douek:

Okay, so Alex, last week we talked about the Discord server where a big leak of US national security documents occurred, and this week the platform released a blog post about its response to the incident. Now, it's still pretty high level and vague because it says investigations are ongoing, but we're starting to get a little bit more color around the content moderation challenges here and the company's response. And I think there are sort of two distinct issues. There's the issue of intelligence documents being leaked, and then there's the fact that this was a racist server with lots of racist memes that breached Discord's terms of service. The company wanted to make clear that both are against its terms of service and that it didn't catch them. We've seen people saying that, I don't know, the intelligence community or Discord should be more proactively monitoring for leaks like this. What's your take on it?

Alex Stamos:

So you hit the nail on the head, which is that Discord is really dealing with a two-front war now that they've been part of these leaks. We should tell everybody, although anyone who listens to this podcast probably knows, that the leaker was arrested, and there was no new indication that he's an intelligence plant or anything like that. The Bellingcat report this was all based upon, that he was showing off to his friends, turns out to be accurate. There were some people thinking, oh, this is actually FSB or SVR and they're just trying to run it through Discord or whatever. But no, it turns out that reporting was accurate. So like you said, they're fighting a two-front war here.

The first front is people making these claims that Discord needs to be monitored more, or that Discord needs to monitor for classified data. No platform monitors for classified data. Why? Because there's no way to tell if something's classified or not. So in this situation, you're talking about photos of pieces of paper where somewhere, in Times New Roman or Helvetica, it says TS/SCI, NOFORN, and a couple of other codes for what classified compartment these documents are in. That is trivially fakeable.

It's also trivially removable if somebody knew that things were being detected by those markings. And so there's nothing to hold onto, even for a human moderator. If a human being looks at a piece of paper with a map of Ukraine on it and somebody has marked TS/SCI on it, there's no way of telling whether that's actually classified or not. And if a human being can't tell, you certainly can't build algorithms that could tell. I've heard some crazy ideas, and they are crazy, about using PhotoDNA and other perceptual hash algorithms to do this kind of stuff. That would require the US government providing a perceptual hash for every classified document, which is not going to happen.
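A toy illustration of the point: a "classified document" detector can only key on the markings, which are just text, so it fails in both directions. This is an illustrative sketch only:

```python
import re

# Matches common classification banner strings like "TS/SCI" or "NOFORN".
BANNER = re.compile(r"TS//?SCI|NOFORN", re.IGNORECASE)

def looks_classified(text: str) -> bool:
    return bool(BANNER.search(text))

# A real leak with the banner cropped out sails through...
print(looks_classified("Ukraine force disposition map"))   # False
# ...while a joke with a faked banner gets flagged.
print(looks_classified("my lunch order: TS/SCI NOFORN"))   # True
```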

Evelyn Douek:

I can't see any problems with that. How could that go wrong?

Alex Stamos:

It could go wrong in many, many ways. Partially because the government doesn't have one repository of classified docs, so you'd have to create one: here is the one place you have to break into. You might as well put up a huge neon sign: spies, apply to work here.

Evelyn Douek:

[inaudible 00:15:46].

Alex Stamos:

And then the other is that perceptual hashes are not built for that threat model. A perceptual hash is supposed to work so that if you have a piece of CSAM, you can reduce it to a numerical fingerprint that does not immediately jump out to somebody as being CSAM. But people have demonstrated, especially with PhotoDNA, but even with some of the more modern perceptual hashes, that with new ML generation systems you can now recreate the images that went in. So it would be an incredibly stupid idea for the US government to publish PhotoDNA hashes of classified documents, because you're just triggering a huge amount of research into reversing them. So short of something ridiculous like that, there's really no good solution on the classified side.
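For a sense of how perceptual hashing works, here is a toy average-hash in Python. This is a teaching sketch only; PhotoDNA's actual design is not public. The key property is that the fingerprint survives small edits, which is also why a published fingerprint leaks enough structure to invite reconstruction attacks:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Reduce a tiny grayscale image (e.g. 8x8, values 0-255) to a 64-bit
    fingerprint: each pixel brighter than the mean becomes a 1 bit."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

original = [[(x * y) % 256 for x in range(8)] for y in range(8)]
edited = [row[:] for row in original]
edited[0][0] = 255  # a small edit barely moves the hash

distance = hamming(average_hash(original), average_hash(edited))
print(distance, "bits differ:", "match" if distance <= 10 else "no match")
```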

But the other front Discord is dealing with here, I think, what they're realizing, is the fact that they're in the news, and they're in the news for... As Casey Newton said, I Googled "thug shaker" and it turns out to not be great. It is a racist meme. You can have 20 or 30 teenagers and young national guardsmen effectively hanging out, and they can get a little racist, and they can yell anti-Semitic stuff while they're at the gun range and all that kind of stuff.

And Discord has an interesting trust and safety model in that they're kind of like Reddit, in which if you run a server, you are mostly responsible for the moderation, but they do have a baseline level of moderation they do everywhere. So I've confirmed with them, they do child safety work. You can't opt out of child safety work, but you can decide that you're not going to look for certain kinds of racist messages and hate speech and such. And I know this for a fact because my class is taught on Discord: my students can go build systems that generate a bunch of hate speech and then build bots to detect it, and we don't have to worry about Discord front-running our students and detecting any of this stuff, because they don't. Obviously my students are not going to upload CSAM.

We use pictures of kittens versus adult cats. That's how we do CSAM: it's like a naked kitten, from a classifier perspective. But short of that, you can get away with a lot if you're running a Discord server yourself, if everybody who's part of the server is basically in on it and nobody is reporting. And so I think this is a smart move for them to put this blog post out, but they are going to have this challenge going forward, which is that every reporter in the world is now trying to infiltrate underground Discord servers. And the New York Times loves to run "we saw something online we did not like" stories, and there's plenty on Discord that the New York Times is not going to like.
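As a flavor of the kind of server-level moderation bot students build, here is a minimal sketch using the real discord.py library. The blocklist and token are placeholders, and real systems would use classifiers rather than keyword matching:

```python
import discord

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return  # ignore our own and other bots' messages
    if any(term in message.content.lower() for term in BLOCKLIST):
        # Server-level enforcement, layered on top of Discord's
        # platform-wide baseline (e.g. child safety scanning).
        await message.delete()
        await message.channel.send(
            f"{message.author.mention}, that message violated server rules."
        )

client.run("YOUR_BOT_TOKEN")  # placeholder token
```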

Evelyn Douek:

Only clothed kittens in Discord servers from now on.

Alex Stamos:

Well at least for CS 152, for our class.

Evelyn Douek:

I'd guess most people opting out of the racist content moderation aren't doing it for academic purposes. And then they have all of these other issues around audio content moderation being significantly harder than Reddit-style text-based content moderation as well.

Alex Stamos:

Right. They have audio, and they have video too if you pay to upgrade the server. But yeah, they've got a lot going on. And the fact that you've got lots of young people using it means it's a real challenge. From my perspective, it is reasonable for them to let people run servers like this one. Yes, it sucks, but I don't think we should go around and police it, and you're not going to make 17-year-olds less racist by shutting down all their Discord servers. What they really need to think about is the child safety side, because they do have so many teenagers, so many young people. A reasonable area of focus is where the adults are. In this case the kids were grooming the adult to leak them classified information, but you could see that grooming going the other way in a sexual context, and that being really, really bad.

Evelyn Douek:

Okay, so huge news this week, Alex. Substack CEO Chris Best emerged from under the rock where he'd apparently been for the last five years and answered questions on the Verge's Decoder podcast that showed he had somehow managed to be the last person on earth who hadn't heard about the content moderation headaches that come with being a platform CEO, or hadn't really thought about them. This is in the context of Substack launching its Twitter-style platform, Substack Notes.

And Nilay Patel asked him a bunch of questions about content moderation, and Chris basically refused to answer. So he was asked, "we should not allow brown people in the country," would that be allowed on Substack Notes? And Chris replied, I'm not going to get into gotcha content moderation. To which Nilay replied, this is not a gotcha, I'm a brown person. And Chris said he had a blanket policy of saying, I don't think it's useful to get into, would you allow this or that thing on Substack. And this is in a context where the number one thing a lot of people want around content moderation is transparency. So if you can't even tell us what is or is not allowed within your rules, that's a step-one fail. But have you heard this interview, Alex, and what are your thoughts on it?

Alex Stamos:

Yeah, I've listened to the podcast. It's on Decoder with Nilay Patel, which is a fantastic podcast to listen to every week, but I strongly recommend our listeners listen this week to the interview with Chris Best. There's a lot in there. I'm going to have to say I am not neutral on Substack right now, and I'm not neutral for the exact same reasons that Nilay was pulling out of Best, which is that Substack has effectively given up on their responsibility to do any kind of content moderation, not just on the Notes feature, but on their fundamental product, their newsletter product. Because they've got this theory, and Best kind of explains it, that effectively all the bad stuff that happens online is driven by online advertising and algorithmic feeds. And so because newsletters are not algorithmic and don't have advertising, you can effectively run a free speech platform and everything will be fine.

This is a theory that has been pushed by one of your old Harvard colleagues, and it was an irritating thing when I heard it quoted at me by academics and NGO folks who were just trying to make themselves feel better and feel morally superior to anybody who's ever worked for an advertising-supported product. But then it was just irritating. Now we're seeing it weaponized. This theory that everything bad comes from online advertising, that it is the only thing that has ever caused anything bad online, is now being used by Substack as an excuse. Substack, in my opinion, has a greater responsibility for the content they carry than the vast majority of platforms, because they are directly paying the content creators. The entire model here is: you give a credit card number to Substack, Substack keeps something like 20% or 30%, and then they pass the rest of the money through. They do a direct ACH transfer into the bank account of the person who's writing the blog.

As of right now, one of the top posts from one of the top Substacks is a 42-minute video made by a Substacker about one of our colleagues that is just completely full of lies, that is intended to ruin her life, to make people think that she is this horrible person. He has called her one of the most dangerous people in America, which is pretty clearly a call for violence. And it is 42 minutes of bullshit, right? Of complete and total misinterpretations of things she said, and of just making stuff up about her. He is being paid thousands of dollars a month through Substack. Substack is taking credit cards and then paying this man thousands of dollars a month, in the same way that the New York Times pays their employees.

And so from my perspective, Substack has a much greater level of responsibility, because they are creating an economic model where this person can incite violence against one of my colleagues, can say all these things about her that are not true, and can make that an economically viable model, because he is able to build an audience on Substack. So I'm just going to speak to people who work at Substack. If you're listening: one, your CEO is totally wrong. He's working off of a completely incorrect assumption about what causes abuse online, and this is not going to hold. I have seen this over and over again. I saw it at Facebook, I saw it at Yahoo, we have seen it with Cloudflare, we've seen it with other companies that try to wash their hands of responsibility for content, and it always ends in tears. And that is going to happen here.

Something horrible is going to happen because of Substack, and it is going to be much harder for Substack to figure out what to do then than it would be to have reasonable policies right now about the kind of content they are going to pay for. Again, they are paying people for content. The idea that advertising is the only bad thing that's ever existed is just completely and totally ridiculous. So I definitely think everybody should listen to this Decoder episode, and I think people should tell Substack whether or not they want this kind of content to be up there for pay.

And I think people who are writers on Substack are really going to have to think about whether this is a platform they want to be on. When you look at the top 10 and you look at the kind of disconnection from the truth that a number of the people in the top 10 have, it's starting to get really, really... Not just frustrating from a disinformation perspective, but actually dangerous, when you have individuals who are going after individual people and trying to destroy their lives.

Evelyn Douek:

Yeah, and I mean, one of the other remarkable things about the interview was that he just wouldn't own it. If you're going to be the kind of guy that wants to let this stuff be on your platform, then at least own it openly and say, yes, we are. This is what Parler or Truth Social or whatever did. They said: these are our values, this is who we are, this is what we want to promote and be. Whereas in this interview he just kept dodging the question and didn't want to say, yes, this is who we are.

Alex Stamos:

Yeah, and that kind of dodging, trying not to be responsible for content moderation, worked fantastically for Mark Zuckerberg. That worked out great. It's incredibly stupid. You look at all of these examples, and they're glomming onto this idea that they don't do an algorithmic feed. The funny thing is that Notes is effectively algorithmic; it's got a discovery mechanism built into it. So they have built a straight-up Twitter competitor. They're going to have all of the Twitter responsibilities on top of the responsibilities you have as the publisher of paid content.

Evelyn Douek:

One of Substack's finely tailored policies, by the way, is: we don't allow content that promotes harmful or illegal activities. So, any content that promotes harmful activities. That's a very narrowly tailored "free speech for all" right there.

Alex Stamos:

Well, you can tell from their policies that they haven't really enforced them, because there's no subtlety at all, right? You look at Twitter's policies, even now, and they're incredibly dense. There's all this detail about what exactly they mean by these things. And Substack has these super broad policies because they never actually enforce them.

Evelyn Douek:

Okay, so that's the perfect segue. Let's go over to our Twitter Corner then. So Elon, I think, is still CEO, although he did say in an interview last week with a BBC journalist that he's promoted his dog to CEO, so maybe he's technically upheld that end of the bargain to step down.

Alex Stamos:

This is kind of like when, I believe, the king of Thailand made his dog the air minister or something like that. Yes.

Evelyn Douek:

So it might as well be though, given where we are. Although this morning Twitter published a blog post that I swear must have been set for delayed publication from the old guard, and it only just went live because Radiant is going through...

Alex Stamos:

I'm pretty sure Yoel Roth wrote the first version of this document.

Evelyn Douek:

It's amazing.

Alex Stamos:

It's out of a time warp. It just came out of a single...

Evelyn Douek:

Yeah, you go through a wormhole to July 2022, and you get this post that says "Freedom of Speech, Not Reach: An update to our enforcement philosophy." And it talks about how they believe it's their responsibility to keep users on the platform safe from content that violates Twitter's rules, and so, consistent with their freedom of speech, not reach philosophy, they're going to add publicly visible labels to tweets identified as potentially violating their policies, letting you know that visibility has been limited. And this is that ClickHole meme: heartbreaking, the worst person you know just made a great point. Because this actually would be a really good policy: to have much more visibility around when they're doing this kind of reduction in amplification as a policy intervention. And they're saying they're working on the ability to let people appeal when they get these reductions and these labels. All of that sounds great.
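As a sketch of what such a transparent enforcement record might contain, with hypothetical field names rather than Twitter's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class AppealState(Enum):
    NONE = "none"
    PENDING = "pending"
    UPHELD = "upheld"
    REVERSED = "reversed"

@dataclass
class VisibilityLabel:
    tweet_id: int
    policy: str                    # e.g. "hateful-conduct"
    publicly_visible: bool = True  # label shown on the tweet itself
    downranked: bool = True        # reach limited, content left up
    appeal: AppealState = AppealState.NONE

label = VisibilityLabel(tweet_id=123, policy="hateful-conduct")
label.appeal = AppealState.PENDING  # the author contests the call
print(label)
```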

Alex Stamos:

It's fantastic. It's actually great. And the labeling stuff, like you and I have talked about, I think having transparency about when things are labeled and when their reach is limited is a fantastic step and absolutely appropriate. I have a theory here, which is that there are a handful of remaining trust and safety people who have been highly promoted at Twitter, who have benefited from the Musk world, but are now realizing this is going to be a horrible black stain on their LinkedIn. There's a line in Moneyball where the general manager asks the manager, why aren't you playing the team the way I want you to? And he says, I'm playing the team in a way I can explain during job interviews.

And I think that's what you're seeing: the trust and safety execs who are left at Twitter playing the team in a way they can explain in job interviews, even if Musk doesn't like it. They cannot be seen as going crazy MAGA, as being part of painting over the W, of all the nutty stuff he's done. And so they're going to do their jobs as best they can until they get fired, which is pretty much inevitable. Being the head of content moderation for Elon Musk is like being the number two at Al-Qaeda, right? It is not a job that you can hold for a serious period of time. You do not get a gold watch at the end of that career.

Evelyn Douek:

Yeah. Well, good work, team. This is one of the best policy announcements we've seen from a trust and safety team in a little while. So we wait to see what enforcement is going to be like, but it seems good. Back on Earth-1, though, things continue apace. So a trailer of Twitter CEO Musk's interview with Fox News host Tucker Carlson was released, in which Musk says that the degree to which various government agencies effectively had full access to everything that was going on at Twitter blew my mind. And when Tucker asks if this includes people's DMs, he says yes. So that's bananas. I don't know what to make of this claim that the US government is just reading people's DMs. Is it at all plausible, Alex?

Alex Stamos:

I mean, if that is what was happening, then that was a federal crime. It would be a violation of the Stored Communications Act for the government to just read people's DMs. They have to do so under specific lawful process. And if that's true, then Mr Musk really should provide that evidence to a US attorney so that the former management of Twitter can be prosecuted under the Stored Communications Act. Do I think that's what happened? No. I think what's happening is that Musk has never really worked in a situation where you have to handle search warrants. The fact that there was an entire legal team whose job it was to take search warrants and wiretap requests and other kinds of lawful process from around the world, and who then had to provide that data, was kind of a shock to him, and he is conflating that with some kind of grand conspiracy. But heads up, Mr Musk.

It turns out when you buy one of these platforms, you end up buying a bunch of legal responsibilities that have been defined by Congress. He could also be talking about FAA 702, which a lot of people have complained about. Our mutual friend Jennifer Granick is a great voice on FAA 702, and it is up for renewal right now. If he has complaints about FAA 702, again, he should talk about that publicly and not just toss random stuff to Tucker, because it is important for Americans to understand how 702 is being used. There are supposed to be limits there. It shouldn't be used against Americans except effectively by accident. And so if 702 is being abused, then he absolutely should be saying that publicly.

Evelyn Douek:

But of course it's crazy and a witch hunt for the FTC to be investigating Twitter's privacy practices against this background. And so this week, the GOP subpoenaed FTC Chair Lina Khan, saying that the FTC has abused its statutory and enforcement authorities in investigating Twitter's compliance with its privacy obligations, because everything is... Yeah, we've talked about this before and why it makes total sense that that investigation would be ongoing. There was a bunch of other stuff: NPR's no longer posting on Twitter; it seems like Twitter is either resisting, ignoring, or completely unaware of a bunch of takedown requests in Brazil. Was there anything in particular that you wanted to cover about Twitter in the last week before we move on?

Alex Stamos:

No, I mean, the NPR thing's not shocking. They don't like the state-sponsored label. A number of people have pointed out that today was supposed to be the day that SpaceX tested Starship, which is a really, really cool rocket, much of whose R&D was paid for by the US government. With $15 billion in government awards, I think SpaceX is much more state-sponsored than NPR. So I'm looking for Musk to give himself the state-sponsored label so that we're applying it fairly.

Evelyn Douek:

Consistency. Sounds good. Okay. So an undated document in the Discord leaks suggests that Russian fake account operators are boasting that they are detected by social networks only about 1% of the time when they set up these fake accounts, which would mean a whole bunch of undetected accounts. Now, there are all sorts of caveats here. Nothing suggests that these accounts are having an impact; it's entirely possible that this Russian boasting is not entirely connected to empirical reality; and it's possible that the 99% are the long tail of accounts like Bot19645 that just like posts without doing anything in particular. But were you surprised by this document, and anything to say about it?

Alex Stamos:

Yeah, so this was really interesting. One of the interesting parts about it is that it talks about groups outside of the Prigozhin-aligned groups that often get the focus from American intel teams. They're specifically talking about GlavNIVTs, a part of the Russian government, which has contracts with a company called Fabrika that is running the infrastructure itself. So like you said, it is hard to know. This is an internal US document, and obviously, one, US intelligence agencies have penetrated this part of the Russian government very deeply, because they're talking about how these people are communicating internally. Like you said, Russia is an authoritarian state, and one of the problems we continuously see in these situations is that you have a lot of motivation, when you're inside an authoritarian state, to overstate how effective you have been in supporting the goals of the great leader, Putin in this case.

It's just like how the US was caught flatfooted at the end of the Cold War because Russia was internally lying to itself about economic stuff. A number of people in the intelligence community know that just because you're reading the other guy's email doesn't mean that people aren't being misleading in that email; they have their own motivations. We do have to put a discount on these claims. That being said, I think it is totally accurate that Russia has lots of uncaught accounts. As Renee said in her Mastodon posts, a lot of them are probably the accounts you use to promote other content, the John-plus-nine-numbers accounts and such, which are the kinds of accounts you use to vote things up and try to make things trend. But yeah, I think it's an important thing.
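One cheap heuristic anti-IO teams use for exactly this long tail, sketched here with an illustrative pattern and threshold rather than any platform's actual rule:

```python
import re

# Throwaway amplifier handles often look like a common-name stem plus a
# long default digit tail, e.g. "john946451382".
BOTLIKE_HANDLE = re.compile(r"^[a-z]+\d{8,}$")

def botlike_score(handle: str, likes_given: int, posts_created: int) -> float:
    score = 0.0
    if BOTLIKE_HANDLE.match(handle.lower()):
        score += 0.5
    if likes_given > 0 and posts_created == 0:
        score += 0.3  # pure amplification: engages but never creates
    return score

for handle, likes, posts in [("john946451382", 900, 0), ("evelyndouek", 40, 200)]:
    print(handle, botlike_score(handle, likes, posts))
```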

The problem here is there's no evidence that the US government has actually talked to the platforms about this. So we're kind of back to the bad old days of 2016, when you had TS/SCI documents talking about foreign influence campaigns, where if none of that information makes it out to the platforms that can actually take steps, there's no way to have a coordinated whole-of-society response. And so I do think this raises interesting questions about which of these things were briefed out to the appropriate people at the platforms so that they could be looking for Fabrika-specific bots, and whether there was any technical component there.

If they're this deep inside the loop, then you would hope that the US intelligence community would have their hands on actual indicators. Just one indicator, here's one VPN server, or here's one phone number that was used, could be the kind of thing that triggers a really good investigation by one of the platform teams that do anti-IO work. And I think those are the questions that should be asked now: what did the government do with this knowledge, or is it just floating around in these internal documents that got put into burn bags in Massachusetts that a 21-year-old was able to walk out with?

Evelyn Douek:

And I guess the politics around that right now are not conducive to the most...

Alex Stamos:

But it's a good reminder, and I hope it's a reminder that a number of people, especially in the US House, take to heart: America's adversaries are still building this stuff. As much as they want to paint the idea that Russia did absolutely nothing in 2016 and has done nothing since, that's just not true. Every authoritarian state on this planet is trying to manipulate the internet. A lot of democratic states are trying to do it too, although generally with less aggressive means. And so these companies need to do that kind of work. And if you care about the US having a leg up on its adversaries, then you're going to want the US government to at least give the platforms a heads up about what's going on.

Evelyn Douek:

Okay, so in an update that is wonderfully reminiscent of something like Veep, this is hilarious. The Arkansas bill that we talked about last week, the one imposing age restrictions on social media: it turns out no one's really sure what it covers. There's this fantastic exemption in the law that kind of makes no sense at all. I'm just going to read it. It does not include "a social media company that allows a user to generate short video clips of dancing, voiceovers, or other acts of entertainment in which the primary purpose is not educational or informative," and that does not meet the exclusion under another subdivision. So it seems, although the wording is not entirely clear, to exempt platforms that allow short video clips of dancing, voiceovers, or other acts of entertainment, which seems to describe... What's the first platform that leaps to mind when you hear those words?

Alex Stamos:

Your favorite.

Evelyn Douek:

So TikTok. Everyone's trying to ban it, but Arkansas is like, you know what? If they're not going to have access to it in other states, we really want those 16-year-olds to be able to get to it with no impediments in Arkansas.

Alex Stamos:

Arkansas needs their dances. They need to...

Evelyn Douek:

That's right. The bill's sponsors believe that it should cover TikTok. I mean, it was intended to cover platforms like Facebook, Instagram, TikTok, and Snapchat, but unfortunately that might not be what they actually said in the law. One other thing is that it absolutely doesn't, and wasn't intended to, cover YouTube, because there's a different exemption for companies that offer cloud services and get less than 25% of their revenue from operating a social media platform, which would apply to Google. And so it does not apply to YouTube, because nothing bad at all ever happens on YouTube. I don't know how they managed this. It's amazing. I've just got to tip my hat to YouTube on that one, honestly.

Okay. Can I get a Legal Corner announcement please? Great. So this Wednesday, the Supreme Court is hearing argument in a case that could have significant ramifications for online speech. I've talked about it briefly a while ago on the podcast: Counterman v. Colorado. It's a case about a man who, over the course of two years, sent hundreds, maybe thousands, of messages to Coles Whalen, a local musician he didn't know. Whalen never replied and often blocked the accounts, but he just continued to send these messages. Some of them were threatening, saying "fuck off permanently" or "die, don't need you." But many of them were simply confusing or mundane, like "I'm going to the store, would you like anything?" or random memes or frog emojis. Just completely delusional. It was clear that this guy thought he was in some sort of relationship with Whalen.

Alex Stamos:

I mean, there's nothing creepier than, honey, I'll be home soon with the milk. And anything else I should get at the store? Actually, I could see that being pretty terrifying.

Evelyn Douek:

Fantastic.

Alex Stamos:

So I understand, for sure. Even for somebody who would probably not be the target of these, it would be pretty creepy to have somebody think they're in a parasocial relationship with you.

Evelyn Douek:

And so Whalen was understandably terrified, and it caused significant emotional distress and a bunch of terrible ramifications for her life. This has been framed by the parties, in the arguments and in the media, as a case about threats: were these messages sufficiently threatening, and did Counterman subjectively intend to threaten Whalen, or is it enough that a reasonable person would have interpreted these as threats? But I, along with Genevieve Lakier and Eugene Volokh, have written an amicus brief to the court arguing that that's all wrong, for a lot of the reasons you just said, Alex: these messages don't need to be explicitly threatening to be very scary and have negative ramifications. This is a stalking case, as we argue, and he was in fact prosecuted for stalking, not for making threats. There's no constitutional right to send a barrage of unwelcome messages to a person who doesn't want to receive them; it contributes nothing to public discourse and is not great for democracy.

And so it's a category error to sift through these two years of thousands and thousands of DMs and ask, can we find one needle in the haystack that was explicitly threatening? I should say to listeners that our brief is in the minority; that is not the dominant way this case is being talked about. But we're really concerned that if the court conflates the two, the threats issues with the stalking issues, it's going to accidentally eviscerate a whole bunch of online stalking protections, with pretty dangerous consequences. So that's being argued on Wednesday morning this week, and we are just hoping for either a question from someone on the court or a footnote in the judgment to say that, no matter what they do, they are not eviscerating stalking laws. So stay tuned.

Alex Stamos:

Should I be surprised that you're on the same amicus as Eugene Volokh?

Evelyn Douek:

You may be surprised, but it's also tactical because we want to get read by absolutely everyone on the court.

Alex Stamos:

I see.

Evelyn Douek:

So by having a spread of views or a spread of different people who are making this argument, we just want four or five justices to read the brief and find the argument persuasive. So we'll see. That's the strategy behind that.

Alex Stamos:

Right. So he's your Ted Olson then? You're the Olson and Boies of First Amendment law?

Evelyn Douek:

Sure. That is exactly what we were going for. Perfect. All right. And I think that's it, unless you had anything else to cover.

Alex Stamos:

So I have a sports update. The Sacramento Kings defeated the Golden State Warriors in game one of the NBA playoffs. For those who don't know, I grew up in Sacramento, which is actually a horrible place to be a teenager, but a great place to be a parent of a teenager. And one of the only fun things I got to do is, my dad had season tickets to the Sacramento Kings from when they moved to Sacramento from Kansas City. So I've personally probably seen 150 wins and 400 losses over the last several decades. And the Kings are finally in the playoffs again, and they beat Golden State, which is kind of my bandwagon new team, and they're playing again tonight. So it is an exciting series.

Even if you're not a big basketball fan, I do recommend tuning in, because these are two very exciting, fun teams. And Sacramento is a famously loud and crazy place to play, and you can totally see that during this playoff series. The cowbells are coming out, which is a whole story about the LA Lakers calling Sacramento a cow town and such, which Golden State has never done, but certainly the Northern California rivalry here is pretty awesome. And I hope this becomes a rivalry that exists for years, the Kings and the Warriors, kind of like the Giants and [inaudible 00:42:30] back in the day.

Evelyn Douek:

Awesome. We will have to do a podcast excursion sometime to watch a game.

Alex Stamos:

To Sacramento?

Evelyn Douek:

Yes.

Alex Stamos:

Well, we should. We should go to Sacramento and we can interview a bunch of state lawmakers about California's child safety laws and then go to game seven. Perfect.

Evelyn Douek:

I mean, hitting all of my favorite activities.

Alex Stamos:

I'm trying to figure out a way to get Stanford to pay for tickets, and I think we figured it out.

Evelyn Douek:

We've done it.

Alex Stamos:

So some of the tickets are going for... The tickets that my dad used to have, unfortunately he's moved and given them up, are like 10K a pop now. So we'd have to do a lot of sponsor reads to get this podcast to pay for two tickets.

Evelyn Douek:

Wow. Well, if you've ever heard of a good cause, surely this is it, listener. Write in and sponsor two people in need. And with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode literally wouldn't have been possible this week without the research and editorial assistance and vocal sounds of John Perrino.

Alex Stamos:

The vocal stylings of John Perrino.

Evelyn Douek:

Exactly. A necessary element of the show; it would be a very weird show without that in there. And it is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. See you next week.