Moderated Content

MC Weekly Update 5/8: Solving the Head of State Problem

Episode Summary

Alex and Evelyn discuss Utah's new age verification law going into effect and the state's completely unrelated spike in VPN downloads; the horrific footage of the Allen, Texas, shooting on Twitter and how the understaffed platform is coping with one of the hardest content moderation challenges (but it's okay, Musk is focusing on other really important issues). Also, Meta's adversarial threat report (and Twitter's lack thereof); the Ninth Circuit decision in a jawboning case brought by RFK Jr. against Elizabeth Warren; the Brazilian government's latest attempts to limit internet freedom; and Bluesky's solution to the head-of-state content moderation dilemma.

Episode Notes

Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Twitter Corner

Legal Corner

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!

Episode Transcription

Evelyn Douek:

We stopped asking for positive reviews, unlike every other podcast, because we don't want to be annoying. But I have a question for the single person that gave us a two-star review, that went out of their way to just give us two stars. Who hurt you? We are hardly making it big. We know we don't have a whole bunch of podcast reviews, but thank you. Thank you for that.

Alex Stamos:

Yeah. Show me on this doll where the podcast touched you and hurt you.

Evelyn Douek:

Yeah, that's right. I won't consider us having made it until someone comments about my vocal fry. Then I know we've hit the big time.

Alex Stamos:

Oh my God.

Evelyn Douek:

You're very good at that.

Hello and welcome to Moderated Content's weekly, slightly random and not at all comprehensive news update from the world of trust and safety, with myself, Evelyn Douek, and Alex Stamos. Utah is the site of one of the funnest stories of the week. Pornhub blocked all internet users in the state from accessing its pages in protest of an age verification law that went into effect this week that requires adult sites to verify that people accessing their sites are over 18. Instead of seeing porn, users are now met with a video message from an adult performer who talks about the law and says that until a real solution is offered, the website will be inaccessible in the state. In completely unrelated news, VPN downloads have spiked dramatically in Utah. So I think people are finding their way around that one. And in the meantime, the adult entertainment industry has filed a First Amendment challenge to the law.

So we will see how long the law is actually in place at all. They have pretty good precedent at their backs. This is not the first time that states have tried to do this, and generally these laws have been struck down on First Amendment grounds. So we will see. One thing I'm curious about, Alex: we talked a couple of weeks ago about state-based geoblocking. I think we were talking about it in the context of the TikTok ban out of Montana, and there was this move from the industry to say, "Look, this isn't even possible. We can't ban things on a state-by-state basis." And that was one of the ways in which they were lobbying against the laws. I'm curious if the fact that Pornhub is doing exactly that in this case is a bit of a tell, whether that tells us anything about those arguments.

Alex Stamos:

Yeah. So it is not impossible to try to block on a state-by-state basis. It is impossible to be a hundred percent accurate. As we talked about at the time, that was a slightly different context, in which you're talking about app stores, and the app stores already have complex systems to do blocking on a country-by-country basis. Both for decisions made by the app store runners themselves, Apple and Google specifically, and by the people publishing their apps, who can choose where they're available. And those are pretty high resolution, just based upon how internet IP space is often reasonably distributed on a per-country basis. But when a carrier like AT&T distributes IP addresses to your mobile device, knowing whether you're in Utah or Colorado or Nevada or anywhere in the general Western United States is not how they do it.

And so I do think it is possible; you can get GeoIP databases. They are famously bad. There are famous stories about these GeoIP databases: if they do not know where you are exactly in a country, they return a result at the geographical center. And so as a result, there's this poor family in Kansas who live, apparently, at the geographical center of the continental United States, who are constantly getting their house raided and served with subpoenas and such, because every GeoIP lookup that doesn't know exactly where a user is returns their farmland. So that's what you're dealing with here. The idea that you could block accurately is wrong, but you'll catch maybe 90% or so, I'm just going to spitball 85, 90%, if you try to block an entire state by doing GeoIP. And that's probably what you're seeing here and what's driving the VPN downloads. You can totally see Utah as a state where legislators might presume that nobody likes to look at porn. And the actual expressed preferences of individuals turn out to be different.
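For the curious, here is a minimal sketch of the kind of state-level GeoIP check Alex is describing, assuming MaxMind's GeoLite2-City database and the geoip2 Python library. The database path and the fail-open choice are illustrative assumptions, not how Pornhub actually implements its Utah block:

```python
# Minimal sketch of state-level geoblocking as described above.
# Assumes MaxMind's GeoLite2-City database and the `geoip2` library;
# the file path and fail-open behavior are illustrative choices,
# not how any real site implements its block.
import geoip2.database
import geoip2.errors

BLOCKED_STATES = {"UT"}  # ISO 3166-2 subdivision codes to block

reader = geoip2.database.Reader("/path/to/GeoLite2-City.mmdb")

def should_block(ip: str) -> bool:
    try:
        resp = reader.city(ip)
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown IP: failing open here is a policy choice
    state = resp.subdivisions.most_specific.iso_code
    # GeoIP is fuzzy: mobile carrier ranges often resolve to a regional
    # centroid, so some Utah users slip through (and some neighbors get
    # blocked). Expect roughly 85-90% accuracy, as discussed above.
    return state in BLOCKED_STATES
```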

Evelyn Douek:

Right. It's okay though because next step, they can ban VPN downloads from the state and see how that goes. And then they'll ban the app stores that allow VPN downloads from the state and so on and so forth.

Alex Stamos:

So that would be the People's Republic of Utah option.

Evelyn Douek:

Right, exactly.

Alex Stamos:

Would you say that would pass a constitutional test as a professional?

Evelyn Douek:

I honestly feel like I'm being gaslit with all of these laws, 'cause I feel like all of these issues were decided in terms of being beyond the First Amendment pale. But clearly, other people are counting noses and seeing different things now. So I don't know. It worries me a little bit how confident a lot of these lawmakers are, passing these things that seem pretty far beyond the First Amendment pale. So we'll see.

Alex Stamos:

It seems like they're trying to run the same playbook that happened for abortion: with the new Supreme Court, a lot of people thought this was the chance to get rid of Roe, and they passed laws to specifically target it. I don't see a six-three majority for allowing individual states to ban apps. It's just not an area where I think the Supreme Court has actually moved. In fact, a bunch of these cases, as we talk about on the podcast all the time, go the other way, making it much harder for both states and private industry to take down speech.

Evelyn Douek:

Yeah, I completely agree. I just don't think ... I can understand why maybe states want to have a go, given the changed composition of the court, and think that there's something worth trying. But to my mind, these are pretty simple cases. But we will see. And if I am wrong, here is a nice recordable, replayable clip of me making a fool of myself. Okay, moving now to the Twitter corner.

Okay, so the big and sad story this week, of course, is the footage from the Allen mall shooting. And of course, because everything is on the internet, everything is a content moderation story. The Twitter link here is that viral videos of some of the carnage have been circulating on Twitter. Twitter has come in for a lot of criticism about this, and there's been speculation that part of it is reduced staff and reduced resources behind content moderation. And I'm curious if you've seen anything, Alex, that would suggest that that is the cause of this. I'm reminded of several other incidents, of course, tragically, over the past half decade or so, the Christchurch massacre and the Buffalo shooting, where we've seen this kind of thing happen on social media before, and every time, the social media platforms are criticized. At the same time, this does seem like it was fairly prominent on Twitter. Do you think that part of this is just reduced resourcing at Twitter?

Alex Stamos:

I think so. It's reduced resourcing, but against the background of a very complicated policy situation. So just so people understand, this whole area changed during the Christchurch shooting, because the Christchurch shooting was live-streamed on Facebook Live. Only about twenty-some people saw that, but enough of them recorded it, because these people had been tipped off on 8chan that the shooting was coming, that they were able to re-upload the video. In that case, the video was shot by the gunman. It was a first-person view of him killing innocent Muslim people. And the goal was effectively to try to trigger a race war. He wanted copycat shooters, he wanted violence against Muslims and other people he considered invaders. So the companies came up with policies coming out of that around preventing violent video, video of shootings and the like.

And then they had to make it more nuanced, because it turns out that celebrating or trying to inspire copycats is not the only reason people post video showing the outcome of a shooting. It can also be because you're criticizing it, you're criticizing gun laws. A lot of the content I've seen around this is people saying, "Look at the honest, unvarnished truth of what our gun laws get us," especially as applied in Texas, because this was a shooting in Texas. You can also use it against the shooter's ideology; this guy looks like he was a white supremacist. And there have even been organized groups, such as human rights groups that operate in the Middle East, who have criticized the platforms for over-censoring here, because from their perspective, they want people in Syria, for example, to be able to post videos of atrocities that are happening there.

So it is complicated for the companies to come up with these policies, and as a result, the application is very human-based. Yes, you can classify for "does this have blood, does this look violent," and classifiers exist for that. One, I think those classifiers might be breaking here. That might be what's happening; we're seeing over and over again that Twitter's asynchronous content moderation systems seem not to be able to keep up with the load, and that might make it harder. But you also need human beings to then look and say, "Is this person celebrating the attack, or are they criticizing it?" At a minimum, you almost always, and Twitter has done this historically, blur out the really violent images and put them behind the various content restrictions you have.

In Twitter's case, you can click a button to un-blur the video or to click through it. Sometimes there's a warning that this is an extremely violent video and you have to be logged in to see it, so it can't be seen by non-authenticated kids and such. And that doesn't seem to be happening. So I do think there is a systems and person failure here. But from a policy perspective, I just want people to keep in mind that this is actually quite complicated: what kind of speech do you want to allow around these violent incidents?
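As a rough illustration of the two-stage flow Alex describes, here is a schematic sketch: an automated classifier gates likely-graphic posts behind a blur/interstitial right away, and a human reviewer later judges intent. All names and thresholds here are hypothetical; this is not any platform's real pipeline:

```python
# Schematic sketch of the two-stage moderation flow described above:
# a classifier gates likely-graphic posts behind an interstitial
# immediately, and a human later judges intent (celebration vs.
# criticism or newsworthiness). Names and thresholds are hypothetical.
from dataclasses import dataclass
from queue import Queue
from typing import Optional

GRAPHIC_THRESHOLD = 0.8  # illustrative classifier cutoff

@dataclass
class Post:
    post_id: str
    text: str
    video_url: Optional[str] = None
    state: str = "visible"  # visible | interstitial | removed

human_review_queue: "Queue[Post]" = Queue()

def graphic_violence_score(post: Post) -> float:
    """Stand-in for a real computer-vision classifier."""
    return 0.0  # stub: a real system would score frames of the video

def on_new_post(post: Post) -> None:
    # Stage 1 (automated): blur first, ask questions later.
    if post.video_url and graphic_violence_score(post) >= GRAPHIC_THRESHOLD:
        post.state = "interstitial"   # click-through warning, login required
        human_review_queue.put(post)  # stage 2 happens asynchronously

def human_review(post: Post, intent: str) -> None:
    # Stage 2: a human judges what classifiers can't, which is intent.
    if intent == "celebration_or_incitement":
        post.state = "removed"
    else:
        # Criticism, news reporting, documenting atrocities:
        # stays up, but behind the warning screen.
        post.state = "interstitial"
```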

Evelyn Douek:

And I think that that's one of the big progress points of the past few years: getting out of this false take-down/leave-up binary of content moderation to deal with exactly these kinds of difficult things, where you have genuinely disturbing, horrific footage that many, many people don't want to see, but that has some news or other public interest value, or some reason why it maybe shouldn't just be completely eradicated from the internet. There is a big difference between accidentally slipping across something that you won't ever be able to get out of your head again as you're scrolling mindlessly through social media, and consciously choosing to see that footage, or being aware that it's there, when it's behind a warning screen.

Alex Stamos:

So it is a tough situation, but it does feel like Twitter is falling down versus what they used to do here. You shouldn't have to scroll through Twitter and see all this violent imagery unless it's something that you intentionally want to look at. If you intentionally want to see it, I do believe it should be up, because I think this is an important part of ... This is something I say to my students at the beginning of my trust and safety class: that I'm going to treat them like adults. And part of being an adult is dealing with the reality of the world as it is. That doesn't mean people should be forced to, but if you care about these issues, you should have the ability to see the reality of what the political choices we make as a country cause.

Evelyn Douek:

Well, importantly though, Musk is focusing on the real issues at Twitter. So he is-

Alex Stamos:

As is his way ... I think we could just rename this whole podcast "Elon Musk Is Paying Attention to the Real Issues."

Evelyn Douek:

Exactly. So you really have got to lead from the top. Musk is individually emailing NPR reporters to ask them if the NPR Twitter account is going to start tweeting again (it stopped tweeting in protest of being falsely labeled "state-affiliated media"), or whether the platform should reassign its handle. I'm sure that there is lots of stiff competition for the @NPR handle.

Alex Stamos:

Lots of legitimate uses for that other than National Public Radio.

Evelyn Douek:

And he's like, "Well, I've removed the label. I don't understand what the beef is anymore," not understanding that he has created an environment where it's just brand-unsafe at the moment to be on the platform, and not understanding that eroded trust doesn't just come back magically if you stop trolling your users for a day or two.

Alex Stamos:

Right. Remember when he did this right at the beginning, too? When advertisers stopped advertising, he called them out, opening them up to abuse.

Evelyn Douek:

Oh, yeah.

Alex Stamos:

And so if you're the CEO of a company like this, here's just a little tip: you do not want to make people believe that if they interact with your platform, and then for whatever reason decide not to, maybe their marketing budget changes or their social media manager changes, they're running the risk that you're going to send tens of thousands of abusive people after them and perhaps trigger a boycott. It's just not, like you said, brand safety; not the feeling that brands want to get when they use a platform.

Evelyn Douek:

Right. Well, at least our benevolent leader has changed his mind and allowed emergency services free API access. One of the really core, valuable public use cases of the Twitter API has been verified government and other public services giving weather alerts, transport updates, emergency notifications, that kind of thing. And many of them stopped doing that when he started charging an obscene amount for the API access that would've made it possible. This PR disaster could have been foreclosed if he'd paid attention to literally anyone who talked about the API issue two months ago, or whenever this circus started. But at least he has finally caught up on that one: the New York subway system isn't going to pay, and it might be useful to have them using your platform regardless. And over to Meta. Meta released its quarterly adversarial threat report, and you said there are a couple of interesting things in there, Alex.

Alex Stamos:

There are. So, as is normal, they talked about a number of coordinated inauthentic behavior networks, your favorite term, networks that are trying to manipulate the platform overall. And we see, once again, your standard folks here: Iran and China are specifically called out. There are also actions that look like they come from India, which is always interesting, actual offensive actions from India targeting people in Pakistan, India, Bangladesh, Sri Lanka, Tibet and China. So there are both disinformation operations happening as well as offensive operations. Nothing surprising there. What was really interesting, though, is that they also took down 24 Facebook accounts, 54 pages and four accounts on Instagram for a network from Venezuela and the United States targeting people in Guatemala and Honduras. This is another great example of what looks like politically interested actors in Latin America hiring an American company to help them manipulate Facebook. In this case, they specifically figured out the company. It's called [inaudible 00:14:13], I think that's how you pronounce it. It's a Florida company.

On their website, they advertise that they are a social media monitoring service, which is a legitimate kind of business that exists. But like a number of these companies, they have now been caught actually running their own disinformation network. So in this case, Facebook has banned that company from all of their services and told them to cease and desist. Because they're in the United States, that becomes the standard here: to C&D those kinds of actors. But it is interesting to see that there are American companies selling their services to do this stuff around the world.

Evelyn Douek:

I'm just having this image of taking a meeting with this social media monitoring company and getting the raised eyebrow of, "Would you like our premium services? If so, come this way."

Alex Stamos:

Yeah. But as it relates to the Twitter corner, what did we miss this quarter? Well, Twitter, once again, did not put out their own report on their own data. The odds that these people were operating only on Meta platforms and did not touch Twitter, I would put near zero. So for each one of these networks, almost certainly their networks are still up and running on Twitter without any prevention. And so if you care about the US trying to manipulate other democracies, you should care about that.

If you care about Iran and China trying to manipulate the world, you should care about that as well. So for everybody, of every political persuasion, I think this is a great demonstration of why you should care. Certain people, whose names maybe rhyme with [inaudible 00:15:36] or something like that, talk a lot about the fact that leftists are attacked by right-wing groups in the United States. Even people who believe Russia did nothing in the 2016 election should back Twitter having a group that does this kind of work. Anyway, I don't have to belabor the point. But this continued belief at Twitter, and among people who are supportive of Musk, that all of this manipulation happens on one side of global politics is just not true and not backed by the empirical evidence.

Evelyn Douek:

Well, over to the legal corner, for another popular bugbear of that particular part of the discourse. This week, the Ninth Circuit delivered a decision in a case where a bunch of plaintiffs, including Robert F. Kennedy Jr., were suing Elizabeth Warren over a letter that she sent to Amazon's CEO saying that Amazon should modify its algorithms so they're no longer directing consumers to a book titled The Truth About COVID-19: Exposing the Great Reset, Lockdowns, Vaccine Passports and the New Normal, for which RFK Jr. wrote a foreword. The argument here was that this was a violation of the plaintiffs' First Amendment rights because Warren, as a government actor constrained by the First Amendment, was trying to coerce Amazon into stifling their protected speech. Warren had posted this letter on her website as part of a press release saying Amazon was peddling misinformation about COVID-19 and telling them to remove it from their recommendations.

The plaintiffs wanted an injunction saying Warren should remove the letter from her website and issue a public retraction. But the court said no, this is not the kind of coercion that the First Amendment prohibits; this is quite clearly public political speech. It's absolutely true that government actors should not try to jawbone platforms into taking down protected speech, but this was just not the kind of thing that gets close to the line, and the court dealt with it pretty quickly.

As we always say when we come across these issues, there are tricky questions here about when government actors and platforms should be communicating about the kind of content that's on their services. And this, again, happens on both sides of the aisle; for example, Trump's team texting Twitter to try to get Chrissy Teigen's critical tweets removed. But this public political speech of Warren's is not, the court said, the kind of thing that is constrained. I personally don't think that politicians should be texting or publicly pressuring platforms to try to get them to remove or downgrade particular pieces of content. But the court is saying it's not unconstitutional. Okay, and over to Brazil.

Alex Stamos:

I don't have a sound for that. Sorry.

Evelyn Douek:

You don't have a sound for Brazil? Ole, ole, ole. I don't know.

Alex Stamos:

That's a good one. And this will be up. That's great. We should get a World Cup sound. That would be-

Evelyn Douek:

We should. All right, so we have been meaning to talk about Brazil for a while, so thank you to Jake for writing in to prompt us to do it this week, and to my student, Alan, for helping me learn a bit more about what's been going on, given that this is dramatically under-reported in the English-language media, I think, considering the craziness of what's going on over there. So there's this big showdown happening between the government and the tech platforms. The government is trying to push through a law that it has titled the Brazilian Law of Freedom, Responsibility and Transparency on the Internet, which are all good things. That sounds great, but it's also becoming known as the "fake news bill," because there's a bunch of stuff in there that we have seen in a bunch of other places, and that, as you know, would result in a whole lot of speech suppression.

Now, I'm relying on a rough-and-dirty translation of the bill. But there's a lot of stuff in there that you might compare to things in the European DSA that we've talked about, like systemic risk assessments and transparency reports, due process for users, and external auditing. But there's also a whole bunch of other stuff. There's a duty-of-care obligation that requires providers to proactively monitor for illegal content and fight against the dissemination of that content, and they risk losing their intermediary immunity if they don't uphold that duty of care, whatever that means. We know from many other countries, and from lots of discussion about these kinds of laws, exactly what that incentivizes: platforms will over-remove a whole bunch of legal speech in order to avoid getting in trouble. But it also means that government actors have this powerful tool of leverage to get platforms to remove things that they don't want online.

There's also a whole bunch of other stuff in there, including payments to journalism providers and a provision for court review of any platform moderation of governmental accounts, with courts able to order platforms to reinstate a governmental account within 24 hours if the platform removes it. So far, this bill looks like a whole bunch of other bills proliferating around the world that pressure platforms, but there's also been this whole political fallout around it, and there's been huge industry pushback. Google, for example, placed an ad on its search homepage in Brazil, at which point the government and the judiciary got really angry and alleged that the platforms were improperly interfering with Brazilian politics.

The Justice Minister ordered Google to take the ad down, said that they had two hours, and said that they would face fines of nearly $200,000 per hour if they didn't. And sure enough, Google did take it down. The bill at this stage has been withdrawn because it didn't look like it was going to get enough votes to pass. But who knows what's going to happen with this, and who knows what form the final bill will take. It's clear that there's a lot of momentum around this kind of thing happening in Brazil at the moment. And given the size of the market and the importance of it, this is really something that we should be watching.

Alex Stamos:

Yeah, what's depressing about this is that it demonstrates there's a bipartisan push for a much more controlled internet in Brazil. These kinds of ideas have traditionally been attached to right-wing populists like Modi and Bolsonaro, but once they open the Pandora's box of "we are going to use the internet to suppress our political enemies," that's something that is of interest to everybody. Larry Diamond, our colleague at Stanford who talks about the death of democracies all the time, talks a lot about how some of the things that kill a democracy are when people believe that the results of an election will not be upheld, that voting is useless, or when every political leader believes that once they're out of office, they will be arrested.

And we've got that happening in Brazil: Lula went to jail during the Bolsonaro years and was released, and now there are investigations into Bolsonaro. So as political parties switch positions in Brazil, the standard thing is going to be to arrest your political enemy. And I wonder if Larry would be interested in adding something about internet censorship to that list as well, as a demonstration that once you open up the idea of "Oh great, I can win by controlling the speech of my political enemies," that's an idea you can't tuck away even if the politics of the country change.

Evelyn Douek:

This seems to be a pretty standard step in the toolbox at this stage. And yeah, I think it's actually really disappointing that I haven't seen more coverage of it over here in the media. I don't know if you've seen much about it, or whether it's just my bubble, but this seems like a really important story and something that we should be talking about much more. Okay, and then one of my favorite stories this week: we talked about Bluesky, the decentralized Twitter alternative that took off last week. And I just love the solution that they've found to this problem that plagues most social media platforms as they start to take off, which is: what do you do with the accounts of heads of state, which are both really in the public interest, and which lots of people want to see, but which can be very problematic?

And there are all these difficult political issues about how to moderate heads of state if they happen, just hypothetically, to do something that you don't necessarily want on your platform. Well, Bluesky has solved the problem: they are just not allowing heads of state onto their platform at the moment, which is just ... Someone should give them a gold star. This is genius. The platform says that while it appreciates everyone's enthusiasm in sending invitations, its current policy is that it cannot accommodate heads of state joining the beta yet. So whoever invited Obama, sorry. "Not yet" is the platform's rule.

Alex Stamos:

Yeah, so this definitely has the benefit of never having been tried before. It is a new one among the solutions: you've had explicitly banning Nazis and explicitly banning disinformation actors, but explicitly banning heads of state is a new one for me. I don't think it's dumb. I think it's smart, and it is good that Bluesky is starting to recognize that they shipped too early. You can start to see the dawning realization of, "Oh my God, what did we get ourselves into by going live before we had our policies in place, a trust and safety team in place?" Probably the basic technical plumbing too; they didn't have blocking in place and such. So yes, I think it is a totally reasonable thing. It probably has to do with basic scaling issues as well: right now, nobody on Bluesky has a gazillion followers.

And having to deal with a handful of people who have a double-digit percentage of your entire network subscribing to them is always an interesting, difficult technical challenge for companies. In some cases, they have to build completely different data paths for how they handle those people, and a bunch of things break. My favorite example of that from Facebook: there was a period of time in which you could not block Mark Zuckerberg on Facebook anymore. And so there were all these conspiracy theories of, "Oh my God, Mark Zuckerberg has changed the rules so that he's the only person you can't block." No, it turns out Mark Zuckerberg was the most followed person on Facebook, had the most friends, and his account would break everything all the time. In this case, there was an in-memory data structure that had a very hard limit.

This in-memory data structure was kept up to date across several million computers at once, which is a challenging computer science problem in itself. It was what tracked who is blocked by whom, and based upon its size limit, you could no longer list all of the people who had blocked Mark Zuckerberg. Fixing the problem required a huge amount of technical plumbing to be changed, and effectively a reboot of almost every computer at Facebook: not a reboot of the whole machine, but a restart of the application servers on all those systems. And that was fine, but you run into this problem over and over again: if you have asymmetric accounts that push these structures to their limits, it becomes really hard from a technical perspective, and it is smart for them to try to push that off a little bit.
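To make the failure mode concrete, here is a toy illustration of the kind of hard-capped, replicated "who blocks whom" structure Alex describes. This is not Facebook's actual design; the cap, names, and layout are invented for illustration:

```python
# Toy illustration of the failure mode described above: an in-memory
# "who blocks whom" index with a hard per-account capacity, replicated
# across many servers. Not Facebook's actual design; the cap and
# layout are hypothetical.
MAX_BLOCKERS_PER_ACCOUNT = 1_000_000  # hypothetical hard limit

class BlockIndex:
    def __init__(self) -> None:
        # account_id -> set of user ids who have blocked that account
        self._blockers: dict[int, set[int]] = {}

    def add_block(self, blocker: int, target: int) -> None:
        entry = self._blockers.setdefault(target, set())
        if len(entry) >= MAX_BLOCKERS_PER_ACCOUNT:
            # A mega-account (the "Zuckerberg problem") hits the cap:
            # new blocks fail until the structure is resized, which in
            # a replicated deployment means restarting the application
            # servers that hold copies of this index.
            raise OverflowError(f"block list full for account {target}")
        entry.add(blocker)

    def is_blocked(self, blocker: int, target: int) -> bool:
        return blocker in self._blockers.get(target, set())
```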

Evelyn Douek:

That's fantastic. Although, when the day comes that we are all compelled to follow Elon Musk and find ourselves unable to block him, and they say, "Oh no, sorry, we promise it's a technical problem," I don't know that I'll believe them. It looks like that day is coming into view at this rate. And that's all I had for this week. Any highs, lows? How's your sporting emotional journey this week, Alex?

Alex Stamos:

It's not going great, unfortunately, because the Kings are out. I now have to root for the Warriors, which is a great team to have as your bandwagon team; you might as well have one that wins all the time. But unfortunately, they're not winning right now. The Lakers, who I absolutely despise. Hate the Lakers. They cheated to beat the Kings back in 2002. They called Sacramento a cow town, leading to everybody bringing cowbells to every Kings game against the Lakers. Unfortunately, the Lakers are now leading two to one and are looking really good for this series. So I'm really hoping that the Warriors pull this one out. But yes, there's your sports update for the people who listen to this podcast but don't care about sports enough to actually watch the games on ESPN or get their sports news by effectively any other mechanism.

Evelyn Douek:

That's right. This is a hundred percent of my sports news intake for the week. It happens in our closet on Monday mornings. It's great. I get the emotional journey compressed into a very small timeframe, and I find it useful. Thank you.

Alex Stamos:

Right. Effectively, we'll turn this segment into Alex helping Evelyn have sportsball things that she can talk about with other people on campus. She's in America now. She has to make small talk with people, and now she knows about the Warriors and the Lakers.

Evelyn Douek:

Right, so how about those Lakers, huh? That's going to get me through this week.

Alex Stamos:

No, the other one. No, you're doing the wrong one. You're a Warriors fan. Okay. We're just going to have to do an entire ... I'm going to have to make up a grid of who you like in every single-

Evelyn Douek:

Every single permutation. Well, that's right.

Alex Stamos:

Yeah. So you can memorize that.

Evelyn Douek:

Something I can study, because that is the way I relate to people: by studying grids. It gives me small talk. All right. And with that, this has been your Moderated Content weekly update. This show is available in all the usual places, including Apple Podcasts and Spotify. Show notes are available at law.stanford.edu/moderatedcontent. This episode wouldn't be possible without the research and editorial assistance of John Perrino, and is produced by the wonderful Brian Pelletier. Special thanks also to Justin Fu and Rob Huffman. And how about giving us a non-two-star rating this week, listeners? That'd be great.